| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,640,310,208 | rust | cargo-rustdoc: `error: too many file operands` when experimental feature `-w` is passed without parameters | When I try to run this command: `cargo +nightly rustdoc -- -Z unstable-options -w`
Rustdoc returns this error:
```
error: too many file operands
```
This confuses me: I didn't pass any file operands, so how can there be too many of them?
We should emit a clearer error, something like:
```
error: no output format given after `-w`
```
| T-rustdoc,A-diagnostics,C-bug,requires-nightly,D-incorrect,A-CLI | low | Critical |
2,640,324,207 | ollama | Response returns 'null' for 'finish_reason' | ### What is the issue?
I'm using the OpenAI .NET library to connect to Ollama, with the default llama3.2 model. I get an "Unknown ChatFinishReason value." error from the library. The code in `ChatFinishReasonExtensions` (from the OpenAI library) shows that the value returned by Ollama is null.

The finish reason should apparently never be null. Note that this only happens for requests that time out; in normal use, the value 'stop' is returned and parsed correctly.
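Until a server-side fix lands, a defensive client-side mapping can avoid the exception. This is an illustrative Python sketch with hypothetical names, not part of the OpenAI .NET library:

```python
def parse_finish_reason(value):
    # Treat a missing/null finish_reason (seen on timed-out requests) as a
    # sentinel instead of raising "Unknown ChatFinishReason value."
    known = {"stop", "length", "tool_calls", "content_filter"}
    return value if value in known else "unknown"
```

A real fix still belongs in Ollama, since the field should never be null.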
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
ollama version is 0.3.14 | bug,api | low | Critical |
2,640,325,329 | kubernetes | Need a migration plan for ExecProbeTimeout | We still need a migration plan for this. I'm in a time crunch for code freeze right now, but after v1.32 code freeze I will put together a plan and propose a timeline for removing this gate. As is, we need more of a heads up before removing this.
_Originally posted by @tallclair in https://github.com/kubernetes/kubernetes/issues/127995#issuecomment-2420075810_
KEP: https://github.com/kubernetes/enhancements/issues/1972
An attempt (it requires the migration plan to be done): https://github.com/kubernetes/kubernetes/issues/127995
| sig/node,lifecycle/frozen,needs-triage | low | Minor |
2,640,327,888 | react | Bug: eslint-plugin-react-hooks@5.0.0 only detects english component names | React version: eslint-plugin-react-hooks@5.0.0
## Steps To Reproduce
1. Create a functional component with a name such as `ÄndraVärde` that starts with a non-English uppercase letter
2. Run the linter
## The current behavior
Sample error:
> 23:20 error React Hook "useSelectedState" is called in function "ÄndraVärde" that is neither a React function component nor a custom React Hook function. React component names must start with an uppercase letter. React Hook names must start with the word "use" react-hooks/rules-of-hooks
## The expected behavior
The linter should allow non-English component names, as React itself does.
## The problem
Version 5.0.0 included the changes made in #25162 which modified the following method:
https://github.com/facebook/react/blob/e1378902bbb322aa1fe1953780f4b2b5f80d26b1/packages/eslint-plugin-react-hooks/src/RulesOfHooks.js#L43-L50
This code only allows the English uppercase letters `A-Z`, which is not enough.
## Proposed solution
Use `.toUpperCase()` and compare the result:
```js
function isComponentName(node) {
  return node.type === 'Identifier' && node.name[0] === node.name[0].toUpperCase();
}
```
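For illustration only (in Python rather than the plugin's JavaScript), the same first-character comparison accepts non-English uppercase initials. Note that it also accepts non-letter initials such as `_`, which a real fix may want to handle explicitly:

```python
def is_component_name(name: str) -> bool:
    # First character equals its own uppercase form, mirroring the proposal
    return bool(name) and name[0] == name[0].upper()

# Non-English uppercase initials pass; lowercase initials are still rejected.
print(is_component_name("ÄndraVärde"))   # accepted
print(is_component_name("ändraVärde"))   # rejected
```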
This should at least work with many more languages. | Status: Unconfirmed | low | Critical |
2,640,343,398 | tauri | [bug] Error loading docs.google.com in a webview | ### Describe the bug
I'm trying to use the [Google Docs creation page](https://docs.google.com/document/u/0/create) within a Tauri webview. While the Google login challenge completes successfully, the Google Docs page fails to load afterward, showing this error message:

We believe the issue might be due to Google Docs attempting to download and install an extension to enable collaborative editing, which may not be permitted by default in Tauri's webview environment.
Is there a configuration or workaround we can add to `tauri::WebviewBuilder` to address this loading issue?
### Reproduction
[This repository](https://github.com/sebhernandezr/googledocs-tauri) contains a basic Tauri app that opens a single webview pointing to `https://docs.google.com/document/u/0/create`. Upon launching the app, you’ll be prompted to log in with your Google account. After logging in, the webview attempts to load the Google Docs editor but fails to fully display the page.
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.6.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.13.1
- pnpm: 9.1.2
- yarn: 1.22.22
- npm: 10.5.2
[-] Packages
- tauri 🦀: 2.0.6
- tauri-build 🦀: 2.0.2
- wry 🦀: 0.46.3
- tao 🦀: 0.30.3
- tauri-cli 🦀: 2.0.2
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.0.2 (outdated, latest: 2.0.4)
[-] Plugins
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : not installed!
- tauri-plugin-store 🦀: 2.0.1
- @tauri-apps/plugin-store : not installed!
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../../dist/apps/exam-web
- devUrl: http://localhost:3000/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
[Discord post](https://discord.com/channels/616186924390023171/1304008335883763712) | type: bug,help wanted,status: needs triage | low | Critical |
2,640,362,693 | pytorch | Support AC with graph break | ### 🐛 Describe the bug
As mentioned in this [blog](https://dev-discuss.pytorch.org/t/higher-order-operators-2023-10/1565), HigherOrderOperator does not support a graph break inside the input/output function, because Dynamo cannot determine whether the operations inside the function are "safe". However, in most cases a graph break within the function is safe and acceptable. Is there a way to make Dynamo support these cases without falling back to eager mode? For example, this scenario frequently occurs in LLM models:
```python
import torch
import torch.nn as nn
import flash_attn

class Attention(nn.Module):
    def __init__(self):
        super().__init__()
        self.q_proj = nn.Linear(4096, 4096)
        self.k_proj = nn.Linear(4096, 4096)
        self.v_proj = nn.Linear(4096, 4096)
        self.o_proj = nn.Linear(4096, 4096)

    def forward(self, x):
        def attn_forward(x):
            q = self.q_proj(x)
            k = self.k_proj(x)
            v = self.v_proj(x)
            q = q.view(x.shape[0], x.shape[1], 64, 64).transpose(1, 2)
            k = k.view(x.shape[0], x.shape[1], 64, 64).transpose(1, 2)
            v = v.view(x.shape[0], x.shape[1], 64, 64).transpose(1, 2)
            return flash_attn.flash_attn_func(q, k, v)
        return torch.utils.checkpoint.checkpoint(attn_forward, x, use_reentrant=False)

model = Attention()
model.cuda()

def custom_backend(gm, example_inputs):
    print(gm)
    return gm

opt_model = torch.compile(model, backend=custom_backend)
x = torch.randn(1, 32, 4096).cuda()
with torch.autocast("cuda", dtype=torch.bfloat16):
    out = opt_model(x)
```
In LLM models, flash attention is often used along with activation checkpointing, which can cause the entire attention or decoder layer execution to fall back to eager mode.
Is there a way to make Dynamo support graph breaking within HigherOrderOperator, such as providing some APIs to mark these functions as safe?
I know that we can use PyTorch's SDPA (Scaled Dot-Product Attention) op as a replacement, but our users typically hard-code the use of flash attention in their code. I hope there is a way to achieve this without requiring significant changes to the user's code.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
PyTorch version: 2.5.0a0+git24dee99
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-2ubuntu1~20.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0] (64-bit runtime)
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0a0+git417a076
[pip3] torchvision==0.18.0
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @bdhirsh @yf225 | triaged,enhancement,oncall: pt2,module: dynamo,module: higher order operators,module: pt2-dispatcher,activation-checkpointing | low | Critical |
2,640,376,345 | pytorch | PyTorch defaults to using libuv but is built without support for it on Windows | ### 🐛 Describe the bug
```
File "C:\hostedtoolcache\windows\Python\3.12.7\x64\Lib\site-packages\torch\distributed\rendezvous.py", line 189, in _create_c10d_store
return TCPStore(
^^^^^^^^^
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
```
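A possible workaround sketch (an assumption, not a confirmed fix for every build): recent PyTorch releases consult the `USE_LIBUV` environment variable for the TCPStore rendezvous, so disabling it before the process group initializes may avoid the error:

```python
import os

# Ask the TCPStore rendezvous not to use libuv; this must be set before
# torch.distributed.init_process_group() runs.
os.environ["USE_LIBUV"] = "0"
```

Whether this env var is honored depends on the exact PyTorch build, so treat it as a diagnostic step rather than a guaranteed fix.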
### Versions
PyTorch 2.5
cc @seemethere @malfet @osalpekar @atalman @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: binaries,oncall: distributed,module: windows | low | Critical |
2,640,423,467 | tauri | [bug] documentPictureInPicture window does not apply appropriate display-mode: picture-in-picture CSS media tag | ### Describe the bug
As the title says, it seems that Tauri on Windows does not render the appropriate `@media (display-mode: picture-in-picture)` CSS rules correctly for documentPictureInPicture pop-outs. It works in both Edge and Chrome on Windows [according to the spec](https://developer.mozilla.org/en-US/docs/Web/API/Document_Picture-in-Picture_API/Using#target_styles_when_in_picture-in-picture_mode), but for some reason Tauri does not load or render the CSS rules correctly.
### Reproduction
Run the example from MDN in a Tauri app on Windows: https://mdn.github.io/dom-examples/document-picture-in-picture/
A PiP window will open correctly, but it won't show the "mute" button according to the CSS rules. An even simpler way to reproduce would be opening a documentPiP window with the following CSS code:
```css
@media (display-mode: picture-in-picture) {
body {
background: red;
}
}
```
With that, any window.documentPictureInPicture window should render with a red background, but on Tauri it does not.
### Expected behavior
The PiP window should render all `@media (display-mode: picture-in-picture)` rules, like it does on Chrome & Edge on Windows.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 130.0.2849.56
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.15.0
- npm: 10.7.0
[-] Packages
- tauri 🦀: 2.0.1
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.44.1
- tao 🦀: 0.30.3
- @tauri-apps/api : 2.0.3
- @tauri-apps/cli : 2.0.5
[-] Plugins
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-shell 🦀: 2.0.1
- @tauri-apps/plugin-shell : not installed!
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
- tauri-plugin-window-state 🦀: 2.0.1
- @tauri-apps/plugin-window-state : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:8080/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,640,447,479 | pytorch | DISABLED test_aot_export_with_torch_cond (__main__.TestAOTExport) | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aot_export_with_torch_cond&suite=TestAOTExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32636076771).
Over the past 3 hours, it has been determined flaky in 15 workflow(s) with 30 failures and 15 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aot_export_with_torch_cond`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/functorch/test_aotdispatch.py", line 4906, in test_aot_export_with_torch_cond
self.assertExpectedInline(
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/testing/_internal/common_utils.py", line 3003, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/expecttest/__init__.py", line 351, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/expecttest/__init__.py", line 316, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/expecttest/__init__.py", line 388, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.11/lib/python3.11/unittest/case.py", line 1253, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.11/lib/python3.11/unittest/case.py", line 703, in fail
raise self.failureException(msg)
AssertionError: 'def [284 chars]rg0_1]); gt = true_graph_0 = false_graph_0 = [188 chars]d_1)' != 'def [284 chars]rg0_1, 3, 4]); gt = true_graph_0 = false_grap[194 chars]d_1)'
def forward(self, arg0_1):
sum_1 = torch.ops.aten.sum.default(arg0_1)
gt = torch.ops.aten.gt.Scalar(sum_1, 4); sum_1 = None
true_graph_0 = self.true_graph_0
false_graph_0 = self.false_graph_0
- cond = torch.ops.higher_order.cond(gt, true_graph_0, false_graph_0, [arg0_1]); gt = true_graph_0 = false_graph_0 = arg0_1 = None
+ cond = torch.ops.higher_order.cond(gt, true_graph_0, false_graph_0, [arg0_1, 3, 4]); gt = true_graph_0 = false_graph_0 = arg0_1 = None
? ++++++
getitem = cond[0]; cond = None
add = torch.ops.aten.add.Tensor(getitem, 3)
add_1 = torch.ops.aten.add.Tensor(getitem, 4); getitem = None
return (add, add_1) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_CROSSREF=1 python test/functorch/test_aotdispatch.py TestAOTExport.test_aot_export_with_torch_cond
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `functorch/test_aotdispatch.py`
cc @clee2000 @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 | triaged,module: flaky-tests,skipped,oncall: pt2,module: higher order operators,module: pt2-dispatcher | medium | Critical |
2,640,465,180 | vscode | node debugger also connects to externally executed node scripts | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
When I'm running a VS Code node application in debug mode, it always also connects to every node script that I call via execSync.
There are cases where this should not happen, e.g. when I run "npm install" on a different project, it sometimes fails when in debug mode. It took me quite a while to figure this out, and from what I have found on Google, no one else has noticed that this might be the root cause of their errors.
For my own project I have a workaround by unsetting NODE_OPTIONS in the process.env variable before calling the external node script.
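The workaround reads roughly like this (a Python illustration of the same idea; with Node you would delete `NODE_OPTIONS` from the env object passed to `execSync`):

```python
import os
import subprocess
import sys

# Spawn the child with NODE_OPTIONS stripped so the debugger's auto-attach
# hook is not inherited by externally executed scripts.
child_env = {k: v for k, v in os.environ.items() if k != "NODE_OPTIONS"}
result = subprocess.run(
    [sys.executable, "-c", "import os; print('NODE_OPTIONS' in os.environ)"],
    env=child_env, capture_output=True, text=True,
)
```

A built-in VS Code switch would make this per-call env surgery unnecessary.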
Could there be a switch in VS Code with which this could be generally turned on or off?
Thanks! | feature-request,debug | low | Critical |
2,640,511,203 | pytorch | torch.argmax slow on CPU | ### 🐛 Describe the bug
torch.argmax is slow on CPU relative to the numpy implementation. I would expect these two to be approximately equally fast. Here is a small script to reproduce this:
```python
import torch
import numpy as np
from time import time

if __name__ == '__main__':
    torch.set_num_threads(1)

    tmp = np.random.random((2, 256, 256, 256))
    tmp_torch = torch.tensor(tmp)

    start = time()
    _ = np.argmax(tmp, 0)
    print(f'numpy:\t\t {time() - start}')

    start = time()
    _ = torch.argmax(tmp_torch, 0)
    print(f'torch:\t\t {time() - start}')

    tmp_torch = tmp_torch.to(torch.device('cuda'))
    _ = torch.argmax(tmp_torch, 0)  # warm-up
    start = time()
    _ = torch.argmax(tmp_torch, 0)
    print(f'torch (CUDA):\t {time() - start}')

    tmp = np.random.random((2, 256, 256))
    tmp_torch = torch.tensor(tmp)

    start = time()
    for _ in range(250):
        _ = np.argmax(tmp, 0)
    print(f'numpy:\t\t {time() - start}')

    start = time()
    for _ in range(250):
        _ = torch.argmax(tmp_torch, 0)
    print(f'torch:\t\t {time() - start}')

    tmp_torch = tmp_torch.to(torch.device('cuda'))
    _ = torch.argmax(tmp_torch, 0)  # warm-up
    start = time()
    for _ in range(250):
        _ = torch.argmax(tmp_torch, 0)
    print(f'torch (CUDA):\t {time() - start}')
```
> numpy: 0.21234750747680664
> torch: 1.553704023361206
> torch (CUDA): 2.7418136596679688e-05
> numpy: 0.17934107780456543
> torch: 1.4539289474487305
> torch (CUDA): 0.0014312267303466797
The numpy implementation is about 7x faster (while also using just one thread).
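As a methodological aside (illustrative only, and not a change to the results above), `time.perf_counter()` is a monotonic, high-resolution clock and is generally preferred over `time.time()` for micro-benchmarks like this:

```python
from time import perf_counter

start = perf_counter()
total = sum(range(1_000_000))  # stand-in workload
elapsed = perf_counter() - start
# perf_counter() cannot go backwards, unlike a wall clock adjusted by NTP.
```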
Best,
Fabian
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800X3D 8-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 4548,8281
CPU min MHz: 2200,0000
BogoMIPS: 6800.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 96 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fft-conv-pytorch==1.2.0
[pip3] numpy==2.0.2
[pip3] numpydoc==1.8.0
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] fft-conv-pytorch 1.2.0 pypi_0 pypi
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 2.0.2 pypi_0 pypi
[conda] numpydoc 1.8.0 pypi_0 pypi
[conda] pytorch 2.5.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.0 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.0 py312_cu124 pytorch
cc @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: performance,module: cpu,triaged | low | Critical |
2,640,528,471 | ollama | ollama runner process has terminated: exit status 127 | ### What is the issue?
Ollama reports this error when running any model. The install was upgraded in place by extracting the ollama-linux-amd64.tgz archive directly over version 0.3.14.
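For context, exit status 127 is the shell's conventional "command not found" code, which after an in-place upgrade often points at a missing or incompatible runner binary or shared library rather than a problem with the model itself:

```python
import subprocess

# A POSIX shell returns 127 when the requested command cannot be found,
# which is the same status the ollama runner process reported here.
proc = subprocess.run(
    ["sh", "-c", "definitely-not-a-real-command-xyz"], capture_output=True
)
```

Checking that the new runner binaries were fully extracted (and that their shared-library dependencies resolve, e.g. via `ldd`) is a reasonable first diagnostic step.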
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0 | bug,linux,needs more info | low | Critical |
2,640,548,333 | kubernetes | kubelet evented panic when use generic pleg relisting | ### What happened?
The kubelet Evented PLEG panics when generic PLEG relisting is in use; the log looks like:
```
Oct 21 06:03:47 kubelet[1530]: E1021 06:03:47.241522 1530 remote_runtime.go:550] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
Oct 21 06:03:48 kubelet[1530]: E1021 06:03:48.844680 1530 remote_runtime.go:550] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
Oct 21 06:03:48 kubelet[1530]: E1021 06:03:48.844719 1530 resource_metrics.go:118] "Error getting summary for resourceMetric prometheus endpoint" err="failed to list pod stats: failed to list all containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Oct 21 06:03:51 kubelet[1530]: E1021 06:03:51.478833 1530 evented.go:356] "Evented PLEG: Get cache" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podID=3c55a6f7-4909-4cc8-a48c-c534a6b94304
Oct 21 06:03:53 kubelet[1530]: E1021 06:03:51.580766 1530 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
Oct 21 06:03:53 kubelet[1530]: goroutine 369 [running]:
Oct 21 06:03:53 kubelet[1530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x43ea9e0, 0x77e53c0)
Oct 21 06:03:53 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
Oct 21 06:03:53 kubelet[1530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
Oct 21 06:03:53 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
Oct 21 06:03:53 kubelet[1530]: panic(0x43ea9e0, 0x77e53c0)
Oct 21 06:03:53 kubelet[1530]: /usr/local/go/src/runtime/panic.go:965 +0x1b9
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/pkg/kubelet/pleg.(*EventedPLEG).updateRunningPodMetric(0xc000a18a00, 0xc0011af950)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go:359 +0x7d
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/pkg/kubelet/pleg.(*EventedPLEG).processCRIEvents(0xc000a18a00, 0xc0010ba840)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go:251 +0x3b2
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/pkg/kubelet/pleg.(*EventedPLEG).watchEventsChannel(0xc000a18a00)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go:206 +0xec
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00181e0a0)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00181e0a0, 0x53f5b20, 0xc0014981e0, 0x1, 0xc00212a120)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00181e0a0, 0x0, 0x0, 0x1, 0xc00212a120)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
Oct 21 06:03:54 kubelet[1530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00181e0a0, 0x0, 0xc00212a120)
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
Oct 21 06:03:54 kubelet[1530]: created by k8s.io/kubernetes/pkg/kubelet/pleg.(*EventedPLEG).Start
Oct 21 06:03:54 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go:124 +0x148
Oct 21 06:04:48 kubelet[1530]: panic: send on closed channel
Oct 21 06:04:48 kubelet[1530]: goroutine 403 [running]:
Oct 21 06:04:49 kubelet[1530]: k8s.io/kubernetes/pkg/kubelet/cri/remote.(*remoteRuntimeService).GetContainerEvents(0xc00081ea20, 0xc0010ba840, 0x0, 0x1)
Oct 21 06:04:49 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:1230 +0x198
Oct 21 06:04:49 kubelet[1530]: k8s.io/kubernetes/pkg/kubelet/pleg.(*EventedPLEG).watchEventsChannel.func1(0xc000a18a00, 0xc0010ba840)
Oct 21 06:04:49 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go:195 +0x69
Oct 21 06:04:49 kubelet[1530]: created by k8s.io/kubernetes/pkg/kubelet/pleg.(*EventedPLEG).watchEventsChannel
Oct 21 06:04:49 kubelet[1530]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go:180 +0xac
Oct 21 06:04:48 systemd[1]: kubelet.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 21 06:04:48 systemd[1]: Unit kubelet.service entered failed state.
Oct 21 06:04:48 systemd[1]: kubelet.service failed.
```
### What did you expect to happen?
kubelet not panic
### How can we reproduce it (as minimally and precisely as possible)?
It's hard to reproduce the "context deadline exceeded" RPC timeout, so I added some code to the kubelet to construct it.
1. add the patch for kubelet
```
pkg/kubelet/cri/remote/remote_runtime.go | 8 ++++++++
pkg/kubelet/kuberuntime/kuberuntime_manager.go | 5 +++++
2 files changed, 13 insertions(+)
diff --git a/pkg/kubelet/cri/remote/remote_runtime.go b/pkg/kubelet/cri/remote/remote_runtime.go
index 424824467b5..d48d6cffb7a 100644
--- a/pkg/kubelet/cri/remote/remote_runtime.go
+++ b/pkg/kubelet/cri/remote/remote_runtime.go
@@ -1206,6 +1206,7 @@ func (r *remoteRuntimeService) CheckpointContainer(options *runtimeapi.Checkpoin
return nil
}
+var eventErrored bool = false
func (r *remoteRuntimeService) GetContainerEvents(containerEventsCh chan *runtimeapi.ContainerEventResponse) error {
containerEventsStreamingClient, err := r.runtimeClient.GetContainerEvents(context.Background(), &runtimeapi.GetEventsRequest{})
if err != nil {
@@ -1227,6 +1228,13 @@ func (r *remoteRuntimeService) GetContainerEvents(containerEventsCh chan *runtim
return err
}
if resp != nil {
+ if !eventErrored {
+ if resp.PodSandboxStatus != nil && resp.PodSandboxStatus.Metadata != nil &&
+ resp.PodSandboxStatus.Metadata.Name == "centos7-pod" {
+ eventErrored = true
+ return errors.New("test error")
+ }
+ }
containerEventsCh <- resp
klog.V(4).InfoS("container event received", "resp", resp)
}
diff --git a/pkg/kubelet/kuberuntime/kuberuntime_manager.go b/pkg/kubelet/kuberuntime/kuberuntime_manager.go
index 1c5e545fa9d..f10263c43ce 100644
--- a/pkg/kubelet/kuberuntime/kuberuntime_manager.go
+++ b/pkg/kubelet/kuberuntime/kuberuntime_manager.go
@@ -1040,6 +1040,7 @@ func (m *kubeGenericRuntimeManager) GeneratePodStatus(event *runtimeapi.Containe
}, nil
}
+var errorGetPod bool = false
// GetPodStatus retrieves the status of the pod, including the
// information of all containers in the pod that are visible in Runtime.
func (m *kubeGenericRuntimeManager) GetPodStatus(uid kubetypes.UID, name, namespace string) (*kubecontainer.PodStatus, error) {
@@ -1073,6 +1074,10 @@ func (m *kubeGenericRuntimeManager) GetPodStatus(uid kubetypes.UID, name, namesp
klog.V(4).InfoS("getSandboxIDByPodUID got sandbox IDs for pod", "podSandboxID", podSandboxIDs, "pod", klog.KObj(pod))
+ if pod.Name == "centos7-pod" && errorGetPod == false {
+ errorGetPod = true
+ return nil, errors.New("test2 error")
+ }
sandboxStatuses := []*runtimeapi.PodSandboxStatus{}
containerStatuses := []*kubecontainer.Status{}
timestamp := time.Now()
--
2.39.3 (Apple Git-146)
```
2. Create a pod; the kubelet then panics.
### Anything else we need to know?
_No response_
### Kubernetes version
kubelet 1.21 with latest evented pleg.
### Cloud provider
None
### OS version
<details>
</details>
Centos7
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
containerd 1.7
</details>
| kind/bug,sig/node,triage/needs-information,needs-triage | low | Critical |
2,640,556,716 | ollama | Unable to load images from network fileshares on Windows | ### What is the issue?
Using Ollama on Windows via the terminal, if you ask a question and reference an image on a network file share, it will respond that it cannot see the photo. If you copy the image locally and then reference the local copy, it has no problem analysing the image.
Paths starting with \\ will not load the image.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.0 | bug,windows | low | Minor |
2,640,557,322 | tailwindcss | Invalid css selector generated | ### Discussed in https://github.com/tailwindlabs/tailwindcss/discussions/14901
<div type='discussions-op-text'>
<sup>Originally posted by **kdy1** November 7, 2024</sup>
**What version of Tailwind CSS are you using?**
v3.4.14
**What build tool (or framework if it abstracts the build tool) are you using?**
Next.js + turbopack, but reproducible on playground.
**What version of Node.js are you using?**
Playground
**What browser are you using?**
Chrome
**What operating system are you using?**
macOS
**Reproduction URL**
https://play.tailwindcss.com/obgZakKyiO?file=css
**Describe your issue**
Generated CSS file is
```css
p.my-4 {
margin: 0px;
}
.\[\&_\:is\(p\)\]\:my-4 :is(p)p {
margin: 0px;
}
```
where `:is(p)p` is invalid</div>
---
Originally opened here, but converted it too soon: https://github.com/tailwindlabs/tailwindcss/issues/14900 | v3 | low | Minor |
2,640,569,907 | three.js | CSM Improvement: Calculate the best fit frustum for shadow map | ### Description
So far, we use `OffsetCSMShadowNode` to get the CSM effect in a scene. We can pass some parameters to this node when constructing it. In the file https://github.com/mrdoob/three.js/blob/dev/examples/jsm/csm/CSMShadowNode.js#L369, `OffsetCSMShadowNode` updates the transform of each internal light every frame, but the frustum values (`.left`, `.top`, `.far`, ...) of the shadow map camera are not updated; they are cloned from the main light at the beginning.
### Solution
Because the frustum of each shadow camera may be bigger or smaller than the view frustum, this can waste shadow map resolution or lose some shadow area. I think we should calculate the best-fit frustum for the shadow map automatically in `three.js`.
I've tested a way to get it; some code like this: https://github.com/ligaofeng0901/three.js/commit/2a00ba9ca5e7f327927afbdd360a27c14c7ef3f6
It can compute a tighter box for the shadow camera and get better shadow quality.
<img width="729" alt="490bc465ca3c3e5af64ab1af94ab1997" src="https://github.com/user-attachments/assets/2e0b3308-2b03-41ca-a0ac-f00bbe810e7c">
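The idea in the linked commit can be sketched roughly as follows (a hypothetical simplification, not three.js code; `tightShadowBounds` and the plain tuple math are stand-ins for three.js's `Vector3`/`Matrix4` work):

```typescript
// Hedged sketch (not three.js's actual implementation): a tight orthographic
// box for a shadow camera can be found by transforming the view frustum's
// eight corners into light space and taking their axis-aligned bounds.
type Vec3 = [number, number, number];

function tightShadowBounds(corners: Vec3[], toLightSpace: (p: Vec3) => Vec3) {
  const min: Vec3 = [Infinity, Infinity, Infinity];
  const max: Vec3 = [-Infinity, -Infinity, -Infinity];
  for (const c of corners) {
    const p = toLightSpace(c);
    for (let i = 0; i < 3; i++) {
      min[i] = Math.min(min[i], p[i]);
      max[i] = Math.max(max[i], p[i]);
    }
  }
  // Values for an OrthographicCamera-style shadow camera; in light space the
  // camera looks down -z, so near/far come from the negated z extent.
  return { left: min[0], right: max[0], bottom: min[1], top: max[1], near: -max[2], far: -min[2] };
}
```

As the "Alternatives" section notes, a real implementation would still need to pad these bounds rather than use the tight box directly.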
### Alternatives
But more situations should be considered: the box should be a little bigger than the tight one, but the margin of each box should not be a constant. Also, if `.fade` is true, some blank areas will appear.
### Additional context
_No response_ | Enhancement | low | Minor |
2,640,573,393 | angular | feature: measure rendering time & notify users when they are rendering too much [using rendering budget] | Idea: Check when the last requestAnimationFrame has no more work to do and if it's more than the rendering budget just warn dev. | area: core | low | Minor |
2,640,583,199 | TypeScript | Type Inference Fails with Generic Types | ### 🔎 Search Terms
error recursive generic types
### 🕗 Version & Regression Information
- This changed between versions v4.1.5 and v5.7.0-beta
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.3#code/C4TwDgpgBAQghgZwgSWBAtgHgCpQLxRwB2IAfPlAN5QCWAJgFxS4C+AUGzUWgE4BmcAMbQAykThgEACwD2wHFAgAPNEToJYiFGizJyBeElQYANFGQViZKmyi1GUBMB5cA5idtRgNdBCZEAV3QAIwgeDztBGQAbaIhBbxkiADk4XyYnFyJ3TxodBCZsAG0AXQ92NiiiJ0cIOISZHiYAMWi4YABFALCQTDEJaTlMQ20MUn0qKCjY+MSUtL8oAHJEOj4lqHYAei2oAD09th2oZJlFHh5GqEBQcg4+AKIEmiSvCCdMY3RscGhlVXVNEYdB86N9IKQzMhQT9LCRSAAKACUNjsoEg5h0AGkICANAQACJyU7AdrPIjY3F9cSSWTyT5giCQ6Hg0ieY4HNm7ABE9C5UAAPlAud5fHzBVzpvU5qlRQKhXkMAgxVAAAYK9AIAB0ABJKJk3CwVZ5PGjoABhGJSskyiAMiitdqqTAAcQg8n6NKG9J+TIZEOWktm1oWS3GnP2hzsxy5+uyXI4kSSNVcECIYRoghEdVmjRabU63R4vQ9gzpOgZvp+4wo1EDTySNqYKwQaw27Cjuw5HagAFELldADLkhRhS1rlqDDYWGWcbgA3JsNjQNEQ5IQEAgaK5xME4l4zqblg6Cz0qQNaR9yz7zMyIOMlpq2BVjgBVbzRPIgLww4BnQRSeIANa0HwhBfuiS6EEQhAXHAn5XDIwQAFazA+B7IAgADySGzDgEy4H8qYAghyEJFAAD8UDwvhKiERoACCMG9FY5AUQI0RIFATDON0yJMGxSAcMcABK8QBDwG4AG7QAeP5QCmaY8O00B0HIAC0K4knMUAATiCBsAehLAMSpJJBSCC4TWnhFJitBQTpIAyCBuAAGSUbGrhyoEIRhIiJQMJ4xSYiUig0Wo9GMZgXB8GEUDPqy3bICBeRLBoxDQYpIBmFwgjRAEdDQMEchSF4-62flSiQXQJXQNqUAAO40NEdCCHAPB0J4dgUSquqYoacrdZQvU6iq-U9SwOqUIZxlzGZmBxYa4aJbQwApZBUDEbMZg8KJ4nQFwsl5B1nHmJh2EJDg1klOQBFhV4PDdEdnWqmNI2CgNQ26lNcgmeSukXUFpALXYwPHe9fXHCMUxaEwABSAQ1NtwBiVBwClfZj5FPZjnMFArnwu5nlBKEPC+RwB4iGA77uiF-waO5Zj4nUPgKjwNO0Y4M7ZPongiGzt0DVFMUABIQHAdAsJ9TPoCzEuUILrPYHAjVA8DFFFCLYtmJq2sU1TOBK9EDNSyzpAlEdTBFCIZv6TCrrAAyMAgAACq1wDmdgZguzwbt8wC21i0k0Sfu5pTc3YXs+zdAJFPLUAa3QWva7HiuNWbIMUfHvsaFjTmPeRUB2w7zuu+76ui3QZS9ioinnSnhsc1krih-FIPHZnUcaEs2pLHnT3UbTUAMRlkVENFrNxb3qsF26RcR+Zz5mD21dCPIddmCHV0t63wP+BAUk8JPu-7+bzBk7bbo4J77TFR3DduGH0-2z8jtz5fUC63kmAu6jZj3qGrI23REeJ02A8JZ0giAUo+digAAZgqFDPkA-MXQTygMsnYIoTtbJQC+ppMks1QElDInmR0qYXQXw9lAJ2YZ2BAA
### 💻 Code
```ts
type BaseItem<T = any> = { id: T }
interface Snapshot<T extends BaseItem<I> = BaseItem, I = any> {
id: string,
time: number,
collectionName: string,
items: T[],
}
const selector: FlatQuery<Snapshot<BaseItem>> = { collectionName: 'asdf' }
// ^^
// No error ✅
function test<ItemType extends BaseItem<IdType>, IdType = any>() {
type ItemKeys = DotNotationKeys<Snapshot<ItemType, IdType>>
// ^^
// "id" | "time" | "collectionName" | "items" | `items.${string}`
type CollectionNameType = Flatten<Get<Snapshot<ItemType, IdType>, 'collectionName'>>
// ^^
// "string"
const genericSelector: FlatQuery<Snapshot<ItemType, IdType>> = { collectionName: 'asdf' }
// ^^
// Error ❌: Type '{ collectionName: string; }' is not assignable to type 'FlatQuery<Snapshot<ItemType, IdType>>'.
}
// Utility type to check if a type is an array or object.
type IsObject<T> = T extends object ? (T extends Array<any> ? false : true) : false
// Recursive type to generate dot-notation keys
type DotNotationKeys<T> = {
[K in keyof T & (string | number)]:
T[K] extends Array<infer U>
// If it's an array, include both the index and the $ wildcard
? `${K}` | `${K}.$` | `${K}.${DotNotationKeys<U>}`
// If it's an object, recurse into it
: IsObject<T[K]> extends true
? `${K}` | `${K}.${DotNotationKeys<T[K]>}`
: `${K}` // Base case: Just return the key
}[keyof T & (string | number)]
type Split<S extends string, Delimiter extends string> =
S extends `${infer Head}${Delimiter}${infer Tail}`
? [Head, ...Split<Tail, Delimiter>]
: [S]
type GetTypeByParts<T, Parts extends readonly string[]> =
Parts extends [infer Head, ...infer Tail]
? Head extends keyof T
? GetTypeByParts<T[Head], Extract<Tail, string[]>>
: Head extends '$'
? T extends Array<infer U>
? GetTypeByParts<U, Extract<Tail, string[]>>
: never
: never
: T
type Get<T, Path extends string> =
GetTypeByParts<T, Split<Path, '.'>>
type Flatten<T> = T extends any[] ? T[0] : T
type FlatQuery<T> = {
[P in DotNotationKeys<T>]?: Flatten<Get<T, P>>
}
```
### 🙁 Actual behavior
TypeScript isn't able to assign `{ collectionName: 'asdf' }` when generic type parameters are involved:
```ts
function test<ItemType extends BaseItem<IdType>, IdType = any>() {
const selector: FlatQuery<Snapshot<ItemType, IdType>> = { collectionName: 'asdf' }
// ^^
// Type '{ collectionName: string; }' is not assignable to type 'FlatQuery<Snapshot<ItemType, IdType>>'.
}
```
Without generic types it works seamlessly:
```ts
const selector: FlatQuery<Snapshot<BaseItem>> = { collectionName: 'asdf' }
```
### 🙂 Expected behavior
`{ collectionName: 'asdf' }` should be successfully assignable to `FlatQuery<Snapshot<ItemType, IdType>>` using generic types.
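A rough self-contained sketch of the pattern (names like `Keys`, `Query`, and `Snap` are simplified stand-ins for the utility types above; the failing line is left commented so the sketch compiles on any version):

```typescript
// Hedged minimal sketch of the pattern above (names hypothetical): a mapped
// type whose keys are produced through template-literal machinery over a
// type that mentions an unresolved type parameter.
type Keys<T> = { [K in keyof T & string]: `${K}` }[keyof T & string];
type Query<T> = { [P in Keys<T>]?: string };

interface Snap<I> { collectionName: string; item: I }

// With concrete type arguments Keys<...> reduces eagerly, so this checks:
const ok: Query<Snap<number>> = { collectionName: 'asdf' };

function test<I>(): void {
  // Inside a generic function the key type can stay deferred, which is the
  // shape of the failure reported above (left commented so this compiles):
  // const bad: Query<Snap<I>> = { collectionName: 'asdf' };
}
```

Whether the commented line errors depends on how eagerly a given compiler version reduces the deferred key type, which is what the version range above suggests changed.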
### Additional information about the issue
Related issue: https://github.com/maxnowack/signaldb/pull/1030 | Needs More Info | low | Critical |
2,640,613,928 | PowerToys | Github Style Markdown Rendering Support: Syntax Highlight, Mermaid Graphs, Latex | ### Description of the new feature / enhancement
Would be great if any 1 of the following could be implemented
- Mermaid Graphs
- Latex
- Syntax Highlighting within Markdown

### Scenario when this would be used?
It can be used for previewing presentation markdown files quickly as we scroll
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,640,696,977 | next.js | Incremental Static Regeneration (ISR) Not Functioning as Expected in Next.js 15 | ### Link to the code that reproduces this issue
https://github.com/samstr/isr-demo-nextjs15
### To Reproduce
NextJS 14 Demo: https://github.com/samstr/isr-demo-nextjs14
NextJS 15 Demo: https://github.com/samstr/isr-demo-nextjs15
1. Clone demo repo (isr-demo-nextjs14)
2. npm install
3. npm run build
4. Go to http://localhost:3000/ and click the example links
### Current vs. Expected behavior
The expected behavior is for pages that implement ISR to be cached for the time specified by the `revalidate` const.
In Next.js 14 this works, but in Next.js 15 I can't seem to get the same behavior.
The `npm run build` output treats the route differently.
**NextJS 14**

**NextJS 15**

I understand that Next.js 15 introduced a new 'uncached by default' model for route handlers, fetch requests, etc., but I didn't think that applied to server-rendered pages using `revalidate`.
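For reference, the ISR pattern the demo routes exercise looks roughly like this (a hypothetical minimal app-router page; JSX is replaced by a string so the sketch is self-contained):

```typescript
// Hedged sketch of the documented ISR opt-in: a route segment exporting
// `revalidate` should be built as ISR; the report is that Next.js 15 builds
// it as dynamic (ƒ) instead of static-with-revalidation.
export const revalidate = 60; // seconds between background regenerations

export default async function Page(): Promise<string> {
  // In a real page this would render JSX; the timestamp makes caching visible.
  return `Rendered at ${new Date().toISOString()}`;
}
```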
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
Available memory (MB): 98304
Available CPU cores: 12
Binaries:
Node: 18.18.0
npm: 9.8.1
Yarn: N/A
pnpm: 7.22.0
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: 15.0.2
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed)
### Additional context
_No response_ | bug | low | Major |
2,640,763,465 | flutter | Navigator.canPop vs ModalRoute canPop different behavior | ### Steps to reproduce
Suppose following routes structure using GoRouter
```code
-- Home
--- CreateProfile
```
On the initial launch of the app we navigate to CreateProfile using context.go().
On the Home page we have custom code to display a back button (in some cases users can be pushed to Home from different parts of the app; omitted in this example).
When the user goes back to Home with the AppBar's back button, we decide whether to show the back button with
```dart
final canPop = context.canPop()
if(canPop) ... show button widget
```
However, canPop returns true even when the Home page is the only one on the navigator's stack.
If I do a hot reload on the Home page, **canPop returns false.**
Moreover
```dart
final canPop = Navigator.of(context).canPop();
final goRouterCanPop = context.canPop();
```
both return `true` after popping from CreateProfile.
BUT if we also ask for `final modalRouteCanPop = ModalRoute.of(context)?.canPop;`, that is
```dart
final canPop = Navigator.of(context).canPop();
final goRouterCanPop = context.canPop();
final modalRouteCanPop = ModalRoute.of(context)?.canPop;
```
suddenly all calls return `false` after popping from CreateProfile.
Why does asking for ModalRoute.of(context)?.canPop alter the behavior of the canPop() methods? It seems that ModalRoute is an InheritedWidget and it rebuilds HomePage.
Why does canPop() return true for the Home page when CreateProfile is on the navigator stack in the first place?
GoRouter version: go_router: ^14.1.3
### Expected results
canPop() should return false for the first page on the navigator stack
### Actual results
canPop() returns true for the first page on the navigator stack; after a hot reload it returns false.
### Code sample
<details open><summary>Code sample</summary>
```dart
final canPop = Navigator.of(context).canPop();
final goRouterCanPop = context.canPop();
final modalRouteCanPop = ModalRoute.of(context)?.canPop;
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[x] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4317], locale en-US)
[x] Windows Version (Installed version of Windows is version 10 or higher)
[x] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[x] Chrome - develop for the web
[x] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.3)
[x] Android Studio (version 2021.3)
[x] Android Studio (version 2022.3)
[x] VS Code (version 1.94.2)
[x] Connected device (4 available)
[x] Network resources
No issues found!
```
</details>
| framework,f: routes,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Minor |
2,640,846,788 | flutter | The `event.buttons` of `Listener` `onPointerUp` callback returns 0 regardless of the mouse button pressed | ### Steps to reproduce
Just click on the amber rectangle and see the results. `onPointerDown` shows 1 or 2; `onPointerUp` shows 0.
### Expected results
1 on left mouse pointer up
2 on right mouse pointer up
### Actual results
0 on left mouse pointer up
0 on right mouse pointer up
### Code sample
```dart
import 'package:flutter/material.dart';
void main() => runApp(const MaterialApp(home: Foo()));
class Foo extends StatelessWidget {
const Foo({super.key});
@override
Widget build(BuildContext context) {
String message = '';
return Material(
child: Center(
child: Container(
color: Colors.amber,
width: 200,
height: 200,
child: StatefulBuilder(
builder: (context, setState) {
return Listener(
onPointerDown: (event) {
setState(() {
message = 'down ${event.buttons}';
});
},
onPointerUp: (event) {
setState(() {
message = 'up ${event.buttons}';
});
},
child: Text(message),
);
},
),
),
),
);
}
}
```
</details>
### Flutter Doctor output
Windows 10
Google Chrome 130
Flutter 3.24.4 Stable
| c: new feature,framework,a: mouse,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Major |
2,640,852,144 | PowerToys | Fancy zones, limit activation to screens with the same resolution, limit templates to a max resolution, laptop changing between high res screens and low res screens | ### Description of the new feature / enhancement
Right now, when you use a laptop with a 4K screen, FancyZones is very useful because you can divide the screen as desired. The problem is that when you switch between different resolutions, the template for the 4K screen doesn't fit well on 1080p screens. It would be nice to be able to set a maximum resolution at which a template is applied, so when you change screens another template is applied automatically (or none at all).
### Scenario when this would be used?
Laptops changing often between monitors with different resolutions, home / office.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,640,881,405 | node | Support `bufferSize` option with recursive mode in `fs.opendir` | Related to: #48820 and #55744
After the `recursive` option was added to `readdir` and `opendir`, it was noted that when specifying `bufferSize` alongside `recursive: true`, the result of `opendir` was incorrect. This is fixed in #55744 . However, the _fix_ is a naive solution, and doesn't properly respect the `bufferSize` option. Furthermore, it could result in a blocked event loop. This should be fixed.
I recommend reading the discussion in #48820 for more information. This should only involve changes to the `Dir` class in `lib/internal/fs/dir.js`. | confirmed-bug,fs,good first issue | low | Major |
2,640,928,280 | rust | It is unclear how to reproduce "Testing stage2 error-index (x86_64-unknown-linux-gnu)" | In [this](https://github.com/rust-lang/rust/actions/runs/11721757438/job/32649886716) CI run, I got the following error:
```
Testing stage2 error-index (x86_64-unknown-linux-gnu)
STDOUT:
running 1106 tests
[...]
failures:
/checkout/obj/build/x86_64-unknown-linux-gnu/test/error-index.md - Rust_Compiler_Error_Index::E0094 (line 1995)
/checkout/obj/build/x86_64-unknown-linux-gnu/test/error-index.md - Rust_Compiler_Error_Index::E0211::_::Note__this_error_code_is_no_longer_emitted_by_the_compiler_ (line 3999)
test result: FAILED. 1043 passed; 2 failed; 61 ignored; 0 measured; 0 filtered out; finished in 8.56s
Command CFG_RELEASE_CHANNEL="nightly" RUSTC_BOOTSTRAP="1" RUSTC_STAGE="2" RUSTC_SYSROOT="/checkout/obj/build/x86_64-unknown-linux-gnu/stage2" RUSTDOC_LIBDIR="/checkout/obj/build/x86_64-unknown-linux-gnu/stage2/lib" RUSTDOC_REAL="/checkout/obj/build/x86_64-unknown-linux-gnu/stage2/bin/rustdoc" RUST_TEST_THREADS="16" "/checkout/obj/build/bootstrap/debug/rustdoc" "-Wrustdoc::invalid_codeblock_attributes" "-Dwarnings" "-Znormalize-docs" "-Z" "unstable-options" "--test" "/checkout/obj/build/x86_64-unknown-linux-gnu/test/error-index.md" "--test-args" "" (failure_mode=DelayFail) has failed. Rerun with -v to see more details.
Build completed unsuccessfully in 0:41:18
```
I have no idea how to reproduce this locally. "Testing stage2 error-index" sounds like it should be `./x test error-index`, but that does not work. So right now I am making guesses and pushing that to the PR, leading to a 40min round-trip time for an edit-compile cycle...
In general, when an `x.py` job fails, it'd be really nice if it could, at the end of the error log, give some idea of how this can be reproduced -- for instance, it can also be non-trivial to find out which target the failing test ran on.
Cc @rust-lang/bootstrap | A-diagnostics,T-bootstrap,C-bug,A-contributor-roadblock | low | Critical |
2,640,935,358 | ollama | llama runner process has terminated: error loading model: unable to allocate backend buffer when AMD iGPU vram allocation larger than 8GB | ### What is the issue?
After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage.
```
ollama run llama3.2
Error: llama runner process has terminated: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
```
```
ollama run llama3.2:3b-instruct-q6_K
Error: llama runner process has terminated: error loading model: unable to allocate backend buffer
llama_load_model_from_file: exception loading model
```
```
ollama run smollm2:1.7b-instruct-q6_K
>>> Send a message (/? for help)
```
With a smaller RAM/VRAM split, like 4G, ollama loads models into VRAM fully, or splits them across GPU and CPU.
```
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=9.0.0"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_KEEP_ALIVE=24h"
```
```
rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
Runtime Ext Version: 1.6
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 9 5900HX with Radeon Graphics
Uuid: CPU-XX
Marketing Name: AMD Ryzen 9 5900HX with Radeon Graphics
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 4680
BDFID: 0
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 16285796(0xf88064) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 16285796(0xf88064) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 16285796(0xf88064) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx90c
Uuid: GPU-XX
Marketing Name: AMD Radeon Graphics
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 1024(0x400) KB
Chip ID: 5688(0x1638)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2100
BDFID: 1024
Internal Node ID: 1
Compute Unit: 8
SIMDs per CU: 4
Shader Engines: 1
Shader Arrs. per Eng.: 1
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Memory Properties: APU
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 64(0x40)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 2560(0xa00)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 472
SDMA engine uCode:: 40
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8142896(0x7c4030) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 8142896(0x7c4030) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx90c:xnack+
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
```
journalctl -u ollama.service -n 100 --no-pager
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 1: general.type str = model
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 3: general.finetune str = Instruct
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 4: general.basename str = Llama-3.2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 5: general.size_label str = 3B
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 8: llama.block_count u32 = 28
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 9: llama.context_length u32 = 131072
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 18: general.file_type u32 = 18
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type f32: 58 tensors
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type q6_K: 197 tensors
Nov 07 13:49:56 slimb ollama[1817]: time=2024-11-07T13:49:56.473+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: special tokens cache size = 256
Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: token to piece cache size = 0.7999 MB
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: arch = llama
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab type = BPE
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_vocab = 128256
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_merges = 280147
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab_only = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_train = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd = 3072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_layer = 28
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head = 24
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head_kv = 8
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_rot = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_swa = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_k = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_v = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_gqa = 3
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_eps = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ff = 8192
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert_used = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: causal attn = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: pooling type = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope type = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope scaling = linear
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_base_train = 500000.0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_scale_train = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope_finetuned = unknown
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_conv = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_inner = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_state = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_rank = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model type = 3B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model ftype = Q6_K
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model params = 3.21 B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model size = 2.45 GiB (6.56 BPW)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: general.name = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: LF token = 128 'Ä'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: max token length = 256
Nov 07 13:49:56 slimb ollama[1817]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: found 1 ROCm devices:
Nov 07 13:49:57 slimb ollama[1817]: Device 0: AMD Radeon Graphics, compute capability 9.0, VMM: no
Nov 07 13:49:57 slimb ollama[1817]: llm_load_tensors: ggml ctx size = 0.24 MiB
Nov 07 13:49:57 slimb ollama[1817]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2513.91 MiB on device 0: cudaMalloc failed: out of memory
Nov 07 13:49:57 slimb ollama[1817]: llama_model_load: error loading model: unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: llama_load_model_from_file: exception loading model
Nov 07 13:49:57 slimb ollama[1817]: terminate called after throwing an instance of 'std::runtime_error'
Nov 07 13:49:57 slimb ollama[1817]: what(): unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: time=2024-11-07T13:49:57.678+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Nov 07 13:49:59 slimb ollama[1817]: time=2024-11-07T13:49:59.282+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate backend buffer\nllama_load_model_from_file: exception loading model"
Nov 07 13:49:59 slimb ollama[1817]: [GIN] 2024/11/07 - 13:49:59 | 500 | 3.106339582s | 127.0.0.1 | POST "/api/generate"
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.14 | bug,linux,amd | low | Critical |
2,640,937,139 | create-react-app | npx create-react-app is not working? why? | For some reason, when I tried to start a new project and run the npx command, I ran into this issue. I tried to google the cause, but to no avail. It says there are issues with git, so I tried logging in from the terminal, but it still gives me the same error. Any idea why this would happen? Thanks.
======================================================================
root@tabulaRasa:/mnt/c/Users/tabul/OneDrive/Desktop/DTC web/temp# npx create-react-app your-project
Creating a new React app in C:\Users\tabul\OneDrive\Desktop\DTC web\temp\your-project.
Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template...
added 1312 packages in 13s
259 packages are looking for funding
run `npm fund` for details
Git repo not initialized Error: Command failed: git --version
at genericNodeError (node:internal/errors:983:15)
at wrappedFn (node:internal/errors:537:14)
at checkExecSyncError (node:child_process:888:11)
at execSync (node:child_process:960:15)
at tryGitInit (C:\Users\tabul\OneDrive\Desktop\DTC web\temp\your-project\node_modules\react-scripts\scripts\init.js:46:5)
at module.exports (C:\Users\tabul\OneDrive\Desktop\DTC web\temp\your-project\node_modules\react-scripts\scripts\init.js:276:7)
at [eval]:3:14
at runScriptInThisContext (node:internal/vm:209:10)
at node:internal/process/execution:118:14
at [eval]-wrapper:6:24 {
status: 1,
signal: null,
output: [ null, null, null ],
pid: 9820,
stdout: null,
stderr: null
}
Installing template dependencies using npm...
Unknown command: "install$1$1"
Did you mean this?
npm install # Install a package
To see a list of supported npm commands, run:
npm help
`npm install --no-audit --save @testing-library/jest-dom@^5.14.1 @testing-library/react@^13.0.0 @testing-library/user-event@^13.2.1 web-vitals@^2.1.0` failed | needs triage | low | Critical |
2,640,952,398 | react | [DevTools Bug]: Element selector mode not working with coarse pointer | ### Website or app
https://react.dev/
### Repro steps
https://github.com/user-attachments/assets/159b1f24-dd54-4c8e-b726-1d37a131316e
1. Using the device emulation mode, switch to a mobile device and ensure the pointer is the one used for touch devices (circular, rather than the regular fine pointer).
2. Toggle on the element selector mode.
3. Try to select an element on the page.
4. Observe that no overlay appears when you hover the coarse pointer over the emulated device.
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,640,970,903 | transformers | Padding error when using Universal Assisted Generation with ASR pipeline | ### System Info
This cannot be solved by supplying a padding arg to the pipeline (the argument is not accepted).
@gante
transformers version: https://github.com/huggingface/transformers.git@refs/pull/34504/merge
Ubuntu
Python 3.10.15
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

### Expected behavior
Should complete pipeline execution as normal | Core: Pipeline,bug,Audio | low | Critical |
2,640,994,390 | langchain | dict[str, pydantic_model] annotation not working with structured output | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from __future__ import annotations

from typing import Optional
from unittest import TestCase

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from pydantic import Field, BaseModel

load_dotenv()


class CustomTestExtractorPropertiesCreation(TestCase):
    def test_with_structured_output_dict(self):
        class Joke(BaseModel):
            """Joke to tell user."""

            setup: str = Field(description="The setup of the joke")
            punchline: str = Field(description="The punchline to the joke")
            rating: Optional[int] = Field(
                default=None, description="How funny the joke is, from 1 to 10"
            )

        class MultipleJokes(BaseModel):
            name_joke_pairs: dict[str, Joke] = Field(description="Dictionary of joke_name and joke pairs")

        ChatOpenAI(model="gpt-4o-mini").with_structured_output(
            MultipleJokes).invoke("Generate a number of name-joke pairs")
```
### Error Message and Stack Trace (if applicable)
Error
Traceback (most recent call last):
File "/home/filip/work/llm-bot-framework/llm-bot-framework/tests/integration/conversations/custom_test_extractor_properties_creation.py", line 25, in test_with_structured_output_dict
MultipleJokes).invoke("Generate a number of name-joke pairs")
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 193, in invoke
return self._call_with_config(
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1927, in _call_with_config
context.run(
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 396, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 194, in <lambda>
lambda inner_input: self.parse_result(
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/openai_tools.py", line 298, in parse_result
raise e
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/openai_tools.py", line 293, in parse_result
pydantic_objects.append(name_dict[res["type"]](**res["args"]))
File "/home/filip/work/llm-bot-framework/llm-bot-framework/.venv/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for MultipleJokes
name_joke_pairs
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
### Description
I am trying to create a MultipleJokes Pydantic model (see example).
**Expected result:**
A MultipleJokes object is created and no error is returned.
**Actual result:**
A missing-field exception is raised during Pydantic validation.
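For what it's worth, the traceback's `input_value={}` suggests the output parser received an empty tool-arguments dict from the model and then instantiated the schema from it (`name_dict[res["type"]](**res["args"])` in `openai_tools.py`). The same failure shape can be sketched without LangChain or Pydantic, using a stdlib dataclass as a hypothetical stand-in for the schema:

```python
from dataclasses import dataclass


@dataclass
class MultipleJokesStandIn:
    """Hypothetical stand-in for the Pydantic schema with a required field."""
    name_joke_pairs: dict


empty_tool_args = {}  # what the output parser received, per the traceback

try:
    MultipleJokesStandIn(**empty_tool_args)  # required field is missing
except TypeError as exc:
    print(f"validation failed: {exc}")
```

This points at the tool-call arguments being empty, rather than the dict annotation itself being rejected by Pydantic.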
### System Info
System Information
------------------
> OS: Linux
> OS Version: #48-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 14:04:52 UTC 2024
> Python Version: 3.10.15 (main, Sep 7 2024, 18:35:38) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langsmith: 0.1.134
> langchain_anthropic: 0.2.3
> langchain_google_genai: 2.0.1
> langchain_groq: 0.2.0
> langchain_nvidia: Installed. No version info available.
> langchain_nvidia_ai_endpoints: 0.3.0
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.38
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> anthropic: 0.36.0
> async-timeout: 4.0.3
> defusedxml: 0.7.1
> google-generativeai: 0.8.3
> groq: 0.11.0
> httpx: 0.26.0
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> langgraph-sdk: 0.1.33
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pillow: 10.4.0
> pydantic: 2.9.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,641,017,584 | go | x/crypto/cryptobyte: "x509: invalid key usage" error caused by strict asn1 bit string parsing | ### Go version
go version go1.23.2 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/martin/Library/Caches/go-build'
GOENV='/Users/martin/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE='git.superg.run'
GOMODCACHE='/Users/martin/go/pkg/mod'
GONOPROXY='git.superg.run/*'
GONOSUMDB='git.superg.run/*'
GOOS='darwin'
GOPATH='/Users/martin/go'
GOPRIVATE='git.superg.run/*'
GOPROXY='https://proxy.golang.org'
GOROOT='/opt/local/lib/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/local/lib/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='off'
GOTELEMETRYDIR='/Users/martin/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='/usr/bin/clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/private/tmp/decode/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/sz/z082qh0j6qvd6m981xc1d3n40000gn/T/go-build3277253954=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I received from my bank a x509 certificate in the PEM format to be used to connect to one of their services.
I passed the certificate to the `openssl x509` command-line program to print a textual representation of the certificate: it worked.
Then I tried to parse the certificate in go using `x509.ParseCertificate` as shown in the following example program:
https://go.dev/play/p/jtIyu3C6fL1
### What did you see happen?
The go example program fails with this error:
```
panic: x509: invalid key usage
goroutine 1 [running]:
main.main()
/tmp/sandbox2217993318/prog.go:40 +0xf9
Program exited.
```
### What did you expect to see?
Because the `openssl` command succeeded, I expected the go program to work as well, and to print the parsed certificate.
Here is the output of the `openssl` command; it doesn't contain any warnings or errors.
```
$ openssl x509 -in cert.pem -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
21:60:51:40:c8:89:7f:46:be:ea:e6:24:40:a8:c8:1a
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=FR, OU=SSLGROUPE, 0002 493455042, O=BPCE, CN=www.ebics.bpce.fr
Validity
Not Before: Apr 13 09:43:09 2021 GMT
Not After : Apr 12 09:43:09 2026 GMT
Subject: C=FR, OU=SSLGROUPE, 0002 493455042, O=BPCE, CN=www.ebics.bpce.fr
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:c3:e1:61:28:f7:1d:00:f5:3c:f9:b0:ac:16:50:
15:ab:24:e9:80:9f:19:92:ad:31:36:74:85:41:e0:
c9:74:28:06:3a:fd:b1:08:71:22:b2:39:f2:31:11:
38:a1:a8:a5:e7:23:e3:57:bf:0b:29:54:ad:39:d5:
6f:c5:0d:6e:ad:c5:37:cd:be:b4:12:ea:d7:81:e7:
f0:a3:7c:9e:a8:d6:15:77:c3:c7:0d:d6:d8:eb:53:
1b:f4:ce:5c:95:aa:a2:ef:65:e8:44:2d:e6:e8:1d:
ab:00:45:26:5b:36:28:95:9c:cf:28:be:12:1e:9a:
cd:85:bc:41:f3:9a:68:c3:16:6f:d1:9c:d0:83:cb:
c1:75:36:4a:f0:53:38:ea:8e:c3:d6:80:39:8d:83:
53:e7:8e:6c:36:8f:ff:d7:ee:e5:18:23:46:30:2e:
e1:77:e5:e8:c3:8b:66:49:6e:01:11:34:bf:ca:26:
89:79:2b:6a:95:46:ac:1a:cf:cf:7a:04:64:c1:95:
ad:4a:26:5b:59:61:b1:e5:23:64:cf:c2:24:bf:7d:
33:68:e0:a0:1d:7f:74:cd:ce:03:7e:64:cf:c9:a6:
59:07:be:34:c2:39:60:6e:8b:62:e2:22:b2:ec:5f:
ff:6d:d1:12:1b:c9:8d:7d:3a:95:a0:d3:b4:64:bf:
69:fb
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
1C:EE:D7:19:36:7C:FE:C1:FB:5E:21:41:3B:A6:8F:41:91:A4:39:84
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
4a:67:f4:5e:28:75:0b:e3:31:8f:19:f3:2f:94:54:64:c0:8a:
fd:27:db:f8:76:5e:96:16:51:9e:87:85:bd:b9:b7:d1:f1:43:
e7:5c:1f:c1:1d:6b:32:5c:d8:06:a0:fb:12:08:9a:0c:e1:ab:
e8:24:90:c3:e3:83:40:31:da:21:e2:f2:dc:f7:b1:95:c4:28:
37:24:92:ea:56:ec:b9:07:9d:ca:86:cc:a0:b7:40:c8:97:f1:
29:42:79:b5:d8:23:7b:aa:22:fd:0e:33:a6:58:c3:bf:fc:26:
35:89:e3:d6:96:da:4a:7b:b1:3e:8d:40:b5:f1:2e:8d:67:a6:
87:4e:f6:32:55:a6:13:4b:19:85:be:b3:e5:e0:ac:6a:06:27:
6a:c4:8d:a4:89:68:a0:83:3e:c1:77:56:93:29:b9:4f:a5:97:
0a:cb:f7:87:4e:e6:8c:f4:d6:99:56:e9:8d:b5:2c:a7:48:c5:
bd:07:2c:47:3e:3a:8d:bd:01:13:12:c6:10:ef:b7:ea:e3:c5:
c4:73:69:1d:36:d6:91:2f:bf:5f:e7:f3:ec:e6:48:1e:8d:e7:
13:9c:ef:c1:a3:de:95:22:19:14:ca:c9:9f:ff:de:ab:5b:b6:
c9:eb:c2:6e:63:80:7f:00:58:ae:7f:b6:ae:e0:1a:93:66:63:
92:d4:cd:77
```
I've tracked down the problem by comparing the execution of my go program in the delve debugger with the execution of the openssl in the lldb debugger.
The error in go happens in the asn1 parser, specifically in the `ReadASN1BitString` function, at this very specific line:
https://cs.opensource.google/go/x/crypto/+/71ed71b4faf97caafd1863fed003e9ac311f10ee:cryptobyte/asn1.go;l=561
The result of the expression `bytes[len(bytes)-1]&(1<<paddingBits-1)` is not equal to `0`, so the `ReadASN1BitString` function returns `false` (i.e., an error occurred).
(In my specific example, `paddingBits` is equal to `5` and bytes is equal to `[136]`)
My understanding is that the go authors decided that if the padded part of the bit string is not equal to zero, then the function should fail.
In openssl, the equivalent line is this one:
https://github.com/openssl/openssl/blob/e54526413d5ef7c665e25f552f2f01d4352bd33d/crypto/asn1/a_bitstr.c#L121
We see that the openssl library doesn't behave like the go library: instead of failing if the padded part of the bit string is not equal to zero, it sets the value of the padded part to zero.
(In my specific example, `136 & (0xff << 5) == 128`).
The rust authors seem to have made the same choice as openssl. See the comment that says `The "unused bits" are set to 0.` in this file:
https://docs.rs/der/latest/src/der/asn1/bit_string.rs.html#56
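For illustration, here is a minimal Python sketch of the two behaviors, using the values observed in the debugger above (last byte 136, paddingBits 5):

```python
def strict_check(last_byte: int, padding_bits: int) -> bool:
    """Go cryptobyte behavior: the parse fails if any padding bit is set."""
    return last_byte & ((1 << padding_bits) - 1) == 0


def lenient_mask(last_byte: int, padding_bits: int) -> int:
    """openssl/rust behavior: zero out the padding bits and keep parsing."""
    return last_byte & ((0xFF << padding_bits) & 0xFF)


# Values observed while debugging the failing certificate.
print(strict_check(136, 5))  # False -> Go rejects the bit string
print(lenient_mask(136, 5))  # 128   -> openssl normalizes 136 to 128
```

So the same DER bytes that Go rejects outright are silently normalized by the other implementations.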
If my analysis is correct, I suggest relaxing the ASN.1 bit string parser by allowing non-zero padding and then setting it to zero, as openssl and rust do.
| NeedsInvestigation | low | Critical |
2,641,057,264 | kubernetes | unavailable etcd leading to unexpected pod recreation | ### What happened?
Recently, an incident in one of our clusters led to the deletion of approximately 70% of the pods running on our worker nodes. While investigating what happened, I found a series of etcd logs mentioning apply requests taking too long, followed by a series of delete operations.
I tried to reproduce the scenario by simulating an issue with etcd (i.e., removing it from the kubelet's reach inside the nodes), and this actually led to the recreation of unrelated pods (it's hard to find the sweet spot of pod deletion between a few pods being deleted and the cluster being totally unreachable).
### What did you expect to happen?
The cluster shouldn't start deleting resources; instead it should "lock" itself against new operations, AFAIK.
### How can we reproduce it (as minimally and precisely as possible)?
Simply remove the etcd manifest from the /etc/kubernetes/manifests directory for a relatively short time (~5 minutes).
### Anything else we need to know?
Some etcd pod logs I used to conduct the investigation
2024-10-21T21:30:02.5567686822 stderr F_{"level": "warn", "ts": "2024-10-21T21:30:02.556527Z","caller" : "etcdserver/util.go:170", "msg":"apply request took too long", "took":"109.535008ms", "expected-duration":"100ms","prefix":"","request": "header: <ID:11426074845274859338 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn: <compare: <target:MOD key:\"/registry/ pods/kube-system/cilium-knn9k\" mod_revision:551332032 > success:<request_put:<key:\"/registry/pods/kube-system/cilium-knn9k\" value_size:19461 >> failure:<>>","response":"size:22"}
2024-10-21T21:30:02.657714974Z stderr F {"level": "info","ts": "2024-10-21T21:30:02.657359Z","caller" : "traceutil/trace.go:171", "msg":"trace[802572949] transaction", "detail":"{read_only:false; response_revision:603736321; number_of_response:1; }", "duration":"136.224602ms", "start": "2024-10-21T21:30:02.521053Z", "end":"2024-10-21T21:30:02.657278Z", "steps":["trace[80257 2949] 'process raft request' (duration: 134.140488ms)"], "step_count":1}
2024-10-21T21:30:02.6582696652 stderr F {"level": "info", "ts": "2024-10-21T21:30:02.658004Z","caller" : "traceutil/trace.go:171", "msg":"trace[1759431508] linearizableReadLoop", "detail":"{readStateIndex:614487588; appliedIndex:614487486; }", "duration":"153.792181ms","start": "2024-10-21T21:30:02.5041922", "end": "2024-10-21T21:30:02.657984Z", "steps":["trace[1759431508] ' read index received" (duration: 36.17462ms)","trace [1759431508] 'applied index is now lower than readState. Index' (duration: 117.615976ms)"], "step_count":2}
Start of generalized pod deletion (logs may be incomplete):
"msg":"apply request took too long", "took":"109.535008ms", "expected-duration":"100ms","prefix":"","request": "header: <ID:11426074845274859338 username sion:551332032 equest. :<key:\"/registry/pods/kube-system/cilium-knn9k\" value_size:19461 >> failure: <>>", "response":"size:22"}
1 trace.go:219] Trace [1248109794]: "Get" accept:application/vnd.kubernetes.protobuf, application/json, audit-id:1d03bc70-d072-43b8-97fc-3cbaf308b64e, client: 172.17.3.47, protocol:HTTP/2.0, resource:pods, scope: resource, url:/api/v1/namespaces/kube-system/pods/conntrack-adjuster-s5fkd, user-agent: kubelet/ v1.26.12 (linux/amd64) kubernetes/df63cd7, verb: GET (21-Oct-2024 21:30:02.734) (total time: 587ms): ration: 134.140488ms)"], "step_count":1}
2024-10-21T21:30:03.329117818Z stderr F Trace[1248109794]: ---"About to write a response" 463ms (21:30:03.198) g":"trace[1759431508] linearizableReadLoop", "detail":"{readStateIndex:614487588; appliedIndex:614487486; }", "duration":"153.792181ms","start": "202 2024-10-21T21:30:03.329124819Z stderr F Trace[1248109794]: [587.053209ms] [587.053209ms] END "trace[1759431508] 'applied index is now lower than readState.Index' (duration: 117.615976ms)"], "step_count":2}
2024-10-21T21:30:03.329130754Z stderr F 11021 21:30:03.321965 1 trace.go:219] Trace[1891213869]: "Delete" accept:application/vnd.kubernetes.protobuf, application/json, audit-id:20355a78-8d75-4e87-831d-07ba3156deab, client:172.17.3.47, protocol:HTTP/2.0, resource: pods, scope: resource, url:/api/v1/namespaces/apm/pods/apmXXXX, user-agent: kubelet/v1.26.12 (linux/amd64) kubernetes/df63cd7,verb: DELETE (21-Oct-2024 21:30:01.966) (total time: 1355ms):tion: 132.833274ms)"], "step_count":1}
2024-10-21T21:30:03.329136245Z stderr F Trace [1891213869]: "Object deleted from database" 1024ms (21:30:03.198) msg":"trace[1031673685] transaction","detail":"{read_only: false; number_of_response:1; response_revision:603736317;}", "duration":"157.996656ms","star 2024-10-21T21:30:03.329140862Z stderr F Trace[1891213869]: ---"Writing http response done" 123ms (21:30:03.321) duration: 148.088092ms)"], "step_count":1}
2024-10-21T21:30:03.329146014Z stderr F Trace[1891213869]: [1.355740157s] [1.355740157s] END ,"msg":"trace [1268363042] transaction", "detail":"{read_only:false; response_revision:603736318; number_of_response:1;}","duration":"157.490323ms", "st 2024-10-21T21:30:03.329151087Z stderr F 11021 21:30:03.322410 1 trace.go:219] Trace [1220059992]: "Delete" accept:*/*, audit-id:98ff45dc-7440-412e-b02d-411b772f4835, client: 127.0.0.1, protocol:HTTP/2.0, resource: pods, scope: resource, url:/api/v1/namespaces/XXXXXXXX/pods/XXXXXXXX, user-agent: node-fetch, verb: DELETE (21-Oct-2024 21:30:02.211) (total time: 1110ms): ,"msg":"trace[1611058272] transaction", "d '{read_only:false; response_revision:603736319; number_of_response:1; }", "duration":"156.926033ms" 2024-10-21T21:30:03.329156733Z stderr F Trace[1220059992]:---"Writing http response done" 123ms (21:30:03.322)
Trace [1412908394]: "Delete" accept:*/*, audit-id:a9c4e508-08c0-46bd-9e7a-7a6231e2f813, client: 127.0.0.1, protocol:HTTP/2.0, resource: pods, scope: resource, url:/api/v1/namespaces/XXX-XXX-XXX/pods/XX-XX-XXX-XX-XXXXXXX, user-agent: node-fetch, verb: DELETE (21-Oct-2024 145981] sion:603736322;
2024-10-21T21:30:03.329171633Z stderr F Trace[1412908394]: ---"Writing http response done" 122ms (21:30:03.322) 2024-10-21T21:30:03.329175524Z stderr F Trace[1412908394]: [1.000495185s] [1.000495185s] END se:1; response_revision:603736314; }", "duration":"160.792537ms","
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.28.13
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.2
WARNING: version difference between client (1.28) and server (1.30) exceeds the supported minor version skew of +/-1
```
</details>
### Cloud provider
<details>
ClusterAPI using Cluster API Provider Openstack
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux kmxdjklsd-control-plane-cdjmc 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
Kubeadm
</details>
### Container runtime (CRI) and version (if applicable)
<details>
Containerd v1.6.20 2806fc1057397dbaeefbea0e4e17bddfbd388f38
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
CNI: Cilium v1.15.3
</details>
| kind/support,sig/cluster-lifecycle,kind/feature,sig/architecture,needs-triage,sig/etcd | low | Critical |
2,641,059,894 | PowerToys | Power Toys loses settings after computer is turned off | ### Microsoft PowerToys version
.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
PowerToys starts automatically at start-up of my Windows 10 computer, but often (not always) it reverts to the original four enabled modules instead of the nine I use. I have backed up my settings and can easily restore them every time, but it's an annoyance.
### ✔️ Expected Behavior
I expected that the program would remember the modules I have enabled every time.
### ❌ Actual Behavior
When I open the program after the computer has been shut down, often (not always) it reverts to the original four enabled modules instead of the nine I use. I have backed up my settings and can easily restore them every time.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Minor |
2,641,060,017 | terminal | Multiple levels of tabs | ### Description of the new feature
Instead of a single horizontal row of tabs, please allow multiple rows of tabs as well.
### Proposed technical implementation details
_No response_ | Issue-Feature,Area-UserInterface,Product-Terminal,External-Blocked-WinUI3 | low | Minor |
2,641,063,828 | ui | [bug]: Difficulties Encountered During SHADCN Installation on Next.js 15 | ### Describe the bug
Problem:

### Affected component/components
Installation
### How to reproduce
1. Support `@radix-ui/react-icons` peer dependencies to React 19
2. Documentation regarding next.js 15 installation
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
npx shadcn@latest init
✔ Preflight checks.
✔ Verifying framework. Found Next.js.
✔ Validating Tailwind CSS.
✔ Validating import alias.
√ Which style would you like to use? » Default
√ Which color would you like to use as the base color? » Gray
√ Would you like to use CSS variables for theming? ... no / yes
✔ Writing components.json.
✔ Checking registry.
✔ Updating tailwind.config.js
✔ Updating app\globals.css
Installing dependencies.
It looks like you are using React 19.
Some packages may fail to install due to peer dependency issues in npm (see https://ui.shadcn.com/react-19).
√ How would you like to proceed? » Use --force
⠴ Installing dependencies.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: npm install --force tailwindcss-animate class-variance-authority lucide-react clsx tailwind-merge
Unknown command: "install$1$1"
Did you mean this?
npm install # Install a package
To see a list of supported npm commands, run:
npm help
```
### System Info
```bash
Windows 11, Git Bash, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,641,063,994 | pytorch | `torch.export.export` infers dynamic_shape as constant | ### 🐛 Describe the bug
Install most recent transformers and pytorch.
```
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124
pip install transformers
```
<details>
<summary>Repro</summary>
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.export import Dim


def get_example_inputs(prompt: str, tokenizer: AutoTokenizer) -> torch.tensor:
    """
    These arbitrary example inputs were observed by adding a debugger (`import pdb; pdb.set_trace()`)
    to the line corresponding to this https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1197
    inside local venv site-packages.
    """
    example_inputs = tokenizer(
        prompt,
        return_tensors="pt"
    ).to("cuda")
    seq_len = example_inputs["input_ids"].shape[1]
    example_inputs["position_ids"] = torch.arange(seq_len).unsqueeze(0).to("cuda")
    example_inputs["past_key_values"] = None
    example_inputs["inputs_embeds"] = None
    example_inputs["use_cache"] = False
    example_inputs["output_attentions"] = False
    example_inputs["output_hidden_states"] = False
    example_inputs["return_dict"] = True
    example_inputs["cache_position"] = torch.arange(seq_len).to("cuda")
    return example_inputs


@torch.no_grad()
def load_model():
    model_name = "meta-llama/Llama-3.2-3B-Instruct"
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16
    ).to("cuda")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    return model, tokenizer


def aot_compile(path, model, **sample_kwargs):
    """
    torch.export.export + torch._inductor.aoti_compile_and_package the model,
    using given sample inputs.
    """
    seq_len_dim = Dim("seq_len", min=1, max=128)
    dynamic_shapes = {
        "input_ids": {1: seq_len_dim},
        "attention_mask": {1: seq_len_dim},
        "position_ids": {1: seq_len_dim},
        "past_key_values": None,
        "inputs_embeds": None,
        "use_cache": None,
        "output_attentions": None,
        "output_hidden_states": None,
        "return_dict": None,
        "cache_position": {1: Dim("cache_position", min=1, max=128)},
    }
    exported_program = torch.export.export(
        model.model,
        (),
        sample_kwargs,
        dynamic_shapes=dynamic_shapes
    )
    return torch._inductor.aoti_compile_and_package(
        exported_program,
        (),
        sample_kwargs,
        package_path=path,
    )


def aot_load(path):
    return torch._inductor.aoti_load_package(path)


if __name__ == "__main__":
    model, tokenizer = load_model()
    prompt = "What is a compiler?"
    inputs1 = get_example_inputs(prompt, tokenizer)
    compile_path = aot_compile('llama3.pt2', model, **inputs1)
    print(f"AoT compiled path {compile_path}")
```
</details>
<details>
<summary>Error</summary>
```
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Error while creating guard:
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Name: ''
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Source: shape_env
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Create Function: SHAPE_ENV
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Guard Types: None
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Code List: None
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Object Weakref: None
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Guarded Class Weakref: None
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] Traceback (most recent call last):
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/_guards.py", line 298, in create
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] return self.create_fn(builder, self)
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/guards.py", line 1766, in SHAPE_ENV
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] code_parts, verbose_code_parts = output_graph.shape_env.produce_guards_verbose(
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5072, in produce_guards_verbose
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] raise ConstraintViolationError(
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (seq_len)! For more information, run with TORCH_LOGS="+dynamic".
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] - Not all values of seq_len = L['input_ids'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] - Not all values of seq_len = L['attention_mask'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
E1107 12:34:58.861135 2008591 torch/_guards.py:300] [0/0] - Not all values of seq_len = L['position_ids'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] Created at:
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 642, in transform
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] tracer = InstructionTranslator(
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2807, in __init__
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] output=OutputGraph(
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 319, in __init__
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] self.init_ambient_guards()
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 468, in init_ambient_guards
E1107 12:34:58.862866 2008591 torch/_guards.py:302] [0/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
Traceback (most recent call last):
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 660, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1584, in inner
raise constraint_violation_error
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1539, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1423, in __call__
return self._torchdynamo_orig_callable(
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 549, in __call__
return _compile(
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 977, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 708, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/venv/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 840, in _compile_inner
check_fn = CheckFunctionManager(
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/guards.py", line 2183, in __init__
guard.create(builder)
File "/home/venv/lib/python3.9/site-packages/torch/_guards.py", line 298, in create
return self.create_fn(builder, self)
File "/home/venv/lib/python3.9/site-packages/torch/_dynamo/guards.py", line 1766, in SHAPE_ENV
code_parts, verbose_code_parts = output_graph.shape_env.produce_guards_verbose(
File "/home/venv/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5072, in produce_guards_verbose
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (seq_len)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of seq_len = L['input_ids'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
- Not all values of seq_len = L['attention_mask'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
- Not all values of seq_len = L['position_ids'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
Suggested fixes:
seq_len = 6
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workspaces/torch-serve-amd/examples/compile_tests/aot_llama/github_issue_code.py", line 96, in <module>
compile_path = aot_compile('llama3.pt2', model, **inputs1)
File "/workspaces/torch-serve-amd/examples/compile_tests/aot_llama/github_issue_code.py", line 56, in aot_compile
exported_program = torch.export.export(
File "/home/venv/lib/python3.9/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/export/exported_program.py", line 122, in wrapper
return fn(*args, **kwargs)
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 1957, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 1251, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 1279, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/venv/lib/python3.9/site-packages/torch/export/_trace.py", line 677, in _export_to_torch_ir
raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: B904
torch._dynamo.exc.UserError: Constraints violated (seq_len)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of seq_len = L['input_ids'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
- Not all values of seq_len = L['attention_mask'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
- Not all values of seq_len = L['position_ids'].size()[1] in the specified range seq_len <= 128 are valid because seq_len was inferred to be a constant (6).
Suggested fixes:
seq_len = 6
```
</details>
The error comes with a very clear suggestion: `seq_len`, which corresponds to the second dimension of the input, should be a constant. That dimension is the input sentence length, but the sentence lengths fed to the Llama3.2 model vary.
This sounds like a perfect use case for [dynamic_shapes](https://pytorch.org/docs/stable/export.html#expressing-dynamism) in `torch.export.export`, but it does not work here. What is the correct way to make the compiled model accept inputs of varying length?
### Versions
<details>
<summary>collect_env.py output</summary>
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241106+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-121-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-7640X CPU @ 4.00GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 9
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7999.96
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241106+cu124
[conda] Could not collect
```
</details>
cc @ezyang @chauhang @penguinwu @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,641,065,010 | go | cmd/internal/testdir: Test/typeparam/chans.go failures | ```
#!watchflakes
default <- pkg == "cmd/internal/testdir" && test == "Test/typeparam/chans.go"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731927694231348017)):
=== RUN Test/typeparam/chans.go
=== PAUSE Test/typeparam/chans.go
=== CONT Test/typeparam/chans.go
testdir_test.go:147: exit status 2
panic: _Ranger Send should have failed, but timed out
goroutine 1 gp=0x40000021c0 m=4 mp=0x4000080008 [running]:
panic({0xb95e0?, 0xfa808?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:806 +0x154 fp=0x40000a0ea0 sp=0x40000a0df0 pc=0x73914
main.TestRanger()
...
runtime.gcBgMarkWorker(0x4000180000)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1363 +0xdc fp=0x4000189fb0 sp=0x4000189f10 pc=0x257ec
runtime.gcBgMarkStartWorkers.gowrap1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1279 +0x28 fp=0x4000189fd0 sp=0x4000189fb0 pc=0x256d8
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_arm64.s:1260 +0x4 fp=0x4000189fd0 sp=0x4000189fd0 pc=0x79ac4
created by runtime.gcBgMarkStartWorkers in goroutine 1
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1279 +0x140
--- FAIL: Test/typeparam/chans.go (60.52s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,641,143,641 | angular | Signal effect for an input signal gets triggered after writeValue in custom form component (control value accessor) | ### Which @angular/* package(s) are the source of the bug?
core, forms
### Is this a regression?
No
### Description
For a component implementing the ControlValueAccessor interface, writeValue is called **before** an effect for an input signal runs. This is inconsistent with the older approach of using an input setter or ngOnChanges to trigger side effects at component initialisation.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-xhlf3j?file=test.component.ts
### Please provide the exception or error you saw
There is no error, just inconsistency with input setters/ngOnChanges which should be replaced by signals.
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 19.0.0-rc.1
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 19.0.0-rc.1
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.0-rc.1
@angular-devkit/build-angular 19.0.0-rc.1
@angular-devkit/core 19.0.0-rc.1
@angular-devkit/schematics 19.0.0-rc.1
@schematics/angular 19.0.0-rc.1
rxjs 7.8.1
typescript 5.5.4
zone.js 0.15.0
### Anything else?
**Use case for why this matters:**
I am defining an empty form record. When the input is initialised with a list of items:
1. **effect** is triggered to populate the form record with form controls reflecting the items
2. **writeValue** updates the form record value
Please note that I first encountered this bug on version 18.2.10. I then thought it might already be fixed by this [PR](https://github.com/angular/angular/pull/57874), so I tested again with 19.0.0-rc.1.
Angular CLI: 18.2.10
Node: 20.12.2
Package Manager: npm 10.5.0
OS: darwin x64
Angular: 18.2.9
... animations, cdk, common, compiler, compiler-cli, core, forms
... language-service, localize, material, platform-browser
... platform-browser-dynamic, platform-server, router
... service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.10
@angular-devkit/build-angular 18.2.9
@angular-devkit/core 18.2.9
@angular-devkit/schematics 18.2.9
@angular/cli 18.2.10
@schematics/angular 18.2.9
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10 | area: core,area: forms,cross-cutting: signals | low | Critical |
2,641,145,185 | langchain | Unable to stream structured output with ChatBedrock | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Annotated, Optional

from typing_extensions import TypedDict
from langchain_aws import ChatBedrock


class Joke(TypedDict):
    """Joke to tell user."""

    setup: Annotated[str, ..., "The setup of the joke"]
    punchline: Annotated[str, ..., "The punchline of the joke"]
    rating: Annotated[Optional[int], None, "How funny the joke is, from 1 to 10"]


async def llm_stream(topic: str):
    # `config` is the author's settings module (not shown here)
    llm = ChatBedrock(
        model=config.BEDROCK_LLM_MODEL_ID,
        model_kwargs=dict(temperature=0.05, top_k=100, top_p=0.95),
        provider="anthropic",
    )
    structured_llm = llm.with_structured_output(Joke)
    async for chunk in structured_llm.astream(f"Tell me a really long joke story about {topic}"):
        print(chunk)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
It seems something goes wrong when streaming structured output with ChatBedrock. The same code works with ChatOpenAI, but ChatBedrock doesn't. Even when I use the `astream` or `stream` method, it returns the output in a single chunk and behaves like a synchronous invocation.
I also tried switching to `json_schema`, but that doesn't help. If you remove the structured output, streaming works.
I guess for now, as a workaround, I can do some prompt engineering to get JSON output.
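As a rough stdlib-only illustration of that prompt-engineering workaround (this is not a LangChain API, just a sketch): stream raw text chunks, accumulate them, and try to parse the buffer as JSON after each chunk:

```python
import json

def parse_partial_stream(chunks):
    """Yield a parsed object each time the accumulated text becomes valid JSON."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        try:
            yield json.loads(buf)
        except json.JSONDecodeError:
            continue  # buffer is not a complete JSON document yet

# Simulated text chunks, as a model might stream them:
chunks = ['{"setup": "Why did the model stop?', '",', ' "punchline": "It hit a constraint."}']
results = list(parse_partial_stream(chunks))
```

In practice you would feed the raw text chunks from `llm.astream(...)` into something like this instead of using `with_structured_output`.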
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
> Python Version: 3.11.2 (main, Jul 19 2024, 17:09:07) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langsmith: 0.1.136
> langchain_aws: 0.2.2
> langchain_text_splitters: 0.3.0
> langchain_weaviate: 0.0.3
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> boto3: 1.35.36
> httpx: 0.27.0
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.9
> packaging: 24.1
> pydantic: 2.9.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> simsimd: 3.7.7
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> typing-extensions: 4.12.2
> weaviate-client: 4.9.0 | investigate | low | Critical |
2,641,146,711 | PowerToys | Ctrl + Alt + '=(+)' shortcut does not work | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Ctrl + Alt + '=(used together with +)' does not work
Ctrl + Alt + 'any other key' works.
Upon further inspection, it appears that the Ctrl + Alt keys are recognized, but pressing the =(+) key while in this state is not recognized. It seems to occur consistently, not just in specific programs.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,641,152,498 | svelte | Slide transition throws a "Invalid keyframe value for property height" warning since Svelte 5 | ### Describe the bug
With Svelte 4 and earlier, this REPL's custom Accordion with a slide effect worked perfectly. Since Svelte 5, it throws a warning:
> Invalid keyframe value for property height: NaNpx
I like the approach of this accordion using `{#key isOpen}` as it re-runs the transition and also keeps the content in the DOM, only playing with CSS to hide the content when closed.
I didn't make this REPL myself; I adapted it for a similar component in my codebase.
### Reproduction
REPL with Svelte 5 (and the warning): https://svelte.dev/playground/a904286e5a4c497daa9e6f9351b3f3a9?version=5
REPL with Svelte 4 (and no warning): https://svelte.dev/playground/a904286e5a4c497daa9e6f9351b3f3a9?version=4
### Logs
```shell
> Invalid keyframe value for property height: NaNpx
```
### System Info
```shell
System:
OS: macOS 15.1
CPU: (12) arm64 Apple M2 Max
Memory: 258.97 MB / 32.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.18.0 - ~/.local/state/fnm_multishells/32005_1720871078955/bin/node
Yarn: 1.22.19 - ~/.local/state/fnm_multishells/32005_1720871078955/bin/yarn
npm: 10.9.0 - ~/.local/state/fnm_multishells/32005_1720871078955/bin/npm
pnpm: 8.5.0 - ~/.local/state/fnm_multishells/32005_1720871078955/bin/pnpm
bun: 1.1.34 - ~/.bun/bin/bun
Browsers:
Brave Browser: 130.1.71.118
Safari: 18.1
```
### Severity
annoyance | transition/animation | low | Critical |
2,641,168,363 | terminal | AtlasEngine: Incorrectly broken up MapCharacters runs should be joined before text shaping | ### Windows Terminal version
1.21.2911.0
### Windows build number
10.0.19045.0
### Other Software
_No response_
### Steps to reproduce
Print https://unicode.org/Public/emoji/latest/emoji-test.txt under Windows 10.
### Expected Behavior
All of the emojis work.
### Actual Behavior
All emojis with a U+200D joiner are broken. This is because DirectWrite's `MapCharacters()` is kinda bad and breaks them up. We then pass the broken runs to text shaping, which is then unable to see that the emoji consists of multiple parts joined together. This causes them to render incorrectly. We need to join the broken runs first and then pass the result to text shaping. | Area-Rendering,Issue-Bug,Product-Terminal,Priority-3,Area-AtlasEngine | low | Critical |
2,641,181,111 | react-native | Performance Issues with React Native Bridgeless Architecture | ### Description
The **Bridgeless Architecture** in React Native 0.74 and later introduces significant performance issues, particularly for apps with complex logic and high demands for UI responsiveness. Here are the main concerns:
## Key Issues
### 1. Screen Load Times
- **Problem**: Synchronous communication with the native runtime slows down screen loading, particularly when multiple API calls are made.
- **Impact**: Initial loads are bottlenecked, reducing overall app responsiveness.
### 2. UI Responsiveness and Touch Lag
- **Problem**: The switch to synchronous communication impacts touch response, causing a laggy and less fluid user experience.
- **Result**: Reduced interaction quality and increased user frustration.
### 3. Lazy Module Loading
- **Default Behavior**: Bridgeless architecture loads modules lazily by default, contributing to delayed responses in complex pages.
- **Partial Fix**: Disabling lazy loading improves response slightly but doesn’t fully solve the underlying delay issues.
## Comparative Performance with Older Versions
- **Testing Results**:
- **React Native 0.71.8** (asynchronous bridge): Modal with **32 API calls** loads in approximately **3 seconds**.
- **React Native 0.74.x / 0.76.1** (bridgeless): Modal with only **14 API calls** takes over **10 seconds**.
## Attempts to Improve Performance
- **Native Threads & Configuration Adjustments**: Shifting API calls to native threads, and enabling/disabling options like **Hermes** and **New Architecture**, failed to yield significant performance improvements.
## Call for Optimizations
The **React Native community** and core team are urged to provide optimizations or alternative solutions to enhance performance under the bridgeless architecture.
**Feedback Needed**: Community support and solutions for the architectural limitations causing these performance issues.
### Steps to reproduce
1. **Create a page/modal** with 10-30 API calls.
2. **Test in React Native 0.74.x** or later (Bridgeless).
3. **Measure load time** and UI responsiveness.
4. **Compare with 0.71.8** (Async Bridge).
Expected: Slower performance in the bridgeless versions.
### React Native Version
0.74+
### Affected Platforms
Runtime - Android, Runtime - iOS
### Areas
Bridgeless - The New Initialization Flow, Other (please specify)
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (8) arm64 Apple M1
Memory: 206.23 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.14.0
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 10.7.0
path: /usr/local/bin/npm
Watchman: Not Found
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK: Not Found
Android SDK: Not Found
IDEs:
Android Studio: 2021.3 AI-213.7172.25.2113.9123335
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.7.4
path: /Users/santhoshkumarvgds/.rvm/rubies/ruby-2.7.4/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: ^15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: ^0.76.0
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
The issue occurs when multiple API calls are made on initial screen load. The app becomes unresponsive and takes significantly longer to load the screen. Despite attempts to optimize by disabling lazy loading and using native threads, the issue persists. No crash or stack trace is generated, but the app's performance degrades considerably.
```
### Reproducer
Reproducer: Unfortunately, I cannot provide a public repository or Expo Snack at this time. Please let me know if you would like further details or an isolated example of the issue.
### Screenshots and Videos
_No response_ | Needs: Author Feedback,Needs: Repro,Type: New Architecture,Needs: Version Info | high | Critical |
2,641,253,156 | ui | [bug]: Unknown command: "install$1$1" when initializing on existing Next.js 14 project | ### Describe the bug
I have an existing Next.js 14 project I'd like to add shadcn to. But whenever I try to initialize it with `npx shadcn@latest init`, it has this error: `Unknown command: "install$1$1"`. I tried to do the [Manual Installation](https://ui.shadcn.com/docs/installation/manual), but when I try to add a component using the CLI, it still yields the same error.
### Affected component/components
All
### How to reproduce
1. Run the command `npx shadcn@latest init` on an existing project, or `npx shadcn@latest add <component>` after doing a manual install.
2. Follow on-screen instructions. (I selected Default-Gray-Yes)
3. Error will occur after `Installing dependencies`
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
PS C:\Users\hikari\Desktop\Project\myProgram> npx shadcn@latest init
✔ Preflight checks.
✔ Verifying framework. Found Next.js.
✔ Validating Tailwind CSS.
✔ Validating import alias.
√ Which style would you like to use? » Default
√ Which color would you like to use as the base color? » Gray
√ Would you like to use CSS variables for theming? ... no / yes
✔ Writing components.json.
✔ Checking registry.
✔ Updating tailwind.config.js
✔ Updating src\app\globals.css
⠧ Installing dependencies.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: npm install tailwindcss-animate class-variance-authority lucide-react clsx tailwind-merge
Unknown command: "install$1$1"
Did you mean this?
npm install # Install a package
To see a list of supported npm commands, run:
npm help
```
### System Info
```bash
Windows 11 64-bit
Node version: v22.6.0
npm version: 10.9.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,641,257,213 | pytorch | Default value for get_num_threads | ### 📚 The doc issue
Hello.
I was taking a look at the documentation about the CPU threading that torch uses internally, referring to this link: https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html
Here, at the "Runtime API" tab, it says for both intra-op and inter-op parallelism, the default number of threads is set to the number of CPU cores.

However, if I call **at::get_num_threads** in libtorch and **get_num_threads** in PyTorch, they show different results. The scripts are below.


The Python usage above reports 20 threads, but libtorch reports 40. My machine has 20 CPU cores, and I suspect libtorch somehow reported the total number of hardware threads, since my machine has 2 hyperthreads per core.
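As a quick stdlib sanity check of that hypothesis (this only illustrates the logical-vs-physical distinction; it does not show what torch itself uses as its default): `os.cpu_count()` returns logical CPUs, i.e. hardware threads, which would be 40 on a 20-core machine with 2-way hyperthreading:

```python
import os

logical = os.cpu_count()  # logical CPUs: physical cores x hyperthreads per core

# On Linux, count distinct (physical id, core id) pairs to get physical cores.
physical = None
try:
    with open("/proc/cpuinfo") as f:
        cores = set()
        phys = None
        for line in f:
            if line.startswith("physical id"):
                phys = line.split(":")[1].strip()
            elif line.startswith("core id"):
                cores.add((phys, line.split(":")[1].strip()))
        physical = len(cores) or None
except OSError:
    pass  # /proc/cpuinfo is Linux-only

print(f"logical={logical} physical={physical}")
```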
Below is my setup:
- python 3.10.14
- pytorch 2.3.1.+cu121
- libtorch cxx11ABI version
- g++ 10.5.0
Thanks
### Suggest a potential alternative/fix
Update related part of the document
cc @albanD | triaged,module: multithreading | low | Minor |
2,641,340,058 | rust | Add a regression test for #132587 | #132587 partially reverted #129346 because that caused a test failure in cfg_eval, but the revert PR didn't come with a test because it involved proc-macros and was hard to minimize. We should still add a regression test.
@rustbot label +E-needs-test
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"yegeunyang"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-parser,E-needs-test,T-compiler,E-medium,C-bug,A-proc-macros | low | Critical |
2,641,344,432 | flutter | DateTimePicker date buttons fail touch target size accessibility checks | ### Steps to reproduce
Internal bug b/377876657
1. Write some UI that invokes `showDatePicker`
2. Use an accessibility checker on the resulting UI, such as
```dart
await expectLater(
tester,
meetsGuideline(androidTapTargetGuideline),
);
```
### Expected results
All user-interactable elements should have a minimum touch target size of at least 48dp x 48dp, as described at https://support.google.com/accessibility/android/answer/7101858.
### Actual results
The size of the individual day buttons is less than 48x48
### Code sample
<details open><summary>Code sample</summary>
App code:
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData.light(useMaterial3: true),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text(widget.title)),
body: Center(child: Text('Body')),
floatingActionButton: FloatingActionButton(
onPressed: () {
showDatePicker(
context: context,
initialDate: DateTime.now(),
firstDate: DateTime(2024),
lastDate: DateTime(2026),
);
},
tooltip: 'Increment',
),
);
}
}
```
Test code:
```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
// Import for file above
import 'package:hello_flutter.app/main.dart';
void main() {
testWidgets('test', (WidgetTester tester) async {
await tester.pumpWidget(MyApp());
final button = find.byType(FloatingActionButton);
await tester.tap(button);
await tester.pumpAndSettle();
expect(find.byType(DatePickerDialog), findsOneWidget);
await expectLater(
tester,
meetsGuideline(androidTapTargetGuideline),
);
});
}
```
</details>
### Screenshots or Video
Screenshots / Video demonstration: N/A
### Logs
Logs: N/A
### Flutter Doctor output
<details open><summary>Doctor output</summary>
N/A, Google internal client
</details>
| framework,f: material design,a: accessibility,f: date/time picker,has reproducible steps,P2,team-accessibility,triaged-accessibility,found in release: 3.27 | low | Critical |
2,641,352,090 | vscode | Copilot Inline chat hunks don't disappear when rejected | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.95.1
- OS Version: Linux
Steps to Reproduce:
1. Open the inline chat (ctrl +I) and sent a prompt that generates edits in multiple hunks
2. Discard one of the hunks
Instead of the hunk's edit being undone when it is discarded, the hunk remains visible and decorated as a diff; see screenshot:

| bug,inline-chat | low | Critical |
2,641,353,412 | excalidraw | Vertical text with upright direction | It seems all of Excalidraw's text is horizontal.
Is it possible to support changing the text orientation to vertical + upright?
For example:

| enhancement,Text rendering,text wrapping | low | Minor |
2,641,369,709 | vscode | Debt: Cleanup Tree Find (`FindController`, `FindFilter` and toggles) | `AsyncFindController` has a weird dependency on `FindController`. Some functionality should be moved down to `AbstractFindController` so that `AsyncFindController` does not need to extend `FindController`.
Also need to look into toggles. Maybe the toggles should be given from the start to the filter and then the filter should listen on event from it...
related [#232964](https://github.com/microsoft/vscode/pull/232964) | debt,tree-widget | low | Minor |
2,641,438,376 | godot | Attenuation model of AudioStreamPlayer3D breaks when using AnimationPlayer with web build | ### Tested versions
- Reproducible v4.3.stable.official [77dcf97d8]
### System information
Windows 10 - v4.3.stable.official [77dcf97d8]
### Issue description
The attenuation model emits sound globally, ignoring the max_distance setting, but only in the web build
### Steps to reproduce
1) Create an AudioStreamPlayer3D and an AnimationPlayer
2) Add an Audio Playback track to the animation and play the animation with the AnimationPlayer
3) Configure a moving camera to check whether max_distance and the attenuation model work properly
4) Build for web; the sound is emitted globally for some reason.
Compare the web build to a debug run and you will see that it is a web-build-only issue
### Minimal reproduction project (MRP)
[TestAudioStream3D.zip](https://github.com/user-attachments/files/17664232/TestAudioStream3D.zip)
| bug,platform:web,topic:audio | low | Critical |
2,641,467,589 | rust | Nightly `rustc-dev` can't be used in cross-compilation | ## Bug
I've made a minimal reproducible repo: https://github.com/tombh/nightly_cross_minimal_repro
Pasting `main.rs`:
```rs
#![feature(rustc_private)]
extern crate rustc_ast;
extern crate rustc_driver;
extern crate rustc_macros;
fn main() {
println!("{}", rustc_ast::node_id::CRATE_NODE_ID)
}
```
The build commands are here: https://github.com/tombh/nightly_cross_minimal_repro/blob/main/.github/workflows/ci.yml
Pasting the most relevant part:
```sh
rustup toolchain install nightly-2024-09-30
rustup default nightly-2024-09-30
rustup target add aarch64-unknown-linux-gnu
rustup component add rust-src rustc-dev-aarch64-unknown-linux-gnu llvm-tools
...
cargo -v build --locked --release --target aarch64-unknown-linux-gnu
```
And here is the failing build: https://github.com/tombh/nightly_cross_minimal_repro/actions/runs/11463471346/job/31897390824#step:5:9
Also, just out of interest, here is a failing build with a more recent nightly (`nightly-2024-10-21`); it fails differently: it can't find `derive_where`: https://github.com/tombh/nightly_cross_minimal_repro/actions/runs/11463505291/job/31897508604#step:5:9
## Relevant Issues
- The bug is being discussed in a `rustup` issue: https://github.com/rust-lang/rustup/issues/3255. That is where it was suggested that this is rather a `rust` or `cargo` issue.
- #62447
- #70838 | A-cross,T-infra,C-bug | low | Critical |
2,641,481,697 | pytorch | Find ninja from local environment in `torch.utils.cpp_extension.load` | ### 🚀 The feature, motivation and pitch
I'm writing a script that uses `torch.utils.cpp_extension.load`, so ninja is needed to compile the code. I have already added ninja as a dependency, and that seems to work well. However, if I use [pipx](https://pipx.pypa.io/) to install my script, then my script, torch, and ninja are all installed into a local environment whose only entrypoint is the script. `torch.utils.cpp_extension.load` then reports that ninja was not found, since ninja does not exist on `$PATH`. In fact, it can easily be invoked via `[sys.executable, "-m", "ninja"]`, but I cannot modify the check or pass an argument to it, so currently I have to install ninja globally.
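A minimal sketch of the fallback I have in mind — the helper name is hypothetical, not PyTorch's actual implementation — trying `ninja` on `$PATH` first and falling back to invoking the module through the current interpreter:

```python
import subprocess
import sys

def find_ninja_command():
    """Return an argv prefix that runs ninja, preferring the PATH binary
    and falling back to `python -m ninja` (as installed in a pipx venv).
    Returns None if neither works. Hypothetical helper, not PyTorch code."""
    for cmd in (["ninja"], [sys.executable, "-m", "ninja"]):
        try:
            # A successful `--version` call means this invocation is usable.
            subprocess.run(cmd + ["--version"], check=True,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return cmd
        except (OSError, subprocess.CalledProcessError):
            continue
    return None
```

With something like this, the extension builder could prepend the returned argv instead of hard-coding the bare `ninja` executable name.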
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @zou3519 @xmfan | module: cpp-extensions,triaged | low | Minor |
2,641,490,156 | excalidraw | GIF PNG ANIMATED | I would like Excalidraw to allow you to add animated GIFs or PNGs, so that the animation runs in a loop (as determined in the file).
Please also pay attention to transparent backgrounds, as I have noticed that GIFs with a transparent background are imported with a black background. | enhancement | low | Minor |
2,641,517,557 | PowerToys | Windows 11 24H2 Powertoys icon not in system tray | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
1. Restart Windows 11 24H2
2. Wait for system to completely start
3. Look in system tray to ensure that all startup apps have started
### ✔️ Expected Behavior
After system restart, I expect Powertoys and Awake to appear in my system tray
### ❌ Actual Behavior
Neither Awake, nor Powertoys appear in the system tray
[PowerToys.zip](https://github.com/user-attachments/files/17666341/PowerToys.zip)
[PowerToysReport_2024-11-07-11-10-03.zip](https://github.com/user-attachments/files/17665874/PowerToysReport_2024-11-07-11-10-03.zip)
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | medium | Major |
2,641,519,474 | flutter | `BottomNavigationBarItem.tooltip` is not displayed when set using Cupertino components | ### Steps to reproduce
Run the code and hover over the bottom destinations.
### Expected results
A tooltip should be shown when hovering over a destination.
### Actual results
The tooltip isn't shown when hovering over the destinations.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return CupertinoApp(
title: 'Cupertino Tab Bar with Tooltips',
theme: CupertinoThemeData(
primaryColor: CupertinoColors.systemBlue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _selectedIndex = 0;
@override
Widget build(BuildContext context) {
return CupertinoTabScaffold(
tabBar: CupertinoTabBar(
items: [
BottomNavigationBarItem(
icon: Icon(CupertinoIcons.home),
label: 'Home',
tooltip: "I am not displayed",
),
BottomNavigationBarItem(
icon: Icon(CupertinoIcons.search),
label: 'Search',
tooltip: "I am not displayed",
),
BottomNavigationBarItem(
icon: Icon(CupertinoIcons.settings),
label: 'Settings',
tooltip: "I am not displayed",
),
],
currentIndex: _selectedIndex,
onTap: (index) {
setState(() {
_selectedIndex = index;
});
},
),
tabBuilder: (context, index) {
return CupertinoTabView(
builder: (context) {
return CupertinoPageScaffold(
navigationBar: CupertinoNavigationBar(
middle: Text('Tab ${index + 1}'),
),
child: Center(
child: Text(
'Selected Tab: ${index + 1}',
style: CupertinoTheme.of(context)
.textTheme
.textStyle
.copyWith(fontSize: 20),
),
),
);
},
);
},
);
}
}
```
</details>
### Flutter Doctor output
<details close><summary>Doctor output</summary>
Running on dartpad: Based on Dart SDK 3.5.4 and Flutter SDK 3.24.4
</details>
| framework,f: cupertino,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Major |
2,641,552,397 | pytorch | ghstack-mergebility-check often gives false positives | ### 🐛 Describe the bug
I rarely use stacks, but in a few that I've created recently I've noticed that the mergeability check often (one out of ten runs) reports some failure while the stack is perfectly mergeable, for example https://github.com/pytorch/pytorch/actions/runs/11715232896/job/32631261947
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,641,554,003 | electron | navigator.serial.requestPort() takes 25000 ms into the start of the application | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.1.0
### What operating system(s) are you using?
Ubuntu
### Operating System Version
Ubuntu 24.04
### What arch are you using?
x64
### Last Known Working Electron version
29.4.6
### Expected Behavior
I would expect `navigator.serial.requestPort(...)` to return almost immediately.
### Actual Behavior
`navigator.serial.requestPort(...)` does not return until exactly 25000 ms after the start of the application.
### Testcase Gist URL
https://github.com/iblue/electron-serial-bug
### Additional Information
`navigator.serial.requestPort(...)` does not return until exactly 25000 ms after the start of the application. If you run the PoC, wait 20 seconds, and then press the button, it takes 5 seconds.
That is, unless I connect a Bluetooth USB adapter to my machine; then it is instantaneous. The call was fast until Electron 29.4.6 and the slowdown appeared in 30.0.0; it occurs in all versions up to the current one. I suspect it has something to do with this commit: https://github.com/electron/electron/pull/41734.
In order to reproduce the issue, disconnect all Bluetooth adapters from your PC and restart. If a Bluetooth adapter has been connected during the currently running OS session, `navigator.serial.requestPort(...)` returns instantly. I did not test it on Windows, but I suspect you need Ubuntu 24.04 to reproduce the issue.
Code to reproduce can be found here, it is basically the [Web Serial Example](https://github.com/electron/electron/tree/v33.1.0/docs/fiddles/features/web-serial). Probably you need to insert a vendor ID and Product ID into `renderer.js`: https://github.com/iblue/electron-serial-bug
I hope, someone can reproduce this. | platform/linux,bug :beetle:,status/reviewed,web-platform-apis,33-x-y | low | Critical |
2,641,563,827 | pytorch | TargetDeterminator skips op_info tests for operator directly changed there | ### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/pull/139959 / https://github.com/pytorch/pytorch/pull/139959/commits/f2eebf62fec2d13f5f1339bd06f7856b6d629b0b, which reports a clear signal, even though `python3 test/test_ops.py -v -k test_dtypes_nn_functional_mse_loss_cpu` should have been run and failed with `The following dtypes worked in forward but are not listed by the OpInfo: {torch.bfloat16}.` That test was, for some reason, run on the next PR in the stack; see https://github.com/pytorch/pytorch/actions/runs/11715233003/job/32631720183
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,641,606,920 | rust | Remove support for `extern "rust-intrinsic"` blocks | We currently have two ways to declare symbols that are invoked as intrinsics. The old way:
```rust
extern "rust-intrinsic" {
fn unreachable() -> !;
}
```
The new way, which supports giving a "fallback body" that will be used for backends that do not have the intrinsic implemented:
```rust
#[rustc_intrinsic]
#[rustc_intrinsic_must_be_overridden]
unsafe fn unreachable() -> ! { unreachable!() }
```
The goal of this issue is to remove support for the old style, and consistently use the new style.
1. Port the remaining `extern "rust-intrinsic"` intrinsics in `library` to the new style, updating them using the pattern described above. This can be a PR on its own.
2. Port the uses of `extern "rust-intrinsic"` in `src/tools/miri` and `tests/ui/simd` to the new style. In fact, these can use the even newer style (which can't be used in `library` yet because of bootstrapping):
```rust
#[rustc_intrinsic]
unsafe fn unreachable();
```
3. Remove support for `extern "rust-intrinsic"` blocks from the compiler -- in particular, remove [this](https://github.com/rust-lang/rust/blob/eca17022ef267f9ed87ba7d22755a1bfe0082e80/compiler/rustc_abi/src/extern_abi/mod.rs#L203-L206). AFAIK these are also the only extern blocks that support generics, so there might be more things that can be cleaned up here. (@compiler-errors or @oli-obk might know more about that.) A bunch of tests will still need updating; you can grep for `rust-intrinsic` to find them all. They should all be ported to the new style, similar to the PR in step 2.<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"BLANKatGITHUB"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | E-easy,C-cleanup,T-compiler,T-libs | medium | Critical |
2,641,615,197 | ollama | llama3.2-vision projector_info vision encoder absence | ### What is the issue?
How can I definitively identify a model as vision-compatible without relying on keywords like "vision," "llava," or "-v" in its name? I used to rely on the `projector_info.has_vision_encoder` parameter from the API request `POST http://localhost:11434/api/show` (with a correct body), but it is absent for llama3.2-vision.
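A minimal sketch of the check I'd like to keep using, operating on the parsed JSON body from `POST /api/show` (the exact response shapes below are my assumption for illustration):

```python
def has_vision_encoder(show_response: dict) -> bool:
    """Given the parsed JSON body returned by POST /api/show, report
    whether the model advertises a vision encoder. Returns False when
    projector_info is missing, as currently happens for llama3.2-vision."""
    return bool(show_response.get("projector_info", {}).get("has_vision_encoder", False))

# A model whose response includes projector_info is detected:
print(has_vision_encoder({"projector_info": {"has_vision_encoder": True}}))  # True
# llama3.2-vision's response omits projector_info, so the check fails:
print(has_vision_encoder({"details": {"family": "mllama"}}))                 # False
```

This is why a stable, name-independent capability flag in the response would help.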
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0 | feature request,api | low | Minor |
2,641,636,354 | flutter | ImagePicker Android release crash : FileUtils.java - Volume external_primary not found | ### Steps to reproduce
Here is the crash log from my Crashlytics dashboard :
<img width="493" alt="Capture d’écran 2024-11-07 à 17 53 33" src="https://github.com/user-attachments/assets/cb9f4e82-80db-4bba-a811-097b889da1d9">
```
Fatal Exception: java.lang.IllegalArgumentException: Volume external_primary not found
at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:172)
at android.database.DatabaseUtils.readExceptionWithFileNotFoundExceptionFromParcel(DatabaseUtils.java:153)
at android.content.ContentProviderProxy.openTypedAssetFile(ContentProviderNative.java:781)
at android.content.ContentResolver.openTypedAssetFileDescriptor(ContentResolver.java:2014)
at android.content.ContentResolver.openAssetFileDescriptor(ContentResolver.java:1813)
at android.content.ContentResolver.openInputStream(ContentResolver.java:1490)
at io.flutter.plugins.imagepicker.FileUtils.getPathFromUri(FileUtils.java:55)
at io.flutter.plugins.imagepicker.ImagePickerDelegate.getPathsFromIntent(ImagePickerDelegate.java:673)
at io.flutter.plugins.imagepicker.ImagePickerDelegate.handleChooseImageResult(ImagePickerDelegate.java:684)
at io.flutter.plugins.imagepicker.ImagePickerDelegate.lambda$onActivityResult$0(ImagePickerDelegate.java:615)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:923)
```
Unfortunately I can't reproduce it; it happened in my live Android app and I have the error log in Crashlytics.
I can provide the Android configuration where the crash occurred:
<img width="1048" alt="Capture d’écran 2024-11-07 à 17 52 09" src="https://github.com/user-attachments/assets/01b02d0e-c35e-498c-97d9-2c9d5904edc2">
### Expected results
The app should not crash.
### Actual results
The app is crashing.
### Logs
<details open><summary>Logs</summary>
```console
Fatal Exception: java.lang.IllegalArgumentException: Volume external_primary not found
at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:172)
at android.database.DatabaseUtils.readExceptionWithFileNotFoundExceptionFromParcel(DatabaseUtils.java:153)
at android.content.ContentProviderProxy.openTypedAssetFile(ContentProviderNative.java:781)
at android.content.ContentResolver.openTypedAssetFileDescriptor(ContentResolver.java:2014)
at android.content.ContentResolver.openAssetFileDescriptor(ContentResolver.java:1813)
at android.content.ContentResolver.openInputStream(ContentResolver.java:1490)
at io.flutter.plugins.imagepicker.FileUtils.getPathFromUri(FileUtils.java:55)
at io.flutter.plugins.imagepicker.ImagePickerDelegate.getPathsFromIntent(ImagePickerDelegate.java:673)
at io.flutter.plugins.imagepicker.ImagePickerDelegate.handleChooseImageResult(ImagePickerDelegate.java:684)
at io.flutter.plugins.imagepicker.ImagePickerDelegate.lambda$onActivityResult$0(ImagePickerDelegate.java:615)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:923)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.4, on macOS 15.1 24B83 darwin-x64, locale
fr-FR)
• Flutter version 3.24.4 on channel stable at
/Users/foxtom/Desktop/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (2 weeks ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/foxtom/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915915-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915915-b509.11)
[✓] VS Code (version 1.95.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (4 available)
• Now You See Me (mobile) • 00008020-001204401E78002E • ios • iOS
18.1 22B83
• macOS (desktop) • macos • darwin-x64 •
macOS 15.1 24B83 darwin-x64
• Chrome (web) • chrome • web-javascript •
Google Chrome 130.0.6723.117
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,platform-android,p: image_picker,package,a: production,P2,needs repro info,team-android,triaged-android | low | Critical |
2,641,643,733 | go | cmd/go: clarify documentation for DefaultGODEBUG | The documentation for DefaultGODEBUG at https://go.dev/doc/godebug says "Only differences from the base Go toolchain defaults are reported.", but that's not the case: consider that x/tools/cmd/eg sets `gotypesalias=1` in a `//go:debug` directive. Yet:
- `GOTOOLCHAIN=go1.22.0 go list -f {{.DefaultGODEBUG}} ./cmd/eg` returns nothing
- `GOTOOLCHAIN=go1.23.1 go list -f {{.DefaultGODEBUG}} ./cmd/eg` reports `gotypesalias=1`, even though that's the default
I will send a fix to ./cmd/go/internal/load/godebug.go.
CC @rsc @timothy-king @adonovan | Documentation,NeedsInvestigation,GoCommand | low | Critical |
2,641,646,175 | stable-diffusion-webui | [Bug]: DMD2_SDXL_4step LoRA significantly increases generation time with SDXL model | ### Checklist
- [x] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
### Issue Description
When using DMD2_SDXL_4step LoRA with SDXL 1.0 base model, the image generation time increases significantly from ~10 seconds to several minutes.
### Environment
- WebUI: 82a973c (2024-07-27)
- Base Model: stable-diffusion-xl-base-1.0 (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main sd_xl_base_1.0.safetensors)
- LoRA: DMD2_SDXL_4step (https://huggingface.co/tianweiy/DMD2 Lora file: dmd2_sdxl_4step_lora.safetensors)
- GPU: GeForce RTX 4090
- Launch Arguments: --theme dark --xformers --no-half-vae --api --autolaunch --skip-python-version-check
### Steps to Reproduce
1. Load SDXL 1.0 base model
2. Generate an image without LoRA (takes ~10 seconds)
3. Add DMD2_SDXL_4step LoRA
4. Generate the same image (now takes several minutes)
### Expected Behavior
DMD2_SDXL_4step LoRA should accelerate the generation process.
### Actual Behavior
Generation time increases significantly when using the LoRA.


### Steps to reproduce the problem
### Steps to Reproduce
1. Load SDXL 1.0 base model
2. Generate an image without LoRA (takes ~10 seconds)
3. Add DMD2_SDXL_4step LoRA
4. Generate the same image (now takes several minutes)
### What should have happened?
### Expected Behavior
DMD2_SDXL_4step LoRA should accelerate the generation process.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo

### Console logs
```Shell
SD-WebUI Launcher Diagnostic File
Date: 2024-11-08 00:56:26
Launcher Version: 2.8.10.387
Data File Version: 2024-10-27 12:33
SD-WebUI Version: 82a973c04367123ae98bd9abdf80d9eda9b910e2 (2024-07-27 20:49:39)
Working Directory: C:\Users\vipuser\Desktop\sd-webui-aki\sd-webui-aki-v4.9.1
------------------------
System Information:
OS: Microsoft Windows NT 10.0.19044.0
CPU: 12 cores
Memory Size: 29839 MB
Page File Size: 12104 MB
NVIDIA Management Library:
NVIDIA Driver Version: 528.49
NVIDIA Management Library Version: 12.528.49
CUDA Driver:
Version: 12000
Devices:
00000000:00:08.0 0: NVIDIA GeForce RTX 4090 [89] 23 GB
NvApi:
Version: 52849 r528_37
HIP Driver:
Not Available
DirectML Driver:
Devices:
9860 0: NVIDIA GeForce RTX 4090 23 GB
9860 1: NVIDIA GeForce RTX 4090 23 GB
140 2: Microsoft Basic Render Driver 0 GB
9860 3: NVIDIA GeForce RTX 4090 23 GB
Intel Level Zero Driver:
Not Available
------------------------
Environment Variables:
FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer
TEMP=C:\Users\vipuser\AppData\Local\Temp
USERPROFILE=C:\Users\vipuser
TMP=C:\Users\vipuser\AppData\Local\Temp
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
SESSIONNAME=RDP-Tcp#2
PUBLIC=C:\Users\Public
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_LEVEL=6
CLIENTNAME=EASTBLUEE499
NUMBER_OF_PROCESSORS=12
LOCALAPPDATA=C:\Users\vipuser\AppData\Local
DriverData=C:\Windows\System32\Drivers\DriverData
SystemDrive=C:
USERDOMAIN_ROAMINGPROFILE=galaxy-GUZaFCY
ALLUSERSPROFILE=C:\ProgramData
ComSpec=C:\Windows\system32\cmd.exe
CommonProgramFiles=C:\Program Files\Common Files
FPS_BROWSER_USER_PROFILE_STRING=Default
ProgramFiles=C:\Program Files
Path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Git\cmd;C:\ProgramData\Anaconda3;C:\ProgramData\Anaconda3\Lib\site-packages\win32;C:\ProgramData\Anaconda3\Lib\site-packages\pywin32_system32;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files\dotnet\;C:\Users\vipuser\Desktop\ffmpeg-2024-10-10-git-0f5592cfc7-full_build\bin;C:\Users\vipuser\AppData\Local\Programs\Python\Python38\Scripts\;C:\Users\vipuser\AppData\Local\Programs\Python\Python38\;C:\Users\vipuser\AppData\Local\Microsoft\WindowsApps;;C:\Users\vipuser\AppData\Local\Programs\Microsoft VS Code\bin
OS=Windows_NT
APPDATA=C:\Users\vipuser\AppData\Roaming
ProgramFiles(x86)=C:\Program Files (x86)
HOMEDRIVE=C:
PROCESSOR_REVISION=8f08
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
SystemRoot=C:\Windows
HOMEPATH=\Users\vipuser
COMPUTERNAME=galaxy-GUZaFCY
ProgramData=C:\ProgramData
windir=C:\Windows
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 143 Stepping 8, GenuineIntel
CommonProgramW6432=C:\Program Files\Common Files
ProgramW6432=C:\Program Files
PROCESSOR_ARCHITECTURE=AMD64
USERNAME=vipuser
USERDOMAIN=galaxy-GUZaFCY
LOGONSERVER=\\galaxy-GUZaFCY
------------------------
Log:
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --theme dark --xformers --no-half-vae --api --autolaunch --skip-python-version-check
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.9
.
0, num models: 10
ControlNet preprocessor location: C:\Users\vipuser\Desktop\sd-webui-aki\sd-webui-aki-v4.9.1\extensions\sd-webui-controlnet\annotator\downloads
2024-11-08 00:26:40,184 - ControlNet - [0;32mINFO[0m - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [31e35c80fc] from C:\Users\vipuser\Desktop\sd-webui-aki\sd-webui-aki-v4.9.1\models\Stable-diffusion\sdxl1.0\sd_xl_base_1.0.safetensors
[LyCORIS]-[0;33mWARNING[0m: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
Creating model from config: C:\Users\vipuser\Desktop\sd-webui-aki\sd-webui-aki-v4.9.1\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
WARNING:py.warnings:C:\Users\vipuser\Desktop\sd-webui-aki\sd-webui-aki-v4.9.1\python\lib\site-packages\torch\nn\functional.py:5504: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
Model loaded in 12.1s (load weights from disk: 0.4s, load config: 0.4s, create model: 2.0s, apply weights to model: 8.3s, apply half(): 0.1s, move model to device: 0.3s, calculate empty prompt: 0.4s).
2024-11-08 00:26:55,575 - ControlNet - [0;32mINFO[0m - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
[92mIIB Database file has been successfully backed up to the backup folder.[0m
Startup time: 64.6s (prepare environment: 18.5s, import torch: 10.5s, import gradio: 3.2s, setup paths: 1.6s, initialize shared: 0.6s, other imports: 1.7s, list SD models: 0.1s, load scripts: 8.0s, create ui: 14.5s, gradio launch: 3.3s, add APIs: 2.0s, app_started_callback: 0.4s).
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 1159653272, "all_seeds": [1159653272], "subseed": 3150971306, "all_subseeds": [3150971306], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "DPM++ 2M", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {"Schedule type": "Karras"}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 20, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 1159653272, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108002938", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,", "all_prompts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 1372934445, "all_seeds": [1372934445], "subseed": 2811219341, "all_subseeds": [2811219341], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 7, "steps": 6, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {"Schedule type": "Karras"}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,\nSteps: 6, Sampler: LCM, Schedule type: Karras, CFG scale: 7, Seed: 1372934445, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003312", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,", "all_prompts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 464245114, "all_seeds": [464245114], "subseed": 1711710803, "all_subseeds": [1711710803], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 7, "steps": 6, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,\nSteps: 6, Sampler: LCM, Schedule type: Automatic, CFG scale: 7, Seed: 464245114, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003419", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,", "all_prompts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 1608476397, "all_seeds": [1608476397], "subseed": 3075197645, "all_subseeds": [3075197645], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 1, "steps": 6, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,\nSteps: 6, Sampler: LCM, Schedule type: Automatic, CFG scale: 1, Seed: 1608476397, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003508", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,", "all_prompts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 3853543390, "all_seeds": [3853543390], "subseed": 813517798, "all_subseeds": [813517798], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 1, "steps": 6, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,<lora:dmd2_sdxl_4step_lora:1>,\nSteps: 6, Sampler: LCM, Schedule type: Automatic, CFG scale: 1, Seed: 3853543390, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003556", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 2808798004, "all_seeds": [2808798004], "subseed": 1746927857, "all_subseeds": [1746927857], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 1, "steps": 6, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 6, Sampler: LCM, Schedule type: Automatic, CFG scale: 1, Seed: 2808798004, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003709", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 3580553760, "all_seeds": [3580553760], "subseed": 3937475132, "all_subseeds": [3937475132], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 1, "steps": 6, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 6, Sampler: LCM, Schedule type: Automatic, CFG scale: 1, Seed: 3580553760, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003723", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 3002655950, "all_seeds": [3002655950], "subseed": 4033832068, "all_subseeds": [4033832068], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "LCM", "cfg_scale": 1, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 20, Sampler: LCM, Schedule type: Automatic, CFG scale: 1, Seed: 3002655950, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003734", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 2250936571, "all_seeds": [2250936571], "subseed": 3628021471, "all_subseeds": [3628021471], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "DPM++ 2M", "cfg_scale": 1, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {"Schedule type": "Karras"}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 20, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 1, Seed: 2250936571, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003755", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 2441762049, "all_seeds": [2441762049], "subseed": 1188052010, "all_subseeds": [1188052010], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "DPM++ 2M", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {"Schedule type": "Karras"}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 20, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 2441762049, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003814", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
{"prompt": "the doctor is treating the patient,", "all_prompts": ["the doctor is treating the patient,"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 2947229270, "all_seeds": [2947229270], "subseed": 135279198, "all_subseeds": [135279198], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "DPM++ 2M", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "sd_xl_base_1.0", "sd_model_hash": "31e35c80fc", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {"Schedule type": "Karras"}, "index_of_first_image": 0, "infotexts": ["the doctor is treating the patient,\nSteps: 20, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 2947229270, Size: 512x512, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Clip skip: 2, Version: v1.10.1"], "styles": [], "job_timestamp": "20241108003824", "clip_skip": 2, "is_using_inpainting_conditioning": false, "version": "v1.10.1"}
```
### Additional information
_No response_ | bug-report | low | Critical |
2,641,749,025 | kubernetes | NUMA-aware memory manager and Topology Manager policy of "restricted" results in TopologyAffinityError when it shouldn't | ### What happened?
Running K8s 1.29.2, with the kubelet NUMA-aware memory manager policy set to "Static" and the Topology Manager policy set to "restricted".
1. The /var/lib/kubelet/memory_manager_state file shows:
```
[sysadmin@controller-0 pods(keystone_admin)]$ sudo cat /var/lib/kubelet/memory_manager_state
Password:
{"policyName":"Static","machineState":{"0":{"numberOfAssignments":0,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99759243264,"systemReserved":10538188800,"allocatable":78483636224,"reserved":0,"free":78483636224}},"cells":[0]},"1":{"numberOfAssignments":0,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99283886080,"systemReserved":1101004800,"allocatable":87445463040,"reserved":0,"free":87445463040}},"cells":[1]}},"checksum":2552710201}[sysadmin@controller-0 pods(keystone_admin)]$
```
2. Created a pod kube-mgrr-1, then checked pod status and memory_manager_state file content
```
[sysadmin@controller-0 pods(keystone_admin)]$ cat test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: kube-mgrr-1
labels:
app.starlingx.io/component: 'application'
spec:
containers:
- image: "gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4"
imagePullPolicy: IfNotPresent
name: kube-mgrr-1
resources:
limits:
memory: 256Mi
cpu: '200m'
requests:
memory: 256Mi
cpu: '200m'
nodeSelector:
kubernetes.io/hostname: controller-0
[sysadmin@controller-0 pods(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-mgrr-1 1/1 Running 0 7s
[sysadmin@controller-0 pods(keystone_admin)]$ sudo cat /var/lib/kubelet/memory_manager_state
{"policyName":"Static","machineState":{"0":{"numberOfAssignments":1,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99759243264,"systemReserved":10538188800,"allocatable":78483636224,"reserved":268435456,"free":78215200768}},"cells":[0]},"1":{"numberOfAssignments":0,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99283886080,"systemReserved":1101004800,"allocatable":87445463040,"reserved":0,"free":87445463040}},"cells":[1]}},"entries":{"84accdfa-8263-4672-9bfe-2ffd93427666":{"kube-mgrr-1":[{"numaAffinity":[0],"type":"memory","size":268435456}]}},"checksum":989528933}[sysadmin@controller-0 pods(keystone_admin)]$
```
3. Created a new pod which exceeds memory of NUMA node 0 and check if the pod is pinned to both NUMA nodes.
```
[sysadmin@controller-0 pods(keystone_admin)]$ cat test-pod_2.yaml
apiVersion: v1
kind: Pod
metadata:
name: kube-mgrr-2
labels:
app.starlingx.io/component: 'application'
spec:
containers:
- image: "gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4"
imagePullPolicy: IfNotPresent
name: kube-mgrr-2
resources:
limits:
memory: 85Gi
cpu: '200m'
requests:
memory: 85Gi
cpu: '200m'
nodeSelector:
kubernetes.io/hostname: controller-0
[sysadmin@controller-0 pods(keystone_admin)]$ kubectl apply -f test-pod_2.yaml
pod/kube-mgrr-2 created
[sysadmin@controller-0 pods(keystone_admin)]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kube-mgrr-1   1/1     Running   0          109s
kube-mgrr-2   1/1     Running   0          3s
[sysadmin@controller-0 pods(keystone_admin)]$ sudo cat /var/lib/kubelet/memory_manager_state
{"policyName":"Static","machineState":{"0":{"numberOfAssignments":2,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99759243264,"systemReserved":10538188800,"allocatable":78483636224,"reserved":78483636224,"free":0}},"cells":[0,1]},"1":{"numberOfAssignments":1,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99283886080,"systemReserved":1101004800,"allocatable":87445463040,"reserved":13052854272,"free":74392608768}},"cells":[0,1]}},"entries":{"84accdfa-8263-4672-9bfe-2ffd93427666":{"kube-mgrr-1":[{"numaAffinity":[0],"type":"memory","size":268435456}]},"8e7262b9-c5ae-4b2e-a0b6-5dba8c4b6d64":{"kube-mgrr-2":[{"numaAffinity":[0,1],"type":"memory","size":91268055040}]}},"checksum":3041831511}
```
4. Delete kube-mgrr-1 pod
```
[sysadmin@controller-0 pods(keystone_admin)]$ kubectl delete -f test-pod.yaml
pod "kube-mgrr-1" deleted
[sysadmin@controller-0 pods(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-mgrr-2 1/1 Running 0 49s
```
5. Add kube-mgrr-1 back. Pod status is TopologyAffinityError.
```
[sysadmin@controller-0 pods(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-mgrr-1 0/1 TopologyAffinityError 0 21m
kube-mgrr-2 1/1 Running 0 22m
[sysadmin@controller-0 pods(keystone_admin)]$
[sysadmin@controller-0 pods(keystone_admin)]$ sudo cat /var/lib/kubelet/memory_manager_state
Password:
{"policyName":"Static","machineState":{"0":{"numberOfAssignments":1,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99759243264,"systemReserved":10538188800,"allocatable":78483636224,"reserved":78215200768,"free":268435456}},"cells":[0,1]},"1":{"numberOfAssignments":1,"memoryMap":{"hugepages-1Gi":{"total":10737418240,"systemReserved":0,"allocatable":10737418240,"reserved":0,"free":10737418240},"hugepages-2Mi":{"total":0,"systemReserved":0,"allocatable":0,"reserved":0,"free":0},"memory":{"total":99283886080,"systemReserved":1101004800,"allocatable":87445463040,"reserved":13052854272,"free":74392608768}},"cells":[0,1]}},"entries":{"5131157f-ca8a-4824-8fda-a57a4ceab719":{"kube-mgrr-2":[{"numaAffinity":[0,1],"type":"memory","size":91268055040}]}},"checksum":3733538882}
```
Note: if the topology manager policy is set to "best-effort" the request to respawn the Pod succeeds.
### What did you expect to happen?
When re-creating the pod that was killed, it should be able to start up again since there is enough memory available on a single node to satisfy the memory and CPU requests.
### How can we reproduce it (as minimally and precisely as possible)?
Set the memory manager policy to "Static" and topology manager policy to "restricted". On a two-NUMA-node worker node create a smallish Pod in the Guaranteed QoS class that easily fits on one NUMA node worth of memory. Create a second Pod in the Guaranteed QoS class with a big enough memory request that it cannot fit on one NUMA node. Delete the first Pod. After the memory freed up by the Pod deletion shows as available, attempt to recreate the first Pod.
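For reference, a sketch of the KubeletConfiguration fields behind this setup (the `reservedMemory` values are illustrative — they must match the node's actual per-NUMA reservations, and are required when `memoryManagerPolicy` is `Static`):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
topologyManagerPolicy: restricted
reservedMemory:
- numaNode: 0
  limits:
    memory: 10Gi    # illustrative; must cover system/kube reservations on node 0
- numaNode: 1
  limits:
    memory: 1100Mi  # illustrative; must cover reservations on node 1
```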
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
sysadmin@controller-0:~$ kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
```
</details>
### Cloud provider
<details>
Wind River Cloud Platform
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
sysadmin@controller-0:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
Linux controller-0 6.6.0-1-rt-amd64 #1 SMP PREEMPT_RT StarlingX Debian 6.6.52-1.stx.95 (2024-10-31) x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| priority/backlog,sig/node,kind/feature,triage/accepted | medium | Critical |
2,641,825,917 | bitcoin | MuSig2 Tracking Issue | Current PR to review: #31242
- [x] libsecp:
- [x] libsecp module: https://github.com/bitcoin-core/secp256k1/pull/1479
- [x] libsecp named structs: https://github.com/bitcoin-core/secp256k1/pull/1628
- [x] libsecp subtree update: #31216
- [ ] Refactors
- [x] #31242
- [ ] #31243
- [ ] Non-default sighashes in PSBT: #31622
- [ ] Descriptor: #31244
- [ ] PSBT: #31247
- [ ] Signing: #29675 | Wallet | low | Minor |
2,641,841,002 | deno | opening a file with `append: true` then trying to lock it results in a permission denied error on windows | Repro (windows only):
```ts
const file = await Deno.open("./foo.lock", { create: true, append: true });
await file.lock();
```
Results in:
```
Uncaught PermissionDenied: Access is denied. (os error 5)
at async FsFile.lock(ext:deno_fs/30_fs.js:691:5)
```
| bug,windows | low | Critical |
2,641,905,477 | vscode | Git output containing .../ file links should be linkified | Currently only the full paths get validated links:

| feature-request,terminal-links | low | Minor |
2,641,915,210 | PowerToys | Image resizer feature request | ### Description of the new feature / enhancement
Add a couple of options to the utility.
Add a preset option to set a file size cap.
Add an option, in the preset or the file name parameter, for saving the resized files into a compressed archive.
### Scenario when this would be used?
The primary reason I need to resize photos is to transfer or share them via email, so having the collection of photos automatically added to an archive saves me from needing to select them and add them to an archive myself.
If a website has a size limitation, a file size cap would ensure each file is as large as possible without exceeding that constraint.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,641,923,270 | kubernetes | [KEP-4412] Create new prow job to validate the SA token for credential providers | I'm not sure if we can do it from the test suite, but we might be able to do it through the test infra job [--node-test-args](https://github.com/kubernetes/test-infra/blob/73928b23e0b0aa0b1c8afd1c313986eb4a6f3c23/config/jobs/kubernetes/sig-node/sig-node-presubmit.yaml#L3171) and removing
https://github.com/kubernetes/kubernetes/blob/6cc357046615410434b2fa99c5a229e5db2eb6d9/test/e2e_node/remote/node_e2e.go#L100
We could just add an additional job to validate plugins in SA mode if https://github.com/kubernetes/kubernetes/pull/128372#discussion_r1831791889 is not an option. With a separate job, we configure only the SA plugin mode by default, and we can set the feature gate to true only for that job.
_Originally posted by @aramase in https://github.com/kubernetes/kubernetes/pull/128372#discussion_r1832149131_
| sig/node,sig/auth,triage/accepted | low | Minor |
2,641,966,668 | flutter | ios: flutter run --release: flutter crash on first app from tutorial; log asks that an issue be filed. | ### Steps to reproduce
On macOS Sonoma 14.0, deploy target: iphone 8 running iOS 16.6
~~~
jaten@jbook ~/flutter101namer/namer $ flutter --version
flutter --version
Flutter 3.24.4 • channel stable • https://github.com/flutter/flutter.git
Framework • revision 603104015d (2 weeks ago) • 2024-10-24 08:01:25 -0700
Engine • revision db49896cf2
Tools • Dart 3.5.4 • DevTools 2.37.3
jaten@jbook ~/flutter101namer/namer $
~~~
I followed the steps in the intro tutorial, https://codelabs.developers.google.com/codelabs/flutter-codelab-first#0 to create a basic flutter app.
Try to run with: flutter run --release
~~~
jaten@jbook ~/flutter101namer/namer $ flutter run --release
flutter run --release
Launching lib/main.dart on dartag8 in release mode...
Automatically signing iOS for device deployment using specified development team in Xcode project:
KW7XZXMAF3
Running Xcode build...
Xcode build done. 70.1s
Failed to build iOS app
Could not build the precompiled application for the device.
Error (Xcode): Oops; flutter has exited unexpectedly: "Null check operator used on a null value".
Error running application on dartag8.
jaten@jbook ~/flutter101namer/namer $
~~~
### Expected results
Expected: the app to deploy and run on the iPhone, not to crash and create a log requesting a bug be filed.
### Actual results
contents of log
~/flutter101namer/namer $ cat flutter_03.log |pbcopy
~~~
Flutter crash report.
Please report a bug at https://github.com/flutter/flutter/issues.
## command
flutter assemble --no-version-check --output=/Users/jaten/flutter101namer/namer/build/ios/Release/ -dTargetPlatform=ios -dTargetFile=/Users/jaten/flutter101namer/namer/lib/main.dart -dBuildMode=release -dIosArchs=arm64 -dSdkRoot=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -dSplitDebugInfo= -dTreeShakeIcons=false -dTrackWidgetCreation=true -dDartObfuscation=false -dAction=build -dFrontendServerStarterPath= --ExtraGenSnapshotOptions= --DartDefines= --ExtraFrontEndOptions= -dCodesignIdentity=1163D43329F84306237F9C753A2DF5EAA7818ABC release_ios_bundle_flutter_assets
## exception
_TypeError: Null check operator used on a null value
```
#0 NativeAssets._buildIOS (package:flutter_tools/src/build_system/targets/native_assets.dart:291:68)
#1 NativeAssets.build (package:flutter_tools/src/build_system/targets/native_assets.dart:85:32)
<asynchronous suspension>
#2 _BuildInstance._invokeInternal (package:flutter_tools/src/build_system/build_system.dart:875:9)
<asynchronous suspension>
#3 Future.wait.<anonymous closure> (dart:async/future.dart:534:21)
<asynchronous suspension>
#4 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:813:32)
<asynchronous suspension>
#5 Future.wait.<anonymous closure> (dart:async/future.dart:534:21)
<asynchronous suspension>
#6 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:813:32)
<asynchronous suspension>
#7 Future.wait.<anonymous closure> (dart:async/future.dart:534:21)
<asynchronous suspension>
#8 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:813:32)
<asynchronous suspension>
#9 Future.wait.<anonymous closure> (dart:async/future.dart:534:21)
<asynchronous suspension>
#10 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:813:32)
<asynchronous suspension>
#11 FlutterBuildSystem.build (package:flutter_tools/src/build_system/build_system.dart:635:16)
<asynchronous suspension>
#12 AssembleCommand.runCommand (package:flutter_tools/src/commands/assemble.dart:328:32)
<asynchronous suspension>
#13 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1408:27)
<asynchronous suspension>
#14 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#15 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#16 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:420:9)
<asynchronous suspension>
#17 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#18 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#19 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9)
<asynchronous suspension>
#20 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#21 main (package:flutter_tools/executable.dart:93:3)
<asynchronous suspension>
```
## flutter doctor
```
[✓] Flutter (Channel stable, 3.24.4, on macOS 14.0 23A344 darwin-x64, locale en-US)
• Flutter version 3.24.4 on channel stable at /Users/jaten/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (2 weeks ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/jaten/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915915-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A240d
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915915-b509.11)
[✓] VS Code (version unknown)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
✗ Unable to determine VS Code version.
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 14.0 23A344 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.92
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
~~~
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:english_words/english_words.dart';
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return ChangeNotifierProvider(
create: (context) => MyAppState(),
child: MaterialApp(
title: 'Namer App',
theme: ThemeData(
useMaterial3: true,
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepOrange),
),
home: MyHomePage(),
),
);
}
}
class MyAppState extends ChangeNotifier {
var current = WordPair.random();
// ↓ Add this.
void getNext() {
current = WordPair.random();
notifyListeners();
}
}
class MyHomePage extends StatelessWidget {
@override
Widget build(BuildContext context) {
var appState = context.watch<MyAppState>();
return Scaffold(
body: Column(
children: [
Text('A random GREAT idea:'),
Text('The second line from the top:'),
Text('The third line from the top:'),
Text(appState.current.asLowerCase),
// ↓ Add this.
ElevatedButton(
onPressed: () {
appState.getNext(); // ← This instead of print().
print('button pressed!');
},
child: Text('Next'),
),
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
see above.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
see above in log.
```
</details>
| tool,team-tool | low | Critical |
2,641,972,433 | godot | `TranslationServer.translate()` non-functional in editor | ### Tested versions
v4.3.stable.nixpkgs [77dcf97d8]
### System information
nixos-unstable
### Issue description
`TranslationServer` seems not to work in the editor at all, which I partly understand, but it not working even when explicitly called from a script is a hindrance to me right now.
### Steps to reproduce
1. check the console after opening main scene
2. run the game
EXPECTED:
prints "Es ist Mittwoch, meine Kerle!" both times.
OBSERVED:
only prints the translated message in game.
### Minimal reproduction project (MRP)
[i18-editor-bug.zip](https://github.com/user-attachments/files/17668297/i18-editor-bug.zip) | enhancement,discussion,topic:editor | low | Critical |
2,641,976,576 | vscode | Support file icons similar to treeview item icon path in terminal icon path | In `TreeViewItem`, we can provide a `resourceUri` and `ThemeIcon.File` for `iconPath` to get file icons for the `TreeViewItem`. This should also work for Terminal Icon.
See discussions in issues:
https://github.com/microsoft/vscode/issues/232439
For QuickPick: https://github.com/microsoft/vscode/issues/59826
| feature-request,terminal-tabs | low | Minor |
2,641,989,259 | next.js | CSS chunk loaded in a script tag (CSS modules) | ### Link to the code that reproduces this issue
https://github.com/maphe/css-module-reprex
### To Reproduce
The issue is visible on the prod build:
1. visit https://css-module-reprex.vercel.app/test
2. the console will show the error: `Uncaught SyntaxError: Unexpected token '.'`
3. the dom will have css loaded as script: `<script src="/_next/static/css/f2515c4387c9bb9d.css" async=""></script>`
To "reproduce" locally:
1. `npx nx build css-module-reprex --prod --skip-nx-cache`
2. look into `.next/static/css`
3. there should be one stylesheet instead of 2
### Current vs. Expected behavior
The CSS being chunked into 2 files makes the app load the second chunk as a script for some reason, so the client component's style is broken. There should only be one stylesheet loaded (or at least the second one shouldn't be inserted as a script).
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.7.0
npm: 10.1.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.7 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: 14.2.16
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
⚠ An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Module Resolution, Webpack
### Which stage(s) are affected? (Select all that apply)
next build (local), Vercel (Deployed)
### Additional context
After investigation, I noticed that the cause of the issue is having the `opengraph-image.tsx` file import from the library that contains the components.
```ts
import { getLauncherColor } from '@css-module-reprex/ui';
```
getting rid of this line in `opengraph-image.tsx` fixes the build. | bug,Webpack,Module Resolution | low | Critical |
2,642,002,168 | ui | [feat]: Semi-open sidebar on hover | ### Feature description
In the Arc browser, the sidebar goes into a semi-open state when you hover near it, which could be a cool feature to build in as an option.
### Affected component/components
Sidebar
### Additional Context
https://github.com/user-attachments/assets/4f9ac46d-853f-4056-875d-28f8878c9811
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,642,013,112 | flutter | Move off of the deprecated actions/upload-artifact | Per https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/, the actions/upload-artifact
https://github.com/actions/upload-artifact
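In most workflows the migration is just bumping the version tag, e.g. (artifact name and path illustrative):

```yaml
# Workflow excerpt — replace any v3 (or older) usage with v4:
- uses: actions/upload-artifact@v4
  with:
    name: build-output   # illustrative artifact name
    path: out/           # illustrative path
```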
## Action Items
I searched the flutter org, and omitted any archived repos:
- [x] Devtools
- @kenzieschmoll
- https://github.com/flutter/devtools/blob/master/.github/workflows/build.yaml#L176
- https://github.com/flutter/devtools/blob/master/.github/workflows/build.yaml#L247
- Both addressed by https://github.com/flutter/devtools/pull/8555
- [x] Engine
- @jtmcdole
- https://github.com/flutter/engine/blob/main/.github/workflows/third_party_scan.yml#L34
- https://github.com/flutter/engine/pull/56439
- [ ] news_toolkit
- https://github.com/flutter/news_toolkit/blob/main/.github/workflows/flutter_news_example.yaml#L76
- [x] platform_tests
- @Piinks
- https://github.com/flutter/platform_tests/blob/6a78997eba85765ef06c925a0ccbd7e2ad6818da/.github/workflows/scorecards-analysis.yml#L46
- https://github.com/flutter/platform_tests/pull/126
- [ ] flaux (not that this repo matters, but we should make sure that we don't have this action post-merge)
- @jtmcdole
- https://github.com/flutter/flaux/blob/26f361d85c6c15e570e754f32b0b85666c8771b7/engine/src/flutter/.github/workflows/third_party_scan.yml#L34 | team-infra,P1,triaged-infra | medium | Minor |
2,642,017,115 | kubernetes | [FG:InPlacePodVerticalScaling] Support removing requests and limits (from Burstable pods) | /kind feature
Allow resource requests and limits to be removed, as long as doing so does not change the pod's QoS class.
Removing a limit should set the associated cgroup to max.
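A hypothetical before/after sketch of what this would allow for a Burstable pod (names and values illustrative):

```yaml
# Before: Burstable — a CPU request plus a CPU limit.
spec:
  containers:
  - name: app
    resources:
      requests: {cpu: 500m}
      limits:   {cpu: "1"}
---
# Desired after an in-place resize: limit removed, the container's
# cpu cgroup limit reset to "max"; QoS class remains Burstable.
spec:
  containers:
  - name: app
    resources:
      requests: {cpu: 500m}
```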
/sig node
/priority important-longterm | sig/node,kind/feature,priority/important-longterm,triage/accepted | low | Minor |
2,642,018,371 | TypeScript | No file ending when adding import | * vscode at commit f0a00378912f27f30509fe8230abcfdf5b58a7c3
* open `src/vs/workbench/contrib/chat/browser/chatEditorOverlay.ts`
* on line 42 write `this._editor.getOption(EditorOption)` and make sure to use auto complete with import for `EditorOption`
* :bug: the import will be without `.js`
excerpt from the logs
```
Info 10477[09:55:25.163] response:
{"seq":0,"type":"response","command":"updateOpen","request_seq":303,"success":true,"body":true}
Info 10478[09:55:25.214] request:
{
"seq": 304,
"type": "request",
"command": "updateOpen",
"arguments": {
"changedFiles": [
{
"fileName": "/Users/jrieken/Code/vscode/src/vs/workbench/contrib/chat/browser/chatEditorOverlay.ts",
"textChanges": [
{
"newText": "import { EditorOption } from '../../../../editor/common/config/editorOptions';\n",
"start": {
"line": 21,
"offset": 1
},
"end": {
"line": 21,
"offset": 1
}
}
]
}
],
"closedFiles": [],
"openFiles": []
}
}
```
[tsserver.log](https://github.com/user-attachments/files/17658932/tsserver.log)
| Needs Investigation | low | Critical |
2,642,026,656 | kubernetes | [KEP-4412] Add a new KAS flag to configure allowed audiences for kubelet to request a PSAT for the purpose of image pulls | https://github.com/kubernetes/kubernetes/pull/128077 has been merged, and starting in v1.32, we are enforcing restrictions on the audiences for which the kubelet can request tokens.
Add a new KAS flag that allows configuring the list of permitted audiences specifically for image pulls.
/sig auth
/kind feature
/assign | kind/feature,sig/auth,triage/accepted | low | Major |
2,642,060,405 | go | go/types: add Var.Kind method and enum | **Background:** A `types.Var` represents a variable, broadly defined: a global, a local (including parameters and named results), or a struct field. Two of these cases can be discriminated thus:
- `v.IsField()` reports whether the var is a struct field.
- `v.Parent() == v.Pkg().Scope()` reports whether the var is a global.
But for the local variables, one is out of luck.
**Proposal:** We propose to add a Kind method and enum type that reports what kind of var this is.
```go
package types
type VarKind uint8
const (
PackageVar VarKind = iota
LocalVar
RecvVar
ParamVar
ResultVar
FieldVar
)
// Kind reports what kind of variable v is.
func (v *Var) Kind() VarKind
// The setter is provided for use by importers.
func (*Var) SetKind(VarKind)
```
The actual implementation would replace the existing `isField bool`, so there is no space cost.
@griesemer @findleyr @timothy-king | Proposal,Proposal-Accepted | medium | Major |
2,642,145,003 | node | Set default keepAlive options when creating HTTP/S agents | ### What is the problem this feature will solve?
Since the proposal in #37184 was accepted, Node's HTTP/S `globalAgent` sets the `keepAlive` option to `true` by default, with a socket timeout of 5 seconds. However, I find this inconsistent with the observed behavior when using the `Agent` constructor, which by default returns an agent with `keepAlive` set to `false`.
### What is the feature you are proposing to solve the problem?
Set `keepAlive` to true when creating a new HTTP/S agent, as well as setting the socket timeout to 5 seconds, to equal the behavior of the `globalAgent`. However, I recognize that this change could have unintended effects for users. Additionally, the impact of setting `keepAlive` to `true` in the `globalAgent` is still under discussion (see #47130). So, please, check my alternatives too.
### What alternatives have you considered?
1. We can keep setting `keepAlive` to `false` by default, but when it is provided, set a default socket timeout if `timeout` is not given. I ran into this problem this week: I was using `keepAlive` without setting a `timeout` value, so idle sockets were never being closed. If the server sends a FIN packet and the client sends a new request before processing that event, an `ECONNRESET` error is thrown (as observed in #47130). I found that setting `timeout` to 5 seconds, as with the `globalAgent`, greatly reduces the number of errors.
2. If, for some reason, it is discouraged to set a default `timeout` value, at least specify in [Node HTTP docs](https://nodejs.org/api/http.html#new-agentoptions) that, if it is not provided by the user, no timeout will be set (and that it could lead to possible errors as described above).
Whichever alternative is chosen, I would be more than willing to submit a PR with a solution. Thank you!
@nodejs/http (I don't know if that still works) | https,feature request | low | Critical |
2,642,178,453 | TypeScript | Error not generated for nested generic array | ### 🔎 Search Terms
error, nested, array, order
### 🕗 Version & Regression Information
- This changed between versions 4.4.4 and 4.5.5 (from playground testing - all later versions exhibit this behaviour).
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.3#code/C4TwDgpgBAKuEEEoF4oDsCuBbARhATgNwBQoks8AQilDgPZ0A2EAhmiWdACIvAtyQYAdzoAeGFAgAPYBDQATAM4VISAD4qIlAHw0A3sShQ6aCAC4oCfPhYhReqMBEXL12+N0BfbSU8d4UDx8AhAwABb4EBDikjJySprqmjr6hsamFlY2dg5OdJluOY4RUQXZHlDelT7EfsTEzMCOEIrAAJZoAObhkRAAotYWQfzwPVHi8Ai6qAZGJuZQANq5zktQesAlC4uO+BjQALqVUAeeRwD055LWdPjKLMrSkADGsvK1JI3NrR3dIgP4Ia8EaCEQTVTTdZpeYWZaOVY7ABmLEYikOlQuVwI+Fu90eUhebw+9UuUAAqmhnnQsFg5E1Ni1oIwOi1HHQoGjoCjGNccXdiKSvrIfl1hHQAAqRIHBUZgkJTVKkuYZJYrfJrACsJwxAqudUFEHpACYWJKFsMQmLwVpIcL2qKRGaSA1Dd97d0ttKQaEttaFTNoSq4XlYetNr1Q1qjp4TmdiaSAMLU2loJp0DD0sJtZTM0xsjlRXm3KCI4vAE1QB6KNqdNAp4Au+kAZhYXstvpCKVQdt+YwghCgpIAymF04x5EX8AAaWgZqDyOistB0BuNt291YW2VieWQ2bpbZqyPauN1IUmtvb61d9cOugD0nY3HpRggKBtREF15tEyV+gAN2gKkaTpCAJ3TVcvmeMIIGeABrCxMFwAgaAAIgeeREVQh8rhRIRbEeG47hnB53yaRRRwwccSSuJMQNTYw5wAGSNAB2fNOSgKlrFgpon3wdIoCYpsjV1clKWTOkhLYysFCgXNWQARgADgAWiNRSOMLblJ2UbjIleV8aKgAB1W44LxWJCTA980Cgf8ABYADpnIcmccDnZcmjYN9GF4FDALuH80EUYggA
### 💻 Code
```ts
type TypeA = number;
type TypeB = boolean;
type DataTypeTwo<T extends TypeA | TypeB> = {
one: Array<{ two: Array<T> }>;
};
type DataTypeThree<T extends TypeA | TypeB> = {
one: Array<{ two: Array<{ three: Array<T> }> }>;
};
let testingThreeErr: DataTypeThree<TypeA> = {
one: [{ two: [ {three: [ true ] } ]}] // errors as expected
};
let testingTwoErr: DataTypeTwo<TypeA> = {
one: [{ two: [ false ] }] // errors as expected
};
// Uncomment these lines to see all errors
// let testingTwoPre: DataTypeTwo<TypeA> = {
// one: [{ two: [ 5 ] }]
// };
// let t2aPre: DataTypeTwo<TypeB> = testingTwoPre;
let testingThree: DataTypeThree<TypeA> = {
one: [{ two: [ {three: [ 5 ] } ]}]
};
// Comment out this line to see error for t2a assignment
let t3a: DataTypeThree<TypeB> = testingThree; // Should error, but does not
let testingTwo: DataTypeTwo<TypeA> = {
one: [{ two: [ 5 ] }]
};
let t2a: DataTypeTwo<TypeB> = testingTwo; // errors only if section above commented out
let check: number = "asdf"; // always errors, as it should
// Comment out L27 to see correct error on L32
// Uncomment L27 and lines 18-21 to see all errors correctly
// Works as expected in v4.4.4, but not any later versions
```
### 🙁 Actual behavior
No errors on lines 27 and 32 (assigning an incompatible type).
### 🙂 Expected behavior
Should error on those lines.
### Additional information about the issue
The code works (i.e. gives correct errors) in version 4.4.4 ([pg](https://www.typescriptlang.org/play/?ts=4.4.4#code/C4TwDgpgBAKuEEEoF4oDsCuBbARhATgNwBQoks8AQilDgPZ0A2EAhmiWdACIvAtyQYAdzoAeGFAgAPYBDQATAM4VISAD4qIlAHw0A3sShQ6aCAC4oCfPhYhReqMBEXL12+N0BfbSU8d4UDx8AhAwABb4EBDikjJySprqmjr6hsamFlY2dg5OdJluOY4RUQXZHlDelT7EfsTEzMCOEIrAAJZoAObhkRAAotYWQfzwPVHi8Ai6qAZGJuZQANq5zktQesAlC4uO+BjQALqVUAeeRwD055LWdPjKLMrSkADGsvK1JI3NrR3dIgP4Ia8EaCEQTVTTdZpeYWZaOVY7ABmLEYikOlQuVwI+Fu90eUhebw+9UuUAAqmhnnQsFg5E1Ni1oIwOi1HHQoGjoCjGNccXdiKSvrIfl1hHQAAqRIHBUZgkJTVKkuYZJYrfJrACsJwxAqudUFEHpACYWJKFsMQmLwVpIcL2qKRGaSA1Dd97d0ttKQaEttaFTNoSq4XlYetNr1Q1qjp4TmdiaSAMLU2loJp0DD0sJtZTM0xsjlRXm3KCI4vAE1QB6KNqdNAp4Au+kAZhYXstvpCKVQdt+YwghCgpIAymF04x5EX8AAaWgZqDyOistB0BuNt291YW2VieWQ2bpbZqyPauN1IUmtvb61d9cOugD0nY3HpRggKBtREF15tEyV+gAN2gKkaTpCAJ3TVcvmeMIIGeABrCxMFwAgaAAIgeeREVQh8rhRIRbEeG47hnB53yaRRRwwccSSuJMQNTYw5wAGSNAB2fNOSgKlrFgpon3wdIoCYpsjV1clKWTOkhLYysFCgXNWQARgADgAWiNRSOMLblJ2UbjIleV8aKgAB1W44LxWJCTA980Cgf8ABYADpnIcmccDnZcmjYN9GF4FDALuH80EUYggA)).
I assume this has to do with depth limits, as it works fine for the two-level case (`DataTypeTwo`) - though if the bad assignment comes after the three-level case, then even this error is not shown (i.e. line 27 in the example needs to be commented out for the bad assignment to the `DataTypeTwo` variable to correctly error).
Even more oddly, if a bad assignment to a `DataTypeTwo` variable is made before the bad assignment to a `DataTypeThree`, the `DataTypeThree` assignment begins to error correctly. (Uncomment lines 18-21 in the example).
Possibly related to #56291. | Bug,Help Wanted | low | Critical |
2,642,212,356 | langchain | RunnableLambda Deps not Extracted Properly | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Missing imports reconstructed for the snippet (AgentState, ToolExecutor,
# the `preprocessor` runnable, and `config` are defined or imported
# elsewhere in the original code):
from typing import Optional, Sequence, Union

from langchain_community.chat_models import ChatDatabricks
from langchain_community.tools.databricks import UCFunctionToolkit
from langchain_core.language_models import LanguageModelLike
from langchain_core.runnables import RunnableConfig, RunnableLambda
from langchain_core.tools import BaseTool
from langgraph.graph import END, StateGraph
from langgraph.graph.graph import CompiledGraph
from langgraph.prebuilt import ToolNode
def create_tool_calling_agent(
model: LanguageModelLike,
tools: Union[ToolExecutor, Sequence[BaseTool]],
agent_prompt: Optional[str] = None,
) -> CompiledGraph:
model = model.bind_tools(tools)
model_runnable = preprocessor | model
# Define the function that calls the model
def call_model(
state: AgentState,
config: RunnableConfig,
):
response = model_runnable.invoke(state, config)
return {"messages": [response]}
workflow = StateGraph(AgentState)
workflow.add_node("agent", RunnableLambda(call_model))
workflow.add_node("tools", ToolNode(tools))
workflow.set_entry_point("agent")
    # Routing function (reconstructed; the second positional argument to
    # add_conditional_edges was missing from the snippet):
    def should_continue(state: AgentState):
        last_message = state["messages"][-1]
        return "continue" if getattr(last_message, "tool_calls", None) else "end"

    workflow.add_conditional_edges(
        # First, we define the start node. We use agent.
        # This means these are the edges taken after the agent node is called.
        "agent",
        should_continue,
        # The mapping below will be used to determine which node to go to
        {
            # If tools, then we call the tool node.
            "continue": "tools",
            # END is a special node marking that the graph should finish.
            "end": END,
        },
    )
# We now add a unconditional edge from tools to agent.
workflow.add_edge("tools", "agent")
return workflow.compile()
# Create the llm
llm = ChatDatabricks(endpoint="endpoint_id")
uc_functions = "function_names"
tools = (
UCFunctionToolkit(warehouse_id=config.get("warehouse_id"))
.include(*uc_functions)
.get_tools()
)
agent_with_raw_output = create_tool_calling_agent(llm, tools)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
- RunnableLambda [uses](https://github.com/langchain-ai/langchain/blob/14f182795312f01985344576b5199681683641e1/libs/core/langchain_core/runnables/base.py#L4475) the `get_function_nonlocals` in order to get the `deps` property.
- `get_function_nonlocals` [uses](https://github.com/langchain-ai/langchain/blob/05fd6a16a9802fc6e56d0ec65499ea4db608383d/libs/core/langchain_core/runnables/utils.py#L401) `inspect.getsource(func)` to get the source code of the function in order to extract dependencies
- However, this call sometimes fails with `OSError: could not get source code`, which causes the `deps` of the RunnableLambda to be empty
- In the event that `inspect.getsource(func)` fails, can we still get the dependencies through the closure variables?
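For illustration, here is a small reproduction of the failure mode. The `exec` stand-in is my assumption for whatever makes the source unavailable (e.g. code defined in a notebook cell); the function body and names are hypothetical:

```python
import inspect

# A function compiled from a string has no backing source file,
# so inspect.getsource cannot recover its text.
source = "def call_model(state):\n    return model_runnable.invoke(state)\n"
namespace = {"model_runnable": None}
exec(source, namespace)
call_model = namespace["call_model"]

try:
    inspect.getsource(call_model)
    got_source = True
except OSError as err:
    got_source = False
    print(err)  # raised because no source file backs this function

# The referenced global names are still recoverable without source text:
print(call_model.__code__.co_names)  # includes 'model_runnable'
```

So even when `getsource` fails, `__code__.co_names` (plus `__closure__` / `co_freevars` for true closures) could serve as a fallback for populating `deps`.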
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:36:12 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6020
> Python Version: 3.10.14 (main, May 6 2024, 14:42:37) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.17
> langsmith: 0.1.135
> langchain_databricks: 0.1.0
> langchain_experimental: 0.0.65
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.25
> langchain_text_splitters: 0.2.4
> langgraph: 0.2.37
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> databricks-vectorsearch: 0.40
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> huggingface-hub: 0.25.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> langgraph-sdk: 0.1.33
> mlflow: 2.16.1.dev0
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> scipy: 1.14.1
> sentence-transformers: 3.2.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> tokenizers: 0.20.1
> transformers: 4.45.2
> typing-extensions: 4.12.2
``` | 🤖:bug,Ɑ: core | low | Critical |
2,642,219,365 | PowerToys | Update procedure | ### Description of the new feature / enhancement
Have a direct link to the update process from an update available notification
### Scenario when this would be used?
Whenever a version update is notified.
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response | low | Major |
2,642,249,920 | kubernetes | No overflow validation when using MilliValue() | ### What happened?
There is a function called [`MilliValue()`](https://github.com/kubernetes/kubernetes/blob/b5e64567958aae5c2e5befae000d3186384c151b/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go#L817C1-L822C1) to represent values in milli units and its comment says "this could **overflow** an int64; if that's a concern, call `Value()` first to verify the number is small enough."
```go
// staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go
// MilliValue returns the value of ceil(q * 1000); this could overflow an int64;
// if that's a concern, call Value() first to verify the number is small enough.
func (q *Quantity) MilliValue() int64 {
return q.ScaledValue(Milli)
}
```
But in practice, almost [all calls to this function](https://github.com/search?q=repo%3Akubernetes%2Fkubernetes%20MilliValue&type=code) don't first verify that the number is small enough.
Here are a few unexpected behaviors caused by this overflow.
### 1. Pod with extremely large cpu request is treated as with 0 cpu request
**Trigger**: create a pod using the following yaml file, with the cpu resource request is set as a very large quantity.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod
namespace: default
spec:
containers:
- name: test-container
image: nginx
resources:
limits:
cpu: 16Pi
requests:
cpu: 16Pi
# 16Pi = 2^4 * 2^50 = 2^54; 16Pi * 1000 > 2^63 - 1 = maxInt64, so representing this value in milli units will overflow int64
```
**Consequence:** The scheduler will treat this pod with 0 cpu request, so it can be scheduled to any nodes ignoring the node's cpu resource usage.
**Cause**: [type.go](https://github.com/kubernetes/kubernetes/blob/a28f14089cfa47ef9c57f9f283e1504a68f616d6/pkg/scheduler/framework/types.go#L854C1-L873C2)
The `noderesources` scheduler plugin uses `SetMaxResource()` to pre-calculate the pod resource request.
```go
// SetMaxResource compares with ResourceList and takes max value for each Resource.
func (r *Resource) SetMaxResource(rl v1.ResourceList) {
if r == nil {
return
}
for rName, rQuantity := range rl {
switch rName {
case v1.ResourceMemory:
r.Memory = max(r.Memory, rQuantity.Value())
case v1.ResourceCPU:
-> r.MilliCPU = max(r.MilliCPU, rQuantity.MilliValue())
case v1.ResourceEphemeralStorage:
r.EphemeralStorage = max(r.EphemeralStorage, rQuantity.Value())
default:
if schedutil.IsScalarResourceName(rName) {
r.SetScalar(rName, max(r.ScalarResources[rName], rQuantity.Value()))
}
}
}
}
```
1. In `SetMaxResource()`, calling `.MilliValue()` directly on a large `Quantity` without first validating that it's small enough leads to an overflow, causing `rQuantity.MilliValue()` to return a negative number.
2. `SetMaxResource()` iteratively compares the provided value with the existing one and retains the larger value (likely designed to handle cases where a resource is specified multiple times, retaining the largest one).
In this case, the CPU request is specified only once, and the default request value is `0`.
Therefore, `max(0, negative_number)` returns `0`, so the `noderesources` plugin considers this pod's CPU request to be 0.
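The sign flip is easy to check numerically. Below is a Python sketch emulating Go's wrapping two's-complement `int64` multiplication; the real `ScaledValue` code path is more involved, but the outcome for this input is the same:

```python
INT64_MAX = 2**63 - 1

def go_int64_mul(a, b):
    """Emulate Go's wrapping int64 multiplication."""
    n = (a * b) & ((1 << 64) - 1)
    return n - (1 << 64) if n >= (1 << 63) else n

cpu_request = 16 * 2**50  # 16Pi "cores" from the pod spec
print(cpu_request * 1000 > INT64_MAX)  # True: the product no longer fits in int64

milli = go_int64_mul(cpu_request, 1000)
print(milli)          # -432345564227567616: negative after wraparound
print(max(0, milli))  # 0 -> the scheduler sees a zero CPU request
```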
### 2. Pod with extremely large memory / ephemeral-storage request will trigger an unexpected warning
**Trigger**: create a pod using the following yaml file, with the ephemeral-storage resource request is a very large quantity.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
spec:
containers:
- name: nginx
image: nginx:latest
resources:
requests:
ephemeral-storage: 9Pi
limits:
ephemeral-storage: 9Pi
```
**Consequence:** When creating this pod, even though `9Pi` is obviously an integer, Kubernetes outputs a confusing warning message: `fractional byte value "9Pi" is invalid, must be an integer.`
**Cause**: [warning.go](https://github.com/kubernetes/kubernetes/blob/175a5b9c4690c63b7d41c3c295402269780b3a27/pkg/api/pod/warnings.go#L237C3-L248C4)
```go
if value, ok := c.Resources.Limits[api.ResourceMemory]; ok && value.MilliValue()%int64(1000) != int64(0) {
warnings = append(warnings, fmt.Sprintf("%s: fractional byte value %q is invalid, must be an integer", p.Child("resources", "limits").Key(string(api.ResourceMemory)), value.String()))
}
```
Because the quantity is very large, calling `value.MilliValue()` overflows and returns a value that is not evenly divisible by 1000, which triggers the warning.
### 3. Node with extremely large allocatable cpu becomes unschedulable.
**Trigger**: create a node using the following yaml file (in [KWOK](https://kwok.sigs.k8s.io/)), with the allocatable cpu resource is a very large quantity.
```yaml
apiVersion: v1
kind: Node
metadata:
name: node-1
namespace: default
status:
allocatable:
cpu: 1000Pi
capacity:
cpu: 1000Pi
```
**Consequence:** The node will be created but the scheduler thinks its allocatable cpu resource is negative, making this node unschedulable to any pod.
**Cause:** [types.go](https://github.com/kubernetes/kubernetes/blob/2caf4eddd8fc1ab7236ed608c1b548404dbc6bcf/pkg/scheduler/framework/types.go#L799C1-L820C2)
```go
// NewResource creates a Resource from ResourceList
func NewResource(rl v1.ResourceList) *Resource {
r := &Resource{}
r.Add(rl)
return r
}
// Add adds ResourceList into Resource.
func (r *Resource) Add(rl v1.ResourceList) {
if r == nil {
return
}
for rName, rQuant := range rl {
switch rName {
case v1.ResourceCPU:
-> r.MilliCPU += rQuant.MilliValue()
case v1.ResourceMemory:
r.Memory += rQuant.Value()
case v1.ResourcePods:
r.AllowedPodNumber += int(rQuant.Value())
case v1.ResourceEphemeralStorage:
r.EphemeralStorage += rQuant.Value()
default:
if schedutil.IsScalarResourceName(rName) {
r.AddScalar(rName, rQuant.Value())
}
}
}
}
```
The call to `MilliValue` at the pointed line causes an overflow, making the node's CPU resource negative, and the `noderesources` scheduler plugin will then conclude that this node is ineligible for any pod.
### What did you expect to happen?
Although it's unlikely that we will encounter these extremely large values in the real world, there is no constraint on the range of this value. Therefore, we expect no overflow to happen regardless of the user's input.
1. Since most calls to `MilliValue()` do not first verify that the number is small enough, we think the simplest fix might be to let `MilliValue()` or `ScaledValue()` handle overflow automatically, for example by capping the return value at maxInt64 when overflow occurs.
2. Another possible fix would be to add a range constraint of [MaxMilliValue](https://github.com/kubernetes/kubernetes/blob/847be850000a902bcd82fb4a02bada5d948595a0/staging/src/k8s.io/apimachinery/pkg/api/resource/math.go#L49) for values that will be interpreted in milli units.
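Fix option 1 could be as small as a saturating conversion. A sketch of the intended semantics (Python is used for brevity since its integers don't wrap; the function name is illustrative):

```python
INT64_MAX = 2**63 - 1
INT64_MIN = -(2**63)

def saturating_milli(value):
    """Return value * 1000, capped to the int64 range instead of wrapping."""
    product = value * 1000
    if product > INT64_MAX:
        return INT64_MAX
    if product < INT64_MIN:
        return INT64_MIN
    return product

print(saturating_milli(250))         # 250000: normal values are unchanged
print(saturating_milli(16 * 2**50))  # 9223372036854775807: capped, not negative
```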
### How can we reproduce it (as minimally and precisely as possible)?
Use the yaml file above.
### Anything else we need to know?
/sig scheduling api-machinery
### Kubernetes version
1.31.2
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,sig/api-machinery,triage/accepted | low | Major |
2,642,253,491 | godot | GPU particles don't advance if `Engine.time_scale` was just set from 0 to a higher value in a frame | ### Tested versions
v4.4.dev [36e6207]
### System information
Fedora Linux 41.20241107.0 (Kinoite) on Wayland - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - integrated AMD Radeon 780M (RADV GFX1103_R1) - AMD Ryzen 7 PRO 7840HS w/ Radeon 780M Graphics (16 threads)
### Issue description
This is relevant to the step-by-step functionality introduced with #97257. If `Engine.time_scale` is set from 0 to a higher value, GPU particles won't visually advance before the `RenderingServer` emits the `frame_post_draw` signal, which makes them stay in place.
### Steps to reproduce
Play a scene with an emitting `GPUParticles2D/3D`, pause it inside the "Game" view, and keep clicking the next frame button.
### Minimal reproduction project (MRP)
[next-frame-problem.zip](https://github.com/user-attachments/files/17669689/next-frame-problem.zip) | discussion,topic:rendering | low | Minor |
2,642,296,247 | flutter | [Android][A11y] Figure out a way to enable accessibility for Android | ### Use case
Most non-Flutter-specific Android tools rely on the accessibility API if they want to interact with Flutter apps, e.g. integration tests, app crawlers, or the Play Store.
Currently, Flutter does not enable accessibility by default. We have a trigger that enables accessibility if any tool attempts to query accessibility data on FlutterView, i.e. `FlutterView.createAccessibilityNodeInfo`.
This, however, only enables the accessibility node info on the next frame, which means the first time a tool queries, it gets an empty accessibility tree. It then has to wait for the next frame before it can query the accessibility node info again, and we have no reliable way to signal that frame other than an arbitrary delay.
A tool may bail if it doesn't know about this mechanism; even if it does, the workflow is still inconvenient.
### Proposal
We should have a way to launch an Android Flutter app with accessibility on by default either through integration test or through a prebuilt APK. | team-android | low | Major |
2,642,322,739 | electron | powerMonitor.on("thermal-state-change") and powerMonitor.on("speed-limit-change") are not correctly typed in Electron.d.ts | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.2.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS Sequoia 15.1.0
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
31.x and previous
### Expected Behavior
The `powerMonitor.on("speed-limit-change", ({ limit }: { limit: number }) => { ... })` and `powerMonitor.on("thermal-state-change", ({ state }: { state: string }) => { ... })` listeners should compile successfully.
Previously, in Electron v31, the `powerMonitor` in `electron.d.ts` was typed correctly, with the following type definitions:
```
on(event: 'thermal-state-change', listener: Function): this;
...
on(event: 'speed-limit-change', listener: Function): this;
```
### Actual Behavior
We had code like this:
```
powerMonitor.on("speed-limit-change", ({ limit }: { limit: number }) => {
  console.log("PowerMonitor: Speed Limit Change", limit);
});
...
powerMonitor.on("thermal-state-change", ({ state }: { state: string }) => {
  console.log("PowerMonitor: Thermal State Change", state);
});
```
TypeScript compilation fails with:
```
rc/main/power-monitor.ts:49:19 - error TS2769: No overload matches this call.
The last overload gave the following error.
Argument of type '"speed-limit-change"' is not assignable to parameter of type '"user-did-resign-active"'.
49 powerMonitor.on("speed-limit-change", ({ limit }: { limit: number }) => {
~~~~~~~~~~~~~~~~~~~~
node_modules/electron/electron.d.ts:9936:5
9936 on(event: 'user-did-resign-active', listener: () => void): this;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The last overload is declared here.
src/main/power-monitor.ts:54:19 - error TS2769: No overload matches this call.
The last overload gave the following error.
Argument of type '"thermal-state-change"' is not assignable to parameter of type '"user-did-resign-active"'.
54 powerMonitor.on("thermal-state-change", ({ state }: { state: string }) => {
~~~~~~~~~~~~~~~~~~~~~~
node_modules/electron/electron.d.ts:9936:5
9936 on(event: 'user-did-resign-active', listener: () => void): this;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The last overload is declared here.
```
On further inspection, I found that the `electron.d.ts` was typed as follows:
```
on(event: 'thermal-state-change', listener: () => void): this;
...
on(event: 'speed-limit-change', listener: () => void): this;
```
...missing the { limit } and { state } parameters.
At first... I thought maybe this was a breaking API change but didn't find anything in the release notes.
In the docs, the `speed-limit-change` and `thermal-state-change` are still called out as providing an argument `{limit: number}` and `{state: string}` respectively: https://www.electronjs.org/docs/latest/api/power-monitor#event-thermal-state-change-macos

### Testcase Gist URL
_No response_
### Additional Information
Looking through the commit history in https://github.com/electron/typescript-definitions, I wonder if this could be a regression from @MarshallOfSound 's change here: https://github.com/electron/typescript-definitions/pull/273 , as it seems the type for the `thermal-state-change` event changed from:
```
on(event: 'thermal-state-change', listener: Function): this;
```
to
```
on(event: 'thermal-state-change', listener: () => void): this;
``` | platform/macOS,documentation :notebook:,bug :beetle:,status/confirmed,32-x-y | low | Critical |
2,642,335,180 | flutter | dependabot is too chatty updating "github-actions" dependencies, stop updating patch versions | For low-traffic repos like `platform_tests`, most of the commits are dependabot updating the GitHub Actions dependencies.
Update [all the repositories](https://github.com/search?q=org%3Aflutter%20package-ecosystem%3A%20%22github-actions%22&type=code) to stop updating github-actions patch versions.
https://github.com/flutter/platform_tests/blob/6a78997eba85765ef06c925a0ccbd7e2ad6818da/.github/dependabot.yml#L22-L25
```diff
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
+ ignore:
+ - dependency-name: "*"
+ update-types: ["version-update:semver-patch"]
```
Example from packages:
https://github.com/flutter/packages/blob/01050136b26c5d60058b412b60b73cee4949d4ab/.github/dependabot.yml#L35-L37
And here's what the commit history looks like [platform_tests](https://github.com/flutter/platform_tests/commits/main/):

| team-infra,P2,triaged-infra | low | Minor |
2,642,353,252 | vscode | completions aren't provided when I've already typed a command | I think we should look at the last word on the line if possible and provide completions. Not sure if this is a bug or feature request 🤔 .
https://github.com/user-attachments/assets/7d1beeda-b596-4857-8785-6cc3fe100f81
| feature-request,terminal-suggest | low | Critical |
2,642,357,290 | PowerToys | Add an option in PowerToys to promote items from "Show More Options" to main context menu in Windows 11 | ### Description of the new feature / enhancement
In Windows 11, the right-click context menu often hides several useful options under a secondary "Show More Options" submenu. For advanced users, navigating to these frequently used options can be inefficient. I propose adding a feature to PowerToys that allows users to select specific items from the "Show More Options" submenu and promote them to the main context menu, making them accessible with fewer clicks. This would significantly streamline workflows by reducing the need to navigate through the extra layer.
### Scenario when this would be used?
This feature is ideal for users who frequently access options like “Properties,” “Edit with…” options, or other settings hidden under "Show More Options." By promoting these items, users save time and avoid additional clicks, which is particularly useful in workflows where quick access to specific settings or tools is essential.
What is the expected behavior of the proposed feature?
Users can use PowerToys to select specific menu items in the "Show More Options" submenu and move or pin them to the main context menu. This customization should allow quick and direct access to commonly used items, making the overall user experience faster and more efficient.
### Supporting information
As a Power User on Windows 11, my workflow often involves accessing context menu items under "Show More Options." Other users in forums and community feedback pages have expressed similar frustrations about the extra layer in the menu, which adds time to common actions. Enabling users to customize the context menu in this way would align with PowerToys’ mission to enhance productivity and usability for advanced Windows users. | Needs-Triage | low | Minor |
2,642,374,184 | godot | MultiplayerSynchronizer Causes Position Data Corruption When Using CharacterBody3D Physics | ### Tested versions
Reproducible in: 4.3.stable
### System information
Windows 10 - Godot v4.3.stable - Compatibility
### Issue description
When adding a new node to a scene and changing its authority, if that node is a CharacterBody3D node with a MultiplayerSynchronizer syncing its position across clients, the new node will default to the position of the last node of the same type spawned in the same way.
### Steps to reproduce
- Under `Debug->Custom Run Instances...` enable `Enable Multiple Instances` and set the instance count to 2.
- Launch the demo using `Run Project` (F5)
- In one window, press "Host Server" and use WASD or Arrow Keys to move the character around.
- In the other window, press "Connect to Server"
- If the nodes are using `move_and_slide`, the new player node will inexplicably teleport to the host player's position.
- To toggle the physics usage to demonstrate intended behavior, toggle comments on lines 13-15 and lines 17-18 in `player.gd`
I have also included the minimal reproduction repository on GitHub, with instructions there to reproduce the bug.
### Minimal reproduction project (MRP)
[multiplayer-sync-bug-demo.zip](https://github.com/user-attachments/files/17670275/multiplayer-sync-bug-demo.zip)
Can also be found at: https://github.com/code807/godot-multiplayer-bug-demo/tree/main | bug,topic:physics,topic:multiplayer | low | Critical |
2,642,411,302 | flutter | Write test case and automated test for touch within iOS platform views | In an attempt to fix https://github.com/flutter/flutter/issues/136244 this PR https://github.com/flutter/engine/pull/55724 caused a Very Bad regression which broke all touch within platform views (b/341930773#comment50) for a 1p customer and was therefore reverted. However, we were unable to reproduce this bug with vanilla open source Flutter, even though a workaround was found in the iOS embedder.
Screen recording of bug: go/flutter-touch-bug-screenrecording
- [ ] Figure out how to reproduce the bug outside of the customer app
- [ ] Write an automated test (tracked here https://github.com/flutter/flutter/issues/156753) | platform-ios,engine,P2,team-ios,triaged-ios | low | Critical |
2,642,450,072 | godot | Linux window title momentarily displays garbled text when opening | ### Tested versions
4.3
### System information
fedora40
### Issue description
When I open Godot in a Linux Chinese-language environment, the window title momentarily displays garbled text: 项ç®ç®¡çâ
### Steps to reproduce
none
### Minimal reproduction project (MRP)
none

| bug,platform:linuxbsd,topic:gui | low | Minor |
2,642,451,296 | storybook | [Bug]: Inconsistent Render Between Canvas and Docs | ### Describe the bug
The Docs view injects intrusive CSS into the page, affecting how components render.
(The iframe contains code coming from both Storybook and the user's component, creating CSS conflicts.)
The Canvas view does not have this problem, as its iframe is not shared between several components.
A good fix would probably be to create one iframe per component on the Docs page.
A video of the problem:
- First, in Canvas view, text color and background color are well applied.
- But then, in Docs view, background color is not applied, because some Storybook classes redefine it.
- I need to disable some CSS classes for my background color to be applied.
https://github.com/user-attachments/assets/fe0d4ff1-015e-4f54-862f-59c104e29211
### Reproduction link
https://stackblitz.com/edit/github-npfjjs?file=README.md
In case of a package installation error, you may need to:
1. Delete the `yarn.lock` file, and...
2. Execute the `$ yarn install && yarn storybook` command to launch Storybook.
### Reproduction steps
Same as the video:
1. Go to the above link (stackblitz)
2. Go to the Canvas page of the component
3. Toggle light/dark (it should work)
4. Go to the Docs page
5. Toggle light/dark (only the text will change, but the background color stays light in dark mode)
### System
Storybook Environment Info:
System:
OS: macOS 14.6.1
CPU: (8) arm64 Apple M1
Shell: 5.9 - /bin/zsh
Binaries:
Node: 21.0.0 - ~/.nvm/versions/node/v21.0.0/bin/node
npm: 10.2.0 - ~/.nvm/versions/node/v21.0.0/bin/npm <----- active
Browsers:
Chrome: 130.0.6723.117
Safari: 17.6
npmPackages:
@storybook/addon-a11y: ^8.4.1 => 8.4.1
@storybook/addon-essentials: ^8.4.1 => 8.4.1
@storybook/addon-interactions: ^8.4.1 => 8.4.1
@storybook/addon-onboarding: ^8.4.1 => 8.4.1
@storybook/addon-themes: ^8.4.1 => 8.4.1
@storybook/blocks: ^8.4.1 => 8.4.1
@storybook/test: ^8.4.1 => 8.4.1
@storybook/vue3: ^8.4.1 => 8.4.1
@storybook/vue3-vite: ^8.4.1 => 8.4.1
eslint-plugin-storybook: ^0.10.2 => 0.10.2
storybook: ^8.4.1 => 8.4.1
### Additional context
_No response_ | bug,has workaround,addon: docs | low | Critical |
2,642,483,810 | react | [Compiler Optimization]: avoid memoizing `useState` lazy initializer functions | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAMygOzgFwJYSYAEAYhBABQAOMEVYyRwmAhgLYLJg4x6YDmAXwCUTIgB1iROIW5EA2txY4EAXSIBeIgCUELXADooYBAGUcyhBQqiNAPiaSizojAQ5YxTAgDuRACosANYIYAAyhPz+eBz+EADCEGxUUCrUtPQGrBzCTkQiANx5bh4wxEoqWewIRZiCkpJwADYsYGABwaERAtGxCUkpKo5S2ZzcvAK1zjKY41C4EDAUo6LAec5oi0QUTe5EeJpEAAwF+wDUZ6cHADxEAIwIAGyr+etEOAAWeGBVHIejU1edRAgiAA
### Repro steps
1. Call `useState` with a lazy state initializer that consumes props
2. Observe that the function is cached in the compiled code even though it is only ever called once
However, I'm not 100% sure this is valid React code as the docs state:
> If you pass a function as initialState, it will be treated as an initializer function. It should be pure, should take no arguments, and should return a value of any type. React will call your initializer function when initializing the component, and store its return value as the initial state. [See an example below.](https://19.react.dev/reference/react/useState#avoiding-recreating-the-initial-state)
But this does seem like a valid use-case.
### How often does this bug happen?
Every time
### What version of React are you using?
version in playground
### What version of React Compiler are you using?
version in playground | Component: Optimizing Compiler | low | Critical |
2,642,487,100 | godot | Two Identical Meshes But Only One Works with Skeleton | ### Tested versions
4.3-stable
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz (6 Threads)
### Issue description
I have two absolutely identical meshes, both exported from Blender; one was exported with the Armature and the other was exported alone.
I **already** have the skeleton for this mesh in my Godot scene, so I only need to swap between meshes, but for some reason it only works if I export the whole armature + mesh from Blender, not just the mesh, which is all I need. It seems like the vertex groups are getting lost somewhere or not being loaded properly.
Exported w/Armature:

Exported only Mesh:

Quick video:
https://github.com/user-attachments/assets/fd84bac0-77a3-4baa-a030-147b67e45573
### Steps to reproduce
1. Create an Armature in blender and animate a simple mesh
2. Import the full model to Godot (mesh + armature) everything works fine
3. Go back to blender and create/modify the mesh (just the mesh)
4. Import only the mesh into godot
5. Go into the skeleton and change the mesh for the new one
6. It doesn't work
### Minimal reproduction project (MRP)
[bug-report-meshes-anim.zip](https://github.com/user-attachments/files/17670795/bug-report-meshes-anim.zip)
| discussion,documentation,needs testing,topic:import | low | Critical |
2,642,498,386 | pytorch | DTensor support for fused qkv matmul | ### 🚀 The feature, motivation and pitch
For transformer architectures (for example https://github.com/pytorch-labs/gpt-fast/blob/main/model.py#L195-L211) it tends to be most performant to merge the qkv matrices together. If you try to shard this concatenated tensor, then the subsequent SDPA op won't be sharded correctly, since you need each column of q sharded with the corresponding columns of k and v [q1,k1,v1,...], but by default the sharding will be [q1, q2, q3, ...]. When not using DTensor this is relatively easy to get to work: https://github.com/pytorch-labs/gpt-fast/blob/main/tp.py#L73
But for DTensor the way to enable this is really unclear. Is there a way to handle this type of operation with DTensor parallelization, or should we just stick to normal tensor parallel support and figure out how to get it to work with our APIs?
This is currently blocking tensor parallel support in torchAO, so I wanted to centralize discussion in a single location.
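The per-matrix sharding that the gpt-fast `tp.py` workaround performs can be illustrated with a minimal pure-Python sketch (plain lists rather than DTensor; the function name and shapes are my own, for illustration only): instead of a contiguous split of the fused weight, q, k, and v are each split separately and each rank keeps its matching slices.

```python
def shard_fused_qkv(w_qkv, world_size, rank):
    """Shard a fused qkv weight (a list of 3*d rows) for tensor parallelism.

    A naive contiguous split would hand rank 0 nothing but q rows; instead,
    split q, k, and v individually and concatenate each rank's slices so
    every rank holds the matching q/k/v columns for its attention heads.
    """
    d = len(w_qkv) // 3
    q, k, v = w_qkv[:d], w_qkv[d:2 * d], w_qkv[2 * d:]
    chunk = d // world_size
    take = lambda m: m[rank * chunk:(rank + 1) * chunk]
    return take(q) + take(k) + take(v)

# With d=4 and 2 ranks, rank 0 gets rows [0, 1, 4, 5, 8, 9]:
print(shard_fused_qkv(list(range(12)), world_size=2, rank=0))
```

The open question above is whether this interleaved placement can be expressed as a DTensor sharding, or only via manual splits like this.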
### Alternatives
don't use DTensor for tensor parallel
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | oncall: distributed,module: dtensor | low | Minor |
2,642,498,952 | rust | Compiletest path normalization fails in windows if path is inside a string | In the initial version of https://github.com/rust-lang/rust/pull/132161, a Windows CI job because the stdout contained a string containing the test path that wasn't normalized.
I'm pasting the error as described in https://github.com/rust-lang/rust/pull/132161#issuecomment-2442931343:
```diff
---- [ui] tests\ui\stable-mir-print\operands.rs stdout ----
$DIR\operands.rs
$DIR\operands.rs
\a\rust\rust\tests\ui\stable-mir-print\operands.rs
$DIR\operands.rs
Saved the actual stdout to "C:\\a\\rust\\rust\\build\\x86_64-pc-windows-msvc\\test\\ui\\stable-mir-print\\operands\\operands.stdout"
226 debug x => _1;
227 debug z => _2;
228 bb0: {
- _0 = {closure@Span { id: 105, repr: "$DIR/operands.rs:44:5: 44:19" }}(_1, _2);
+ _0 = {closure@Span { id: 105, repr: "C:/a/rust/rust/tests/ui/stable-mir-print/operands.rs:44:5: 44:19" }}(_1, _2);
231 }
232 }
``` | A-testsuite,O-windows,T-bootstrap,C-bug,S-needs-repro,A-compiletest | low | Critical |
2,642,503,852 | flutter | FlutterPluginRegistrant.xcframework from release build missing _CodeSignature and dSYMs folders | ### Steps to reproduce
When I build ios-framework for a Flutter module project (template=module) that has package dependencies, the FlutterPluginRegistrant.xcframework release bundle is missing _CodeSignature and dSYMs for the "ios-arm64" platform.
Command: `flutter build ios-framework --release --no-debug --no-profile`

When I use the module frameworks in a native iOS app, it crashes at the line below:
`GeneratedPluginRegistrant.register(with: self.flutterEngine);`
I suspect it has to do with the missing code signature; since there are no dSYMs, I can't see a detailed crash report.
### Expected results
Flutter module release build should create _CodeSignature and dSYMs for the ios-arm64 platform in the FlutterPluginRegistrant.xcframework bundle
### Actual results
Flutter module release build is missing _CodeSignature and dSYMs for the ios-arm64 platform in the FlutterPluginRegistrant.xcframework bundle
### Code sample
<details open><summary>pubspec.yaml has below dependencies:</summary>
```dart
environment:
sdk: ^3.5.3
dependencies:
amplify_flutter: ^2.0.0
amplify_auth_cognito: ^2.0.0
flutter:
sdk: flutter
cupertino_icons: ^1.0.8
google_fonts: ^6.2.1
flutter_localizations:
sdk: flutter
intl: any
provider: ^6.1.2
change_case: ^2.1.0
flutter_launcher_icons: ^0.14.1
dev_dependencies:
flutter_test:
sdk: flutter
flutter_lints: ^5.0.0
mockito: ^5.0.0
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
╰─ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.4, on macOS 15.0.1 24A348 darwin-x64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2022.3.3)
[✓] IntelliJ IDEA Community Edition (version 2021.3.3)
[✓] VS Code (version 1.95.0)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-ios,engine,a: existing-apps,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | low | Critical |