id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,537,622,316 | rust | Inefficient codegen in builder pattern with existing copyable base value | First found in https://github.com/rust-lang/rust/issues/128081#issuecomment-2249215141
In builder-pattern code, when building from an existing copyable base value, the generated code is not as efficient as a partial copy.
This can be reproduced on both argument value and constant value.
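As a minimal sketch (my own hypothetical example, not the reporter's code), the pattern in question looks like this: a `Copy` struct built from an existing base value through by-value setters, where the hoped-for codegen is a copy of the base plus a single field store:

```rust
// Hypothetical minimal builder-pattern sketch. The struct and setter
// names are illustrative, not taken from the linked godbolt examples.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Config {
    pub a: u32,
    pub b: u32,
    pub c: u32,
    pub d: u32,
}

impl Config {
    // By-value setter: takes the whole struct, patches one field,
    // and returns it.
    pub fn with_b(mut self, b: u32) -> Self {
        self.b = b;
        self
    }
}

pub fn build_from(base: Config) -> Config {
    // Ideally this lowers to "copy `base`, then store one field";
    // the report is that the struct gets reassembled field by field.
    base.with_b(42)
}

fn main() {
    let base = Config { a: 1, b: 2, c: 3, d: 4 };
    let built = build_from(base);
    println!("{:?}", built);
}
```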
Full copy and partial copy:
C version: https://godbolt.org/z/z3fWsKhqE
Rust version: https://godbolt.org/z/MeGf8E5Gd
Existing value vs. non-existing value:
C version: https://godbolt.org/z/9z3x3Kv5c
Rust version: https://godbolt.org/z/vWEaEfd3e
Real use-case example in Rust: https://godbolt.org/z/3q9jn5c1o | A-LLVM,P-medium,T-compiler,C-bug,regression-untriaged | low | Minor |
2,537,638,170 | material-ui | CSS Vars Theme not applying correct styles when using nested/forced color schemes | ### Steps to reproduce
Link to live example:
- Github: https://github.com/rossipedia/mui-css-vars
- Stackblitz: https://stackblitz.com/~/rossipedia/mui-css-vars
Steps:
Use the toggle button group to switch between light and dark mode
### Current behavior
The button labeled "LIGHT" switches its text color from black to white.
### Expected behavior
The "LIGHT" button should not change its text color when switching themes.
### Context
The [docs](https://mui.com/material-ui/customization/css-theme-variables/configuration/#forcing-a-specific-color-scheme) say you should be able to force a specific color scheme by applying the configured selector to a parent element:
> To force a specific color scheme for some part of your application, set the selector to the component or HTML element directly.
### Your environment
Browser: Arc (Chromium)
<img width="396" alt="image" src="https://github.com/user-attachments/assets/9fb64864-c1c7-413f-a9dc-5af430e690bf">
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 15.0
Binaries:
Node: 22.8.0 - ~/.local/state/fnm_multishells/34387_1726794020155/bin/node
npm: 10.8.2 - ~/.local/state/fnm_multishells/34387_1726794020155/bin/npm
pnpm: 9.6.0 - ~/.local/state/fnm_multishells/34387_1726794020155/bin/pnpm
Browsers:
Chrome: 128.0.6613.138
Edge: 129.0.2792.52
Safari: 18.0
npmPackages:
@emotion/react: ^11.13.3 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/material: 6.1.1 => 6.1.1
@types/react: ^18.3.8 => 18.3.8
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.6.2 => 5.6.2
```
</details>
**Search keywords**: theme cssvars | bug 🐛,package: material-ui,customization: theme | low | Major |
2,537,638,664 | PowerToys | FancyZones not working with Chrome 128.0.6613.138 | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
Drag Chrome windows while holding the Shift key.
No FancyZones overlay appears, and Chrome windows cannot be snapped.
### ✔️ Expected Behavior
Drag Chrome windows while holding the Shift key.
The FancyZones overlay appears and Chrome windows snap into zones.
### ❌ Actual Behavior
Drag Chrome windows while holding the Shift key.
No FancyZones overlay appears, and Chrome windows cannot be snapped.
I can snap other applications, e.g. Outlook and Teams, with FancyZones.
### Other Software
Windows 11 23H2 | Issue-Bug,Needs-Triage | low | Minor |
2,537,641,050 | ollama | `temperature` for reader-lm should be 0 | [reader-lm](https://ollama.com/library/reader-lm) converts HTML to Markdown but with the default temperature, it hallucinates content: https://github.com/ollama/ollama/issues/6875. Setting `temperature` to zero appears to resolve this. This would be nice to have in the model config in the ollama library. | model request | low | Minor |
2,537,645,517 | ollama | an unknown error was encountered while running the model | ### What is the issue?
The API returns an error message that does not say what is wrong.
Reproduce:
install model llava:13b-v1.5-q4_0
save this file:
[req-data.txt](https://github.com/user-attachments/files/17068575/req-data.txt)
run this command:
```curl -d "`cat req-data.txt`" http://localhost:11434/api/generate```
API responds with:
{"error":"an unknown error was encountered while running the model "}
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10 | bug | low | Critical |
2,537,657,691 | godot | GPUParticles2D issue | ### Tested versions
Godot Engine v4.3.stable.official.77dcf97d8
### System information
Windows 10 OpenGL API 3.3.0 - Build 31.0.101.4887 - Compatibility - Using Device: Intel - Intel(R) UHD Graphics
### Issue description
If you set Emitting to true and One Shot to true in the editor, the emitter runs once and then sets Emitting back to false when it finishes. As a result, the particle emitter does not run when the emitter's scene is instantiated into the game.
### Steps to reproduce
1. Create a scene with a GPUParticles2D node
2. Set Emitting to true and One Shot to true
3. Instantiate the explosion scene into the main game
### Minimal reproduction project (MRP)
NA | needs testing,topic:particles | low | Major |
2,537,664,661 | vscode | Cannot move multi-cell selection within Jupyter notebook using alt+up/down arrows | Type: <b>Bug</b>
Steps to reproduce:
1. Select a Jupyter notebook cell.
2. Hold down Shift key and press up or down arrow so that at least one other adjacent cell is included in the selection. Release all keys.
3. Hold down the alt key and press up or down arrow.
What should happen (and did happen in previous versions of VS Code):
The entire group of cells should move up or down within the notebook.
What actually happens:
Only the cell most recently added to the selection moves up or down within the notebook.
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz (12 x 2712)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.59GB (16.79GB free)|
|Process Argv|--crash-reporter-id 6da36b66-3389-48e8-a214-5315ee104636|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (32)</summary>
Extension|Author (truncated)|Version
---|---|---
gitlens|eam|15.5.1
copilot|Git|1.230.0
copilot-chat|Git|0.20.1
gitlab-workflow|Git|5.11.0
git-graph|mhu|1.30.0
vscode-docker|ms-|1.29.2
vscode-dotnet-runtime|ms-|2.1.5
data-workspace-vscode|ms-|0.5.0
mssql|ms-|1.24.0
sql-bindings-vscode|ms-|0.4.0
sql-database-projects-vscode|ms-|1.4.3
autopep8|ms-|2024.0.0
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.1
datawrangler|ms-|1.8.1
jupyter|ms-|2024.8.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-ai|ms-|1.0.0
vscode-ai-remote|ms-|1.0.0
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-powertoys|ms-|0.1.1
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.384.0
remote-ssh|ms-|0.114.3
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.3
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
vscode-speech|ms-|0.10.0
vscode-yaml|red|1.15.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
nativeloc1:31134641
wkspc-ranged-t:31125599
ei213698:31121563
iacca1:31138162
```
</details>
<!-- generated by issue reporter --> | feature-request,notebook-commands | low | Critical |
2,537,695,461 | godot | Reduced tween performance when FileDialog in tree | ### Tested versions
- Reproducible in 4.3.stable, 4.1.stable
### System information
Arch Linux, Ryzen 5 5600X
### Issue description
I'm currently writing a program that uses Godot as a UI toolkit, and it has quite extensive dynamic theming. As a result, there is/was a large use of tweens between theme overrides to smoothly transition between colours on UI objects:
```gdscript
tween.tween_method(func(x): $Install.add_theme_color_override("font_color",x),base_theme.get_color("font_color","ImportantButton"),Color(pallete["light"]),0.2)
tween.tween_method(func(x): $Verify.add_theme_color_override("font_color",x),base_theme.get_color("font_color","Button"),Color(pallete["lightfg"]),0.2)
tween.tween_method(func(x): $Verify.add_theme_color_override("font_hover_color",x),base_theme.get_color("font_hover_color","Button"),Color(pallete["lightfg"]),0.2)
...
tween.tween_property($BottomPanel.get_theme_stylebox("panel"),"bg_color", Color(pallete["main"]), 0.2)
tween.tween_property($TopPanel.get_theme_stylebox("panel"),"bg_color", Color(pallete["dark"]), 0.2)
tween.tween_property($SidePanel.get_theme_stylebox("panel"),"bg_color", Color(pallete["secondary"]), 0.2)
tween.tween_property($Install.get_theme_stylebox("normal"),"bg_color", Color(pallete["accent"]), 0.2)
```
This worked fine and was generally performant; however, when I added a `FileDialog` node (not doing anything with it yet, just its presence in the tree), performance dropped drastically. The built-in profilers weren't much help: they reported >238 ms with the `FileDialog` present versus >100 ms without it. What I found worked was the following:
```gdscript
func test(x):
print("%s,%s" % [x,Time.get_ticks_msec()])
counter += 1
...
tween.tween_method(test,0.0,1.0,0.2)
```
Since all of the tweens were consistently slow, this measures the difference in performance more objectively: the more values printed before 1, the smoother the transition should be in theory.
Main project without ``FileDialog``:
```
0.05208333581686,11222
0.16564667224884,11253
0.36402332782745,11285
0.51945328712463,11316
0.67405831813812,11346
0.8277433514595,11377
0.98136335611343,11408
1,11445
```
Main project with ``FileDialog``:
```
0.10600939393044,5991
0.66442108154297,6094
1,6194
```
The number of values passed has decreased by more than half, which leads to stuttery transitions.
### Steps to reproduce
One of the issues with this bug is that it's more obvious the more complex a program is. An MRP is, ideally, the exact opposite of a complex program. What I've attached is my attempt to get as small as possible while still making the issue visible. The performance loss is not as pronounced, but it's still measurable with the print technique described above.
1) Open the project
2) Run it. Use the two mandrill buttons on the side to switch between themes, and look at the console output.
3) Close it and add a FileDialog node.
4) Run it again and notice how there's less iterations, and that the smoothness in the logo and colour transitions is worse.
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/17069066/mrp.zip)
| bug,confirmed,needs testing,topic:gui,topic:animation,performance | low | Critical |
2,537,721,966 | ant-design | Changing the locale of ConfigProvider makes the children components fully re-render | ### Reproduction link
[StackBlitz reproduction](https://stackblitz.com/edit/react-bdjzlp-guzng3?file=demo.tsx)
### Steps to reproduce
Change the locale of ConfigProvider between undefined and another language.
### What is expected?
The children components should not fully re-render.
### What is actually happening?
The children components fully re-render.
| Environment | Info |
| --- | --- |
| antd | 5.20.6 |
| React | 18.3.1 |
| System | windows 11 |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->

antd's default language is English, so I set the locale to undefined when the language changes to English, which causes this problem | Inactive,unconfirmed | low | Major |
2,537,729,656 | rust | Is "must_use" really needed for Option::insert ? | Related to https://github.com/rust-lang/rust/pull/87196
I personally think `insert` is a more succinct and readable alternative to `= Some(x)`; this seems like it should be a style preference, and I don't get why it's marked as `must_use`.
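For illustration (my own example, not from the linked PR): the value that `must_use` flags is the mutable reference `insert` returns into the option, which a caller may legitimately want or ignore:

```rust
// `Option::insert` stores the value and returns `&mut` to it, so the
// caller can keep mutating in place through the returned reference.
fn bump_via_insert(mut opt: Option<i32>) -> Option<i32> {
    let val = opt.insert(1); // val: &mut i32 pointing into `opt`
    *val += 1;               // mutate through the returned reference
    opt
}

fn main() {
    // Using the returned reference: `insert` does real work here.
    assert_eq!(bump_via_insert(None), Some(2));

    // Ignoring the returned reference: behaviorally the same as the
    // `opt = Some(1)` form the thread compares it with.
    let mut opt: Option<i32> = None;
    let _ = opt.insert(1);
    assert_eq!(opt, Some(1));
    println!("ok");
}
```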
What's weirder is that `Option::replace` is not marked as `must_use`, so this seems inconsistent to me. | T-lang,T-libs-api,C-discussion | low | Minor |
2,537,737,722 | deno | Crash while debugging with chrome after rebuild of code | Version: Deno 1.46.1
**Crash while debugging with chrome after rebuild of code** (see stack trace below)
**Repro steps**
1. Start deno app with command: `deno serve --inspect --watch --allow-env --allow-read --allow-net --allow-ffi --unstable-ffi --port 9092 src/main.ts`.
2. Use chrome `128.0.6613.119 (Official Build) (64-bit)` as debugger. Click the Inspect link in the chrome://inspect/#devices page for the deno remote client to open a DevTools window.
3. Add a break point and step through the code. All works.
4. Make a change to the code so that the deno process restarts.
5. Close the chrome DevTools window and open a new one by clicking the Inspect link again.
6. Step through the code again.
**Expected result**
Debug app.
**Actual result**
deno crashes with the following stack trace:
```
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux x86_64
Version: 1.46.1
Args: ["deno", "serve", "--inspect", "--watch", "--allow-env", "--allow-read", "--allow-net", "--allow-ffi", "--unstable-ffi", "--port", "9092", "src/main.ts"]
thread 'main' panicked at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/deno_core-0.306.0/inspector.rs:367:16:
internal error: entered unreachable code
stack backtrace:
0: 0x5d0e11983635 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hd736fd5964392270
1: 0x5d0e119b5feb - core::fmt::write::hc6043626647b98ea
2: 0x5d0e1197d69f - std::io::Write::write_fmt::h0d24b3e0473045db
3: 0x5d0e1198340e - std::sys_common::backtrace::print::h45eb8174d25a1e76
4: 0x5d0e11984e79 - std::panicking::default_hook::{{closure}}::haf3f0170eb4f3b53
5: 0x5d0e11984c1a - std::panicking::default_hook::hb5d3b27aa9f6dcda
6: 0x5d0e11f7578c - deno::setup_panic_hook::{{closure}}::hcc1c21dbb087ca94
7: 0x5d0e119854eb - std::panicking::rust_panic_with_hook::h6b49d59f86ee588c
8: 0x5d0e1198522b - std::panicking::begin_panic_handler::{{closure}}::hd4c2f7ed79b82b70
9: 0x5d0e11983af9 - std::sys_common::backtrace::__rust_end_short_backtrace::h2946d6d32d7ea1ad
10: 0x5d0e11984f97 - rust_begin_unwind
11: 0x5d0e119b2f33 - core::panicking::panic_fmt::ha02418e5cd774672
12: 0x5d0e119b2fdc - core::panicking::panic::h6c780fb115b2371d
13: 0x5d0e13b065e5 - deno_core::inspector::JsRuntimeInspector::poll_sessions::h037919d508aeac48
14: 0x5d0e13c2bbd6 - v8_inspector__V8InspectorClient__BASE__runMessageLoopOnPause
15: 0x5d0e10f0b678 - _ZN12v8_inspector10V8Debugger18handleProgramBreakEN2v85LocalINS1_7ContextEEENS2_INS1_5ValueEEERKNSt4__Cr6vectorIiNS7_9allocatorIiEEEENS1_4base7EnumSetINS1_5debug11BreakReasonEiEENSG_13ExceptionTypeEb
16: 0x5d0e10f0b9af - _ZN12v8_inspector10V8Debugger21BreakProgramRequestedEN2v85LocalINS1_7ContextEEERKNSt4__Cr6vectorIiNS5_9allocatorIiEEEENS1_4base7EnumSetINS1_5debug11BreakReasonEiEE
17: 0x5d0e1044e645 - _ZN2v88internal5Debug12OnDebugBreakENS0_6HandleINS0_10FixedArrayEEENS0_10StepActionENS_4base7EnumSetINS_5debug11BreakReasonEiEE
18: 0x5d0e1044da20 - _ZN2v88internal5Debug5BreakEPNS0_15JavaScriptFrameENS0_6HandleINS0_10JSFunctionEEE
19: 0x5d0e10968224 - _ZN2v88internal28Runtime_DebugBreakOnBytecodeEiPmPNS0_7IsolateE
20: 0x5d0e118232b6 - Builtins_CEntry_Return2_ArgvOnStack_NoBuiltinExit
```
The code that causes the crash is related to String and regex:
```
const matches = content.matchAll(/\${{.*?}}/g) // `content` is a String
for (const m of [...matches].reverse()) { // <--- crashes when stepping over this line
```
The command line args `--allow-ffi --unstable-ffi` are used here because the dependency `jsr:@db/sqlite@0.12.0` requires it. | bug,needs info | low | Critical |
2,537,745,017 | pytorch | [Intel GPU] Lower FLOPs and Bandwidth on Arc 770 | ### 🐛 Describe the bug
Just installed pytorch nightly on my Arc 770 using
```
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
```
Then I ran benchmarks from here: https://github.com/chsasank/device-benchmarks/blob/main/benchmark.py. Note this just runs matrix multiplications and data transfers, and uses the timings to measure FLOPs and bandwidth. Here are the results from the latest nightly build:
```
$ python benchmark.py --device xpu
benchmarking xpu using torch.float32
size, elapsed_time, tops
256, 0.0069244384765625, 0.004845798271379462
304, 0.002539801597595215, 0.022123353278146574
362, 0.004217314720153809, 0.02249674551121473
430, 0.008111000061035156, 0.019604734163903587
512, 0.00036540031433105467, 0.7346338945860786
608, 0.0036220550537109375, 0.12410397338921117
724, 0.003701186180114746, 0.20507124231628598
861, 0.0007224798202514649, 1.7669071525841162
1024, 0.0007423639297485351, 2.892763996127113
1217, 0.008987903594970703, 0.40109137663839733
1448, 0.0017016172409057618, 3.56840224583519
1722, 0.002590465545654297, 3.942317670710634
2048, 0.014739584922790528, 1.1655599037552458
2435, 0.013725566864013671, 2.10376198200649
2896, 0.010750722885131837, 4.518434601191407
3444, 0.01745932102203369, 4.679420503517582
4096, 0.03862507343292236, 3.55828329260477
4870, 0.05322179794311523, 4.340375840870714
5792, 0.08510377407073974, 4.5663251767539625
6888, 0.14098265171051025, 4.636003296959369
size (GB), elapsed_time, bandwidth (GB/s)
0.004194304, 0.00010600090026855469, 79.1371392011516
0.00593164, 0.00011386871337890626, 104.18384161876047
0.008388608, 0.000136566162109375, 122.85046120402235
0.01186328, 0.00016207695007324218, 146.39071125954695
0.016777216, 0.00019686222076416017, 170.4462738952743
0.023726564, 0.0002469778060913086, 192.13519121817936
0.033554432, 0.00031595230102539064, 212.40188402554782
0.047453132, 0.00041806697845458984, 227.01210306259253
0.067108864, 0.0005616903305053711, 238.95324649658812
0.094906264, 0.0007654428482055664, 247.9774008536091
0.134217728, 0.0010532855987548829, 254.85533678360812
0.189812528, 0.0014489173889160156, 262.00600455489763
0.268435456, 0.001987719535827637, 270.0938952015986
0.37962506, 0.002787351608276367, 272.39122532858437
```
This is significantly lower than my own benchmarks on IPEX from a few months ago (I used to get 15 fp32 TFLOPS and 512 GB/s).
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20240919+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-triton-xpu==3.1.0+91b14bf559
[pip3] torch==2.6.0.dev20240919+xpu
[pip3] torchaudio==2.5.0.dev20240919+xpu
[pip3] torchvision==0.20.0.dev20240919+xpu
[pip3] triton==2.2.0
[conda] pytorch-triton-xpu 3.1.0+91b14bf559 pypi_0 pypi
[conda] torch 2.6.0.dev20240919+xpu pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240919+xpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20240919+xpu pypi_0 pypi
cc @msaroufim @gujinghui @EikanWang @fengyuan14 @guangyey | module: performance,triaged,module: xpu | low | Critical |
2,537,771,432 | ui | [feat]: Swiper Slider Component | ### Feature description
I'd like an optimized Swiper slider component with good performance on mobile, because I've experienced performance issues with Swiper/React on mobile devices.
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,537,780,942 | deno | require('.') does not work | deno version: 2.0.0-rc.4
When using the Prisma client, an error is thrown: `error: Could not load '.'`
`require('.')` should work as `require('./index.js')` | bug,node compat | low | Critical |
2,537,788,579 | pytorch | Composite Compliance tensor unit-test check fails on privateuse1 device | ### 🐛 Describe the bug
I ran the Composite Compliance tensor unit-test check on a privateuse1 device; it fails and returns an unknown format. I found that the tensor is constructed in python_variable.cpp's THPVariable_make_subclass, where the tensor cannot be constructed based on the device.
### Versions
pytorch 2.1+
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens | needs reproduction,triaged,module: PrivateUse1 | low | Critical |
2,537,789,548 | rust | `Send` bound of trait object make unexpected async block is not general enough error | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=4f2133cbf38d27366a8edd106b5485e5
I expected this to happen: same as the code below, which compiles:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=69e324c356bd126e575a971a7cbbf2fa
Instead, this happened:
```
error: implementation of `From` is not general enough
--> src/lib.rs:44:9
|
44 | / Box::pin(async move {
45 | | let response = self.send_request(request).await;
46 | | match response {
47 | | Ok(response) => todo!(),
... |
51 | | }
52 | | })
| |__________^ implementation of `From` is not general enough
|
= note: `Box<(dyn std::error::Error + Send + Sync + 'static)>` must implement `From<Box<(dyn std::error::Error + Send + Sync + '0)>>`, for any lifetime `'0`...
= note: ...but it actually implements `From<Box<(dyn std::error::Error + Send + Sync + 'static)>>`
error: could not compile `playground` (lib) due to 1 previous error; 1 warning emitted
```
| A-diagnostics,A-trait-system,C-bug,A-async-await | low | Critical |
2,537,864,491 | deno | `using` in a `for... of` loop fails to call `Symbol.dispose` method (+ same problem for `async` versions) | Version: Deno 1.46.3
I initially thought this was a bug in TS, but compiling the TS via the playground and running it in Chrome works correctly, as does running it in Bun.
Using `using` within a `for... of` loop fails to call the `Symbol.dispose` method. The same problem occurs with `await using` and `Symbol.asyncDispose`:
```ts
class Disposable {
disposed = false;
[Symbol.dispose]() {
this.disposed = true
}
}
const disposables = [new Disposable()]
for (using _ of disposables) {/* ... */}
if (disposables[0]!.disposed) {
console.log("✅ dispose ok")
} else {
console.error("💥 failed to dispose")
}
class AsyncDisposable {
disposed = false;
[Symbol.asyncDispose]() {
this.disposed = true
}
}
const asyncDisposables = [new AsyncDisposable()]
for (await using _ of asyncDisposables) {/* ... */}
if (asyncDisposables[0]!.disposed) {
console.log("✅ async dispose ok")
} else {
console.error("💥 failed to async dispose")
}
```
Keywords: using, using using, using await using, awaiting using await using | bug,upstream,swc | low | Critical |
2,537,924,477 | pytorch | [channels_last] Segmentation fault with aten.convolution | ### 🐛 Describe the bug
A segmentation fault is observed with aten.convolution for inputs in channels_last format, in both eager and compile mode.
The example works fine if channels_last format is not enabled.
### Error logs
```
Internal Error: Received signal - Segmentation fault
Segmentation fault (core dumped)
```
### Minified repro
```
import torch
def compute(input, input2):
clone = torch.ops.aten.clone.default(input2, memory_format = torch.channels_last)
convolution = torch.ops.aten.convolution.default(clone, input, None, [2, 13], [1, 14], [1, 1], False, [0, 0], 67)
convolution_backward = torch.ops.aten.convolution_backward.default(convolution, clone, input, [0], [2, 13], [1, 14], [1, 1], False, [0, 0], 67, [True, True, False])
print(convolution_backward)
input = torch.randn(134, 67, 2, 1)
input2 = torch.randn(18, 4489, 23, 12)
input = input.contiguous(memory_format=torch.channels_last)
compute(input, input2)
```
### Versions
```
PyTorch version: 2.4.0a0+git9cc5232
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.28.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu | high priority,module: crash,module: cpu,module: convolution,triaged,oncall: cpu inductor | low | Critical |
2,537,974,528 | excalidraw | No Text Preview while writing | No doubt the Excalidraw platform is great, but please enable the text preview for black, as it is the most-used and classic color to write in. While typing, I cannot see what I am typing, and if I make a typo the only way to notice and correct it is to look after writing. This happens very often. Please render the text preview in black in real time when the black color is selected. | bug | low | Minor |
2,537,986,940 | flutter | Pop scope not working with the go_router initial routes | ### Steps to reproduce
Create two pages, with one as the initial route.
Add a PopScope to both pages, set canPop to false, and show a confirmation dialog.
On the second page, the device back gesture works as expected, but on the initial route the app closes without showing any dialog. It doesn't even reach onPopInvokedWithResult.
The issue appeared after version 12 of go_router.
### Expected results
The dialog should also be shown when the route is the initial one.
### Actual results
The app closes without any dialog.
### Code sample
```
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerDelegate: _router.routerDelegate,
routeInformationParser: _router.routeInformationParser,
);
}
final GoRouter _router = GoRouter(
initialLocation: '/',
routes: [
GoRoute(
path: '/',
builder: (context, state) => const FirstPage(),
),
GoRoute(
path: '/second',
builder: (context, state) => const SecondPage(),
),
],
);
}
class FirstPage extends StatelessWidget {
const FirstPage({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return PopScope(
canPop: false, // Prevents popping
onPopInvokedWithResult: (bool didPop, Object? result) async {
if (didPop) {
return;
}
final shouldExit = await _showExitDialog(context);
if (shouldExit == true) {
// Exit the app here, e.g. SystemNavigator.pop().
}
},
child: Scaffold(
appBar: AppBar(title: const Text('First Page')),
body: Center(
child: ElevatedButton(
onPressed: () {
context.go('/second');
},
child: const Text('Go to Second Page'),
),
),
),
);
}
Future<bool?> _showExitDialog(BuildContext context) {
return showDialog<bool>(
context: context,
builder: (context) {
return AlertDialog(
title: const Text("Exit Confirmation"),
content: const Text("Do you really want to exit?"),
actions: [
TextButton(
onPressed: () => Navigator.of(context).pop(false),
child: const Text("No"),
),
TextButton(
onPressed: () => Navigator.of(context).pop(true),
child: const Text("Yes"),
),
],
);
},
);
}
}
class SecondPage extends StatelessWidget {
const SecondPage({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return PopScope(
canPop: false,
onPopInvokedWithResult: (bool didPop, Object? object) async {
if (didPop) {
return;
}
final shouldExit = await _showExitDialog(context);
        if (shouldExit ?? false) {
          // do stuff like popping the screen or calling other functions
        }
},
child: Scaffold(
appBar: AppBar(title: const Text('Second Page')),
body: Center(
child: ElevatedButton(
onPressed: () {
context.go('/');
},
child: const Text('Go back to First Page'),
),
),
),
);
}
Future<bool?> _showExitDialog(BuildContext context) {
return showDialog<bool>(
context: context,
builder: (context) {
return AlertDialog(
title: const Text("Exit Confirmation"),
content: const Text("Do you really want to exit?"),
actions: [
TextButton(
onPressed: () => Navigator.of(context).pop(false),
child: const Text("No"),
),
TextButton(
onPressed: () => Navigator.of(context).pop(true),
child: const Text("Yes"),
),
],
);
},
);
}
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4169], locale en-IN)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[X] Visual Studio - develop Windows apps
X Visual Studio not installed; this is necessary to develop Windows apps.
Download at https://visualstudio.microsoft.com/downloads/.
Please install the "Desktop development with C++" workload, including all of its default components
[√] Android Studio (version 2024.1)
[√] VS Code (version 1.93.0)
[√] Connected device (4 available)
[√] Network resources
| package,has reproducible steps,P1,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.26 | medium | Major |
2,538,004,382 | transformers | Kolmogorov–Arnold Transformer | ### Model description
The Kolmogorov–Arnold Transformer (KAT) replaces the standard MLP layers in transformers with Kolmogorov-Arnold Network (KAN) layers, improving the model's expressiveness and overall performance.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The code is at https://github.com/Adamdad/kat and paper at https://arxiv.org/abs/2409.10594 | New model,Feature request | low | Major |
2,538,032,802 | ollama | "/show parameters" command causes crashes when running Qwen 2.5 models, on version 0.3.11 | ### What is the issue?
This only happens after changing the parameters through /set parameter command.
Here's an example:
```
PS H:\ztmp> ollama run qwen2.5
>>> /show parameters
No parameters were specified for this model.
>>> /set parameter top_k 1
Set parameter 'top_k' to '1'
>>> /show parameters
error: couldn't get model
Error: something went wrong, please see the ollama server logs for details
PS H:\ztmp>
```
This is on windows 11.
Here's the error message from the ollama serve terminal tab:
```
2024/09/20 08:47:43 [Recovery] 2024/09/20 - 08:47:43 panic recovered:
assignment to entry in nil map
runtime/map_faststr.go:205 (0x7a93ba)
github.com/ollama/ollama/server/routes.go:807 (0x12cc57e)
github.com/ollama/ollama/server/routes.go:732 (0x12cb497)
github.com/gin-gonic/gin@v1.10.0/context.go:185 (0x1287cca)
github.com/ollama/ollama/server/routes.go:1076 (0x12d0c14)
github.com/gin-gonic/gin@v1.10.0/context.go:185 (0x1295d39)
github.com/gin-gonic/gin@v1.10.0/recovery.go:102 (0x1295d27)
github.com/gin-gonic/gin@v1.10.0/context.go:185 (0x1294e64)
github.com/gin-gonic/gin@v1.10.0/logger.go:249 (0x1294e4b)
github.com/gin-gonic/gin@v1.10.0/context.go:185 (0x1294291)
github.com/gin-gonic/gin@v1.10.0/gin.go:633 (0x1293d00)
github.com/gin-gonic/gin@v1.10.0/gin.go:589 (0x1293831)
net/http/server.go:2688 (0xafaecc)
net/http/server.go:3142 (0xafc6cd)
net/http/server.go:2044 (0xaf79c7)
runtime/asm_amd64.s:1695 (0x8026a0)
```
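The panic class in the trace (`assignment to entry in nil map`) is a well-known Go failure mode: writing a key into a map field that was declared but never allocated. A minimal, hypothetical sketch of the pattern and the usual guard — the `Model`/`setParam` names here are illustrative, not ollama's actual types:

```go
package main

import "fmt"

// Model mimics a struct whose Options map starts out nil
// (hypothetical names; not ollama's actual code).
type Model struct {
	Options map[string]any
}

// setParam guards against the nil map: writing to a nil map
// panics with "assignment to entry in nil map", so it is
// allocated with make before the first write.
func setParam(m *Model, key string, val any) {
	if m.Options == nil {
		m.Options = make(map[string]any)
	}
	m.Options[key] = val
}

func main() {
	m := &Model{} // m.Options is nil here
	setParam(m, "top_k", 1)
	fmt.Println(m.Options["top_k"])
}
```

A fix along these lines — allocating the parameters map before any handler writes into it — would presumably avoid the panic, though the actual change belongs in `server/routes.go`.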
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.11 | bug | low | Critical |
2,538,174,458 | vscode | When I use the search function, I get stuck |
Type: <b>Performance Issue</b>
Every time I use the search function, there is a lag.
vscode version : latest
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-1370P (20 x 2189)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.69GB (7.38GB free)|
|Process Argv|--file-uri file:///d%3A/workspaces/java.workspace/mycloud.code-workspace --crash-reporter-id ed426ba1-119b-4d66-9646-cbf95958db21|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 142 46100 code main
0 40 9000 utility-network-service
0 27 11856 crashpad-handler
0 321 22396 window [1] (mycloud (工作区) - Visual Studio Code [管理员])
0 113 32436 fileWatcher [1]
0 118 39756 shared-process
0 243 55564 extensionHost [1]
0 2 11964 "d:\Program Files\Git\cmd\git.exe" for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) --ignore-case refs/heads/origin/master refs/remotes/origin/master
0 5 12284 "d:\Program Files\Git\cmd\git.exe" for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) --ignore-case refs/heads/master refs/remotes/master
0 10 29244 C:\windows\system32\conhost.exe 0x4
0 3 50124 git.exe for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) --ignore-case refs/heads/master refs/remotes/master
0 397 22916 c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.roslyn\Microsoft.CodeAnalysis.LanguageServer.exe --logLevel Information --razorSourceGenerator c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.razor\Microsoft.CodeAnalysis.Razor.Compiler.dll --razorDesignTimePath c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.razor\Targets\Microsoft.NET.Sdk.Razor.DesignTime.targets --starredCompletionComponentPath c:\Users\dean\.vscode\extensions\ms-dotnettools.vscodeintellicode-csharp-2.1.11-win32-x64\components\starred-suggestions\platforms\win32-x64\node_modules\@vsintellicode\starred-suggestions-csharp.win32-x64 --devKitDependencyPath c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.roslynDevKit\Microsoft.VisualStudio.LanguageServices.DevKit.dll --sessionId 0f383968-c111-41eb-8687-25b7e1e3bfea1726819600956 --extension c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.xamlTools\Microsoft.VisualStudio.DesignTools.CodeAnalysis.dll --extension c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.xamlTools\Microsoft.VisualStudio.DesignTools.CodeAnalysis.Diagnostics.dll --telemetryLevel all --extensionLogDirectory c:\Users\dean\AppData\Roaming\Code\logs\20240920T160640\window1\exthost\ms-dotnettools.csharp
0 85 28980 electron-nodejs (start-server.js )
0 5 31052 "d:\Program Files\Git\cmd\git.exe" config --local branch.master.vscode-merge-base
0 8 42152 git.exe config --local branch.master.vscode-merge-base
0 10 52588 C:\windows\system32\conhost.exe 0x4
0 4 39776 electron-nodejs (config.js )
0 131 12168 electron-nodejs (config.js )
0 74 56548 "c:\Users\dean\.vscode\extensions\ms-dotnettools.csdevkit-1.10.18-win32-x64\components\vs-green-server\platforms\win32-x64\node_modules\@microsoft\servicehub-controller-net60.win32-x64/Microsoft.ServiceHub.Controller" 7c4565113de718d69026d5011a84f5d35b408537ef2e25d61c4721bd83b19c44 /ControllerCooldownTimeout:30000 "/TelemetrySession:{\"TelemetryLevel\":\"all\",\"IsOptedIn\":false,\"HostName\":\"Default\",\"AppInsightsInstrumentationKey\":null,\"AsimovInstrumentationKey\":null,\"CollectorApiKey\":\"0c6ae279ed8443289764825290e4f9e2-1a736e7c-1324-4338-be46-fc2a58ae4d14-7255\",\"AppId\":1010,\"UserId\":\"37499a5e-afdf-4ea1-8111-3f0e2227d2bc\",\"Id\":\"0f383968-c111-41eb-8687-25b7e1e3bfea1726819600956\",\"ProcessStartTime\":133712932139798191,\"SkuName\":null,\"VSExeVersion\":null,\"BucketFiltersToEnableWatsonForFaults\":[{\"AdditionalProperties\":[{\"Key\":\"DumpType\",\"Value\":\"Heap\"}],\"Id\":\"81b2b10a-a4da-4266-847a-b7d88165550b\",\"WatsonEventType\":\"VisualStudioNonFatalErrors2\",\"BucketParameterFilters\":[\"(?i)microsoft.visualstudio.code.servicehost.exe\",null,\"(?i)vs.unittest.testwindowhost.fault_2\",\"(?i)streamjsonrpc.connectionlostexception\",\"(?i)system.private.corelib\",\"(?i)system.runtime.exceptionservices.exceptiondispatchinfo.throw\",null,null,null,null]}],\"BucketFiltersToAddDumpsToFaults\":[{\"AdditionalProperties\":[{\"Key\":\"DumpType\",\"Value\":\"Heap\"}],\"Id\":\"81b2b10a-a4da-4266-847a-b7d88165550b\",\"WatsonEventType\":\"VisualStudioNonFatalErrors2\",\"BucketParameterFilters\":[\"(?i)microsoft.visualstudio.code.servicehost.exe\",null,\"(?i)vs.unittest.testwindowhost.fault_2\",\"(?i)streamjsonrpc.connectionlostexception\",\"(?i)system.private.corelib\",\"(?i)system.runtime.exceptionservices.exceptiondispatchinfo.throw\",null,null,null,null]}]}"
0 115 39952 "c:\Users\dean\.vscode\extensions\ms-dotnettools.csdevkit-1.10.18-win32-x64\components\vs-green-server\platforms\win32-x64\node_modules\@microsoft\visualstudio-code-servicehost.win32-x64/Microsoft.VisualStudio.Code.ServiceHost.exe" dotnet$C94B8CFE-E3FD-4BAF-A941-2866DBB566FE net.pipe://56548F1E2AFB13448F0C0CAE69E96427B60D8 "/TelemetrySession:{\"TelemetryLevel\":\"all\",\"IsOptedIn\":false,\"HostName\":\"Default\",\"AppInsightsInstrumentationKey\":null,\"AsimovInstrumentationKey\":null,\"CollectorApiKey\":\"0c6ae279ed8443289764825290e4f9e2-1a736e7c-1324-4338-be46-fc2a58ae4d14-7255\",\"AppId\":1010,\"UserId\":\"37499a5e-afdf-4ea1-8111-3f0e2227d2bc\",\"Id\":\"0f383968-c111-41eb-8687-25b7e1e3bfea1726819600956\",\"ProcessStartTime\":133712932139798191,\"SkuName\":null,\"VSExeVersion\":null,\"BucketFiltersToEnableWatsonForFaults\":[{\"AdditionalProperties\":[{\"Key\":\"DumpType\",\"Value\":\"Heap\"}],\"Id\":\"81b2b10a-a4da-4266-847a-b7d88165550b\",\"WatsonEventType\":\"VisualStudioNonFatalErrors2\",\"BucketParameterFilters\":[\"(?i)microsoft.visualstudio.code.servicehost.exe\",null,\"(?i)vs.unittest.testwindowhost.fault_2\",\"(?i)streamjsonrpc.connectionlostexception\",\"(?i)system.private.corelib\",\"(?i)system.runtime.exceptionservices.exceptiondispatchinfo.throw\",null,null,null,null]}],\"BucketFiltersToAddDumpsToFaults\":[{\"AdditionalProperties\":[{\"Key\":\"DumpType\",\"Value\":\"Heap\"}],\"Id\":\"81b2b10a-a4da-4266-847a-b7d88165550b\",\"WatsonEventType\":\"VisualStudioNonFatalErrors2\",\"BucketParameterFilters\":[\"(?i)microsoft.visualstudio.code.servicehost.exe\",null,\"(?i)vs.unittest.testwindowhost.fault_2\",\"(?i)streamjsonrpc.connectionlostexception\",\"(?i)system.private.corelib\",\"(?i)system.runtime.exceptionservices.exceptiondispatchinfo.throw\",null,null,null,null]}]}"
0 10 12544 C:\windows\system32\conhost.exe 0x4
0 101 43768 "c:\Users\dean\.vscode\extensions\ms-dotnettools.csdevkit-1.10.18-win32-x64\components\vs-green-server\platforms\win32-x64\node_modules\@microsoft\visualstudio-code-servicehost.win32-x64\Microsoft.VisualStudio.Code.ServiceHost.exe" dotnet.projectSystem$C94B8CFE-E3FD-4BAF-A941-2866DBB566FE net.pipe://56548F1E2AFB13448F0C0CAE69E96427B60D8 "/TelemetrySession:{\"TelemetryLevel\":\"all\",\"IsOptedIn\":false,\"HostName\":\"Default\",\"AppInsightsInstrumentationKey\":null,\"AsimovInstrumentationKey\":null,\"CollectorApiKey\":\"0c6ae279ed8443289764825290e4f9e2-1a736e7c-1324-4338-be46-fc2a58ae4d14-7255\",\"AppId\":1010,\"UserId\":\"37499a5e-afdf-4ea1-8111-3f0e2227d2bc\",\"Id\":\"0f383968-c111-41eb-8687-25b7e1e3bfea1726819600956\",\"ProcessStartTime\":133712932139798191,\"SkuName\":null,\"VSExeVersion\":null,\"BucketFiltersToEnableWatsonForFaults\":[{\"AdditionalProperties\":[{\"Key\":\"DumpType\",\"Value\":\"Heap\"}],\"Id\":\"81b2b10a-a4da-4266-847a-b7d88165550b\",\"WatsonEventType\":\"VisualStudioNonFatalErrors2\",\"BucketParameterFilters\":[\"(?i)microsoft.visualstudio.code.servicehost.exe\",null,\"(?i)vs.unittest.testwindowhost.fault_2\",\"(?i)streamjsonrpc.connectionlostexception\",\"(?i)system.private.corelib\",\"(?i)system.runtime.exceptionservices.exceptiondispatchinfo.throw\",null,null,null,null]}],\"BucketFiltersToAddDumpsToFaults\":[{\"AdditionalProperties\":[{\"Key\":\"DumpType\",\"Value\":\"Heap\"}],\"Id\":\"81b2b10a-a4da-4266-847a-b7d88165550b\",\"WatsonEventType\":\"VisualStudioNonFatalErrors2\",\"BucketParameterFilters\":[\"(?i)microsoft.visualstudio.code.servicehost.exe\",null,\"(?i)vs.unittest.testwindowhost.fault_2\",\"(?i)streamjsonrpc.connectionlostexception\",\"(?i)system.private.corelib\",\"(?i)system.runtime.exceptionservices.exceptiondispatchinfo.throw\",null,null,null,null]}]}"
0 10 49368 C:\windows\system32\conhost.exe 0x4
0 10 28148 C:\windows\system32\conhost.exe 0x4
0 5 41868 "d:\Program Files\Git\cmd\git.exe" status -z -uall
0 9 39804 git.exe status -z -uall
0 10 55792 C:\windows\system32\conhost.exe 0x4
0 87 42664 electron-nodejs (languageserver.js )
0 2 54096 "d:\Program Files\Git\cmd\git.exe" rev-parse --git-dir --git-common-dir
0 2 40092 C:\windows\system32\conhost.exe 0xffffffff -ForceV1
0 209 54592 c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.razor\rzls.exe --logLevel 2 --DelegateToCSharpOnDiagnosticPublish true --UpdateBuffersForClosedDocuments true --SingleServerCompletionSupport true --telemetryLevel all --sessionId 0f383968-c111-41eb-8687-25b7e1e3bfea1726819600956 --telemetryExtensionPath c:\Users\dean\.vscode\extensions\ms-dotnettools.csharp-2.45.25-win32-x64\.razorDevKit\Microsoft.VisualStudio.DevKit.Razor.dll
0 5 57108 "d:\Program Files\Git\cmd\git.exe" -c core.longpaths=true rev-list --count --left-right refs/heads/master...refs/remotes/origin/master
0 10 44316 C:\windows\system32\conhost.exe 0x4
0 178 57000 gpu-process
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (mycloud (工作区) - Visual Studio Code [管理员])
| Folder (mycloud): more than 20326 files
| File types: py(4338) pyc(4195) png(1215) xml(834) java(832) class(696)
| js(469) jpg(429) pyi(284) ts(230)
| Conf files: package.json(11) settings.json(6) dockerfile(4)
| tsconfig.json(4) webpack.config.js(3)
| Folder (syzygroup): more than 20214 files
| File types: java(3950) cs(1885) xml(1556) vue(1421) png(1319) js(1122)
| html(1073) class(722) cshtml(186) svg(183)
| Conf files: csproj(32) package.json(20) dockerfile(5) settings.json(4)
| gulp.js(4) sln(2)
| Folder (zxdp): 16519 files
| File types: java(4189) class(3284) png(1368) xml(1128) html(936)
| js(583) vue(502) jpg(368) css(356) meta(210)
| Conf files: dockerfile(7) settings.json(3) package.json(2)
| webpack.config.js(1)
| Folder (syzy-exhibition-digitizing): 570 files
| File types: xml(116) java(73) class(65) yml(13) properties(11) jar(9)
| md(7) gitignore(6) sha1(5) log(5)
|            Conf files:
```
</details>
<details><summary>Extensions (37)</summary>
Extension|Author (truncated)|Version
---|---|---
vue-peek|dar|1.0.2
vscode-markdownlint|Dav|0.56.0
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
json-tools|eri|1.0.2
prettier-vscode|esb|11.0.0
auto-close-tag|for|0.5.15
go|gol|0.42.1
gc-excelviewer|Gra|4.2.62
rest-client|hum|0.25.1
prettier-sql-vscode|inf|1.6.0
hive-sql|jos|0.0.4
markdown-shortcuts|mdi|0.12.0
dotenv|mik|1.0.1
vscode-language-pack-zh-hans|MS-|1.93.2024091109
csdevkit|ms-|1.10.18
csharp|ms-|2.45.25
vscode-dotnet-runtime|ms-|2.1.5
vscodeintellicode-csharp|ms-|2.1.11
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.1
remote-wsl|ms-|0.88.3
sqltools|mtx|0.28.3
vscode-vue2-snippets|Nic|2.3.0
indent-rainbow|ode|8.3.1
minapp-vscode|qiu|2.4.13
vs-code-prettier-eslint|rve|6.0.0
vue-vscode-snippets|sdr|3.1.1
markdown-preview-enhanced|shd|0.8.14
vscode-stylelint|sty|1.4.0
intellicode-api-usage-examples|Vis|0.2.8
vscode-concourse|vmw|1.55.0
vscode-manifest-yaml|vmw|1.55.0
vscode-icons|vsc|12.9.0
volar|Vue|2.1.6
markdown-all-in-one|yzh|3.6.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
01bff139:31013167
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
nativeloc2:31134642
wkspc-ranged-c:31125598
fje88620:31121564
iacca1:31138162
```
</details>
<!-- generated by issue reporter --> | bug,search,performance | low | Critical |
2,538,210,484 | rust | `#[rustc_default_body_unstable]` silently ignores `implied_by` and `soft` | `#[rustc_default_body_unstable]` is treated the same as `#[unstable]` by `parse_unstability` in `rustc_attr`, so it accepts the same flags. However, the stability annotator in `rustc_passes` doesn't collect implications from it and `eval_default_body_stability` in `rustc_middle` doesn't check feature-gate implications. Likewise, it accepts `soft` but `default_body_is_unstable` in `rustc_hir_analysis` ignores it.
I think I can fix this (I don't imagine it's high-priority) but I'd like to check first whether it's intentionally not handled. Currently I don't think anything in `library/` or `src/` uses or acknowledges `#[rustc_default_body_unstable]`/`rustc_attr::DefaultBodyStability`. | A-attributes,T-compiler,C-bug | low | Minor |
2,538,214,814 | godot | Unnecessary "Viewport Texture must be set to use it." errors while trying to set said ViewportTexture | ### Tested versions
- Reproducible in 4.3.stable
- Reproducible in 4.4.dev2 and `master` (0a4aedb36065f66fc7e99cb2e6de3e55242f9dfb), i.e. both before and after #97029
### System information
Fedora Linux 40 (KDE Plasma) - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7600M XT (RADV NAVI33) - AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 Threads)
### Issue description
While testing the MRP from #96941 which uses ViewportTextures, I noticed some cases which don't seem to be well handled yet, even after #97029 (cc @Hilderin).

In the "Test 0/MeshInstance3D", for the Surface Material Override > Albedo > Texture, if you assign "New ViewportTexture", it spams the error 3 times, and the dialog that normally shows to let you select a SubViewport flashes and disappears right away.
```
ERROR: Viewport Texture must be set to use it.
at: _err_print_viewport_not_set (./scene/main/viewport.cpp:171)
ERROR: Viewport Texture must be set to use it.
at: _err_print_viewport_not_set (./scene/main/viewport.cpp:171)
ERROR: Viewport Texture must be set to use it.
at: _err_print_viewport_not_set (./scene/main/viewport.cpp:171)
```
In the "Test 1/MeshInstance3D", for the Surface Material Override > Shader Parameters > Texture, if you assign "New ViewportTexture", it prints the error *once*, but properly shows the SubViewport selection dialog, which works fine to pick a SubViewport.
On the other hand for that MRP, #97029 did solve two occurrences of the error when opening the scene (visible in 4.3.stable or 4.4.dev2, but not in latest `master`).
### Steps to reproduce
See above.
### Minimal reproduction project (MRP)
https://github.com/rakkarage/testtextureshaderparameter
Zipped version in case that repo gets updated or removed:
[testtextureshaderparameter.zip](https://github.com/user-attachments/files/17072306/testtextureshaderparameter.zip) | bug,topic:core,topic:editor | low | Critical |
2,538,240,242 | stable-diffusion-webui | [Bug]: IPEX - Native API returns -997 (Command failed to enqueue/execute) | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When generating images, I sometimes encounter a Native API return code -997, which is the IPEX error for a command that failed to enqueue/execute. Sometimes image generation works, but most of the time it doesn't. I even tried SD 1.5 models, with no luck. This hadn't happened before; I reset my PC to refresh it, and after recloning the repo I'm now having this problem.
### Steps to reproduce the problem
1. Clone repository.
2. Edit the webui-user command args to include --use-ipex.
3. Execute webui-user and allow the installation to finish.
4. Try to generate image in txt2img or img2img.
5. See results: It can generate or -997.
### What should have happened?
WebUI should generate the image while using IPEX.
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
[sysinfo-2024-09-20-08-33.json](https://github.com/user-attachments/files/17072345/sysinfo-2024-09-20-08-33.json)
### Console logs
```Shell
venv "R:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --use-ipex --opt-split-attention --medvram-sdxl
no module 'xformers'. Processing without...
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 2.0.0a0+gite9ebda2. You might want to consider upgrading.
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.0.0a0+gite9ebda2.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
Loading weights [b8d425c720] from R:\stable-diffusion-webui\models\Stable-diffusion\AOM3B4_orangemixs.safetensors
Creating model from config: R:\stable-diffusion-webui\configs\v1-inference.yaml
R:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1142: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.6s (prepare environment: 0.4s, import torch: 2.2s, import gradio: 0.6s, setup paths: 0.7s, initialize shared: 1.1s, other imports: 0.3s, load scripts: 1.2s, create ui: 0.8s, gradio launch: 0.4s).
Loading VAE weights specified in settings: R:\stable-diffusion-webui\models\VAE\orangemix.vae.pt
Applying attention optimization: Doggettx... done.
Model loaded in 40.9s (load weights from disk: 0.4s, create model: 0.8s, apply weights to model: 2.1s, load VAE: 33.5s, calculate empty prompt: 3.8s).
0%| | 0/31 [00:08<?, ?it/s]
*** Error completing request
*** Arguments: ('task(knvvwltwmsj4455)', <gradio.routes.Request object at 0x000001C39F2CE9E0>, 0, '1girl, bangs, bed, bed sheet, blush, breasts, cleavage, earrings, green eyes, indoors, jewelry, large breasts, long hair, looking at viewer, navel, on bed, shirt, shorts, solo, thighs, window', '(worst quality, low quality:1.4), (bad-hands-5:1.5), easynegative', [], <PIL.Image.Image image mode=RGBA size=768x1344 at 0x1C39351E080>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 910, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 'upload', None, 0, False, 1, 0.5, 4, 0, 0.5, 2, 40, 'DPM++ 2M', 'Align Your Steps', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "R:\stable-diffusion-webui\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "R:\stable-diffusion-webui\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\img2img.py", line 242, in img2img
processed = process_images(p)
File "R:\stable-diffusion-webui\modules\processing.py", line 847, in process_images
res = process_images_inner(p)
File "R:\stable-diffusion-webui\modules\processing.py", line 988, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "R:\stable-diffusion-webui\modules\processing.py", line 1774, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "R:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 184, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "R:\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "R:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 184, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "R:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "R:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_hijack_unet.py", line 96, in spatial_transformer_forward
x = block(x, context=context[i])
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
return func(*inputs)
File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "R:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 278, in split_cross_attention_forward
r2 = rearrange(r1, '(b h) n d -> b n (h d)', h=h)
File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 487, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 410, in reduce
return _apply_recipe(recipe, tensor, reduction_type=reduction)
File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 239, in _apply_recipe
return backend.reshape(tensor, final_shapes)
File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\_backends.py", line 84, in reshape
return x.reshape(shape)
RuntimeError: Native API failed. Native API returns: -997 (Command failed to enqueue/execute) -997 (Command failed to enqueue/execute)
---
```
### Additional information
My GPU is Intel Arc A750
My Intel Arc driver is version 32.0.101.6077
My command args are: --use-ipex --opt-split-attention --medvram-sdxl
iGPU is DISABLED in BIOS, only the Intel Arc is enabled. Resizable BAR enabled as well.
I have the OneAPI base toolkit installed on my system, but I don't know if that's relevant, so I included it here. | bug-report | low | Critical |
2,538,295,242 | rust | Can not disable `reference-types` feature for wasm32 target with `-C linker-plugin-lto` flag | ### Code
@alexcrichton
Code is minimized as much as possible and here is a demo repository: https://github.com/StackOverflowExcept1on/wasm-builder-regression/
- `wasm-checker` - a simple CLI tool that passes the wasm to the parity-wasm parser (which does not support reference-types)
- `wasm-program` - a smart contract that panics and terminates with an error
- `wasm-project` - an intermediate directory that is generated by our utility
- [check.sh source code](https://github.com/StackOverflowExcept1on/wasm-builder-regression/blob/master/wasm-project/check.sh)
```bash
git clone https://github.com/StackOverflowExcept1on/wasm-builder-regression.git
cd wasm-builder-regression/wasm-project
./check.sh
```
```
Finished `release` profile [optimized] target(s) in 8.30s
Compiling gear-wasm v0.45.1
Compiling wasm-checker v0.1.0 (/mnt/tmpfs/wasm-builder-regression/wasm-checker)
Finished `release` profile [optimized] target(s) in 1.68s
Running `/mnt/tmpfs/wasm-builder-regression/wasm-checker/target/release/wasm-checker ./target/wasm32-unknown-unknown/release/wasm_program.wasm`
Error: InvalidTableReference(128)
```
If I remove flag `-C linker-plugin-lto`, it works as described in #128511
I expected to see this happen: *I can disable `reference-types` via `-Ctarget-cpu=mvp` and `-Zbuild-std`*
Instead, this happened: *there appears to be some kind of conflict between the compiler flags*
### Version it worked on
It most recently worked on: `nightly-2024-07-31`
### Version with regression
`rustc +nightly-2024-08-01 --version --verbose`:
```
rustc 1.82.0-nightly (28a58f2fa 2024-07-31)
binary: rustc
commit-hash: 28a58f2fa7f0c46b8fab8237c02471a915924fe5
commit-date: 2024-07-31
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
| P-medium,T-compiler,O-wasm,C-bug,regression-untriaged | low | Critical |
2,538,311,594 | react-native | Single-line TextInput sized as multiline on Fabric ShadowNode | ### Description
Having a non-multiline `TextInput` component on new arch will be sized by the layout engine/shadow node as if it were multiline unless one explicitly sets `numberOfLines={1}`.
### Steps to reproduce
1. With new architecture enabled on Android
2. Type text into a non-multiline `TextInput`
3. Observe the height keep increasing as you type longer and longer without text breaking lines
### React Native Version
0.75.3
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0
CPU: (10) arm64 Apple M1 Pro
Memory: 91.72 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.11.0
path: ~/.asdf/installs/nodejs/20.11.0/bin/node
Yarn:
version: 3.6.4
path: ~/.asdf/installs/nodejs/20.11.0/bin/yarn
npm:
version: 10.2.4
path: ~/.asdf/plugins/nodejs/shims/npm
Watchman:
version: 2023.03.27.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /Users/joel/.asdf/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "28"
- "30"
- "31"
- "32"
- "33"
- "34"
Build Tools:
- 29.0.2
- 30.0.2
- 30.0.3
- 31.0.0
- 32.0.0
- 33.0.0
- 33.0.1
- 34.0.0
System Images:
- android-32 | Google APIs ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12071903
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/javac
Ruby:
version: 3.0.3
path: /Users/joel/.asdf/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.3
wanted: 0.75.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
See screenshots.
```
### Reproducer
https://github.com/oblador/react-native-textinput-height-issue
### Screenshots and Videos
| Fabric | Paper |
|--------|--------|
|  |  |
| Issue: Author Provided Repro,Resolution: PR Submitted,Component: TextInput,Type: New Architecture | low | Minor |
2,538,360,275 | opencv | Charuco board is only detected if I set the size + 2 larger than the actual board | ### System Information
OpenCV 4.9/Python 3.11.9 on NixOS
### Detailed description
I have the following image:

I am running this on the following code:
```python
import cv2 as cv
from cv2 import aruco
image = cv.imread('calibration-plate.png')
image = cv.resize(image, (3450, 2850), interpolation=cv.INTER_NEAREST)
dictionary = aruco.getPredefinedDictionary(aruco.DICT_7X7_1000)
board = aruco.CharucoBoard((23, 19), 0.015, 0.009, dictionary)
print(board.getChessboardCorners().shape, board.getIds().shape)
detector = cv.aruco.CharucoDetector(board)
corners, ids, markers, markerIds = detector.detectBoard(image)
if markers is not None:
image = cv.aruco.drawDetectedMarkers(image, markers, markerIds)
if corners is not None:
image = cv.aruco.drawDetectedCornersCharuco(image, corners, ids)
cv.imwrite('res.png', image)
```
This produces the following `res.png`:

Notice that markers are found, but chessboard intersections are not.
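As a pure-arithmetic sanity check (no OpenCV needed, only OpenCV's convention that a Charuco board has `(squaresX - 1) * (squaresY - 1)` interior corners): the 23x19 board should report 396 chessboard corners, while the misconfigured 25x21 one would report 480.

```python
# Interior chessboard-corner count for a Charuco board, following the
# convention that corners sit at interior intersections of the checker grid.
def charuco_corner_count(squares_x: int, squares_y: int) -> int:
    return (squares_x - 1) * (squares_y - 1)

print(charuco_corner_count(23, 19))  # 396 — the actual board
print(charuco_corner_count(25, 21))  # 480 — the misconfigured board
```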
If I make the following change
```diff
-board = aruco.CharucoBoard((23, 19), 0.015, 0.009, dictionary)
+board = aruco.CharucoBoard((25, 21), 0.015, 0.009, dictionary)
```
Then I get the following new `res.png`:

Note that I now do have chessboard intersections, but their ids are wrong. This board is not 25x21, so it makes sense that the ids would be wrong. What I don't understand is why chessboard intersections are only detected with this (incorrect) board configuration.
### Steps to reproduce
See description above.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | wontfix,category: objdetect | low | Minor |
2,538,435,881 | rust | Should `#[expect(warnings)]` at some point warn that there are no warnings in the annotated code? | ### Code
```rust
#[expect(warnings)]
pub fn foo() {}
```
### Current output
`rustc` is happy with an expect annotation on code, that does not trigger a warning
### Desired output
probably this should ask to remove the expect annotation?
### Rationale and extra context
`expect(warnings)` is probably not the most idiomatic thing to do, but I found it in my code. And I would think that this behaves like other `expect` annotations.
### Other cases
_No response_
### Rust Version
```
rustc 1.83.0-nightly (f79a912d9 2024-09-18)
binary: rustc
commit-hash: f79a912d9edc3ad4db910c0e93672ed5c65133fa
commit-date: 2024-09-18
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Anything else?
_No response_ | A-lints,A-diagnostics,T-compiler,C-bug,F-lint_reasons | low | Minor |
2,538,454,441 | langchain | Anthropic's prompt caching in langchain does not work with ChatPromptTemplate. | ### URL
https://python.langchain.com/docs/how_to/llm_caching/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I have not found any mention of prompt caching in the langchain documentation. There seems to be only one post on Twitter regarding prompt caching in langchain. I am trying to implement prompt caching in my RAG system, which uses a history-aware retriever.
I have instantiated the model like this:
```python
llm_claude = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",
    temperature=0.1,
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
```
And I am using ChatPromptTemplate like this:
```python
contextualize_q_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        ("human", "{input}"),
    ]
)
```
I am not able to find a way to include prompt caching with this. I tried building the prompt like this, but it still doesn't work:
```python
prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content=contextualize_q_system_prompt, additional_kwargs={"cache_control": {"type": "ephemeral"}}),
    HumanMessage(content="{input}"),
])
```
Please help me with how I should enable prompt caching in langchain.
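For reference, one shape that may work (an assumption based on Anthropic's prompt-caching API docs, not confirmed langchain behavior): pass the system prompt as a list of content blocks, each carrying its own `cache_control` marker, e.g. via `SystemMessage(content=[...])`. A minimal sketch of the block structure:

```python
# Sketch: the content-block list Anthropic's prompt-caching API expects.
# The prompt text is a placeholder standing in for the real
# contextualize_q_system_prompt.
contextualize_q_system_prompt = "Given a chat history, reformulate the question."

cached_system_content = [
    {
        "type": "text",
        "text": contextualize_q_system_prompt,
        # Marks this block as cacheable (ephemeral cache) on Anthropic's side.
        "cache_control": {"type": "ephemeral"},
    }
]

# Hypothetical usage with langchain (not verified):
#   SystemMessage(content=cached_system_content)
print(cached_system_content[0]["cache_control"])
```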
### Idea or request for content:
Langchain documentation should be updated with how to use prompt caching with different prompt templates. And especially with a RAG system. | 🤖:docs,investigate | low | Major |
2,538,479,807 | ant-design | Tour component with navigation | ### Reproduction link
[](https://codesandbox.io/p/sandbox/gracious-dijkstra-krystw)
### Steps to reproduce
1. Create a provider that will store refs
2. Add the Tour component to it
3. In nextButtonProps, add navigation between targets
4. Sometimes they will have time to be highlighted, but this does not always work
### What is expected?
Highlight target after navigate to another page
<img width="300" alt="Screenshot 2024-09-20 at 13 35 23" src="https://github.com/user-attachments/assets/9d33e474-f17f-4ef7-8bc7-b28c486cbf80">
<img width="300" alt="Screenshot 2024-09-20 at 13 35 28" src="https://github.com/user-attachments/assets/7a7f9b8f-ceb3-491c-9dd9-2ba1fdf95f78">
### What is actually happening?
The target is not highlighted
<img width="300" alt="Screenshot 2024-09-20 at 13 35 39" src="https://github.com/user-attachments/assets/a58eb09d-0067-4638-a14b-1ae9ecbcb51f">
<img width="300" alt="Screenshot 2024-09-20 at 13 35 45" src="https://github.com/user-attachments/assets/616730fe-05e7-42fa-9c06-1e6bb9e38dc3">
| Environment | Info |
| --- | --- |
| antd | 5.20.6 |
| React | 18.2.0 |
| System | MacOS 14.5 |
| Browser | Google Chrome 128 |
---
Maybe I don't fully understand how refs work; I tried many different options, but ideally this would work out of the box
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,improvement | low | Minor |
2,538,512,924 | godot | Could not resolve SDK "Godot.NET.Sdk" | ### Tested versions
Godot v4.3.0.stable
### System information
Windows 10 - Godot v4.3.0.stable
### Issue description
I recently decided to try out C# in Godot, installed dotnet, and set Godot's network mode to online. But when I try to build a C# script I get the following error:
`MSB4236: The SDK 'Godot.NET.Sdk/4.3.0' specified could not be found. C:\Users\Mike\Desktop\c#test\test\Test.csproj(0,0)`

This is what my .csproj file looks like
### Steps to reproduce
Install Godot Mono, install dotnet, make a project, and try building a C# script
### Minimal reproduction project (MRP)
[c#test.zip](https://github.com/user-attachments/files/17074095/c.test.zip)
| topic:buildsystem,needs testing,topic:dotnet | low | Critical |
2,538,550,093 | deno | Should `deno cache` ignore specifiers it does not understand? | Vite projects (and other frontend ones) typically contain non-standard imports like importing `.svg` files. Deno errors when it encounters such a file upon calling `deno cache`.
## Steps to reproduce
1. Run `npm create vite`
2. Pick the "Vanilla" -> "TypeScript" preset
3. `cd` into the project folder and run `deno cache src/App.tsx`
Output:
```sh
error: Expected a JavaScript or TypeScript module, but identified a Unknown module. Importing these types of modules is currently not supported.
Specifier: file:///workspaces/deno2_test/src/assets/react.svg
at file:///workspaces/deno2_test/src/App.tsx:3:23
```
Version: Deno 2.0.0-rc.4+66fb81e
| needs discussion | low | Critical |
2,538,592,458 | flutter | Inconsistent behavior of external requests on Flutter for web due to issue in service worker | ### Steps to reproduce
I have created a GitHub repo to showcase the issue: https://github.com/Frank3K/flutter_service_worker_bug
## Prerequisites
Ensure you have a tool installed that can run an HTTP server to serve the contents of a directory. Here, Python 3 is used as an example. Other examples can be found at https://gist.github.com/willurd/5720255.
## Reproduction steps
1. Clone the repository on https://github.com/Frank3K/flutter_service_worker_bug.
2. Install dependencies: `flutter pub get`.
3. Compile for web: `flutter build web`
- Note that `flutter run -d chrome` is not sufficient, since service workers are empty in `run` mode.
4. Serve the output, e.g., using `python3 -m http.server 8080 -d build/web`.
5. Open the following URLs in Chrome:
- http://this-is-a-very-long-url-thats-sooooooooooooooooooooooooooo-long.localhost:8080/
- http://short.localhost:8080/
6. Open the network tools.
7. In each tab, press the download (FAB) button.
8. Observe that for the long URL, the request is processed by the service worker (a ⚙ is present), and for the short URL, it is not.
### Expected results
## Expected results
The expected result is that the service worker does not handle external requests.
## Cause of this discrepancy
Looking into the service worker code, we find the following code: [link](https://github.com/flutter/flutter/blob/2528fa95eb3f8e107b7e28651a61305d960eef85/packages/flutter_tools/lib/src/web/file_generators/js/flutter_service_worker.js#L87-L95):
```js
var origin = self.location.origin;
var key = event.request.url.substring(origin.length + 1);
// Redirect URLs to the index.html
if (key.indexOf('?v=') != -1) {
key = key.split('?v=')[0];
}
if (event.request.url == origin || event.request.url.startsWith(origin + '/#') || key == '') {
key = '/';
}
```
Depending on the relative lengths of the origin and the request URL, the `substring` call can return the empty string, which is then rewritten to `/`.
### Suggested fix
Bail out when the request is to an external URL. For example, using:
```js
var requestURL = new URL(event.request.url);
if (requestURL.origin !== origin) {
return;
}
```
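The bail-out condition in this fix can be exercised in isolation; a minimal sketch (the URL values are the hypothetical examples from the "Actual results" section of this report):

```javascript
// Pure helper mirroring the suggested fix: a request is "external"
// when its origin differs from the service worker's own origin.
function isExternalRequest(requestUrl, origin) {
  return new URL(requestUrl).origin !== origin;
}

const origin = "https://www.some-website-example.com";
console.log(isExternalRequest("https://www.some-api.com/status", origin));     // true
console.log(isExternalRequest(origin + "/flutter_service_worker.js", origin)); // false
```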
### Actual results
Some external requests are processed by the service worker in Flutter for web and other requests are not, depending on the length of the application URL in combination with the request URL.
For example, given an application URL: https://www.some-website-example.com that makes the following requests:
- A GET request to https://www.some-api.com/status
- A GET request to https://www.another-api.com/items/12345
The first request will be processed (and cached) by the service worker, while the second one will not.
### Code sample
The source for an application showcasing the issue can be found over at:
https://github.com/Frank3K/flutter_service_worker_bug
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Long URL: service worker handling the request:
<img width="908" alt="image" src="https://github.com/user-attachments/assets/8fd71be3-3c0f-4577-90ba-4f6ec4f14171">
Short URL: service worker not handling the request:
<img width="768" alt="image" src="https://github.com/user-attachments/assets/80d1b7b7-7551-44f9-b484-9c9450ba42e5">
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-x64, locale en-NL)
• Flutter version 3.24.3 on channel stable at /Users/<redacted>/.asdf/installs/flutter/3.24.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (9 days ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/<redacted>/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/<redacted>/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
! iOS 18.0 Simulator not installed; this may be necessary for iOS and macOS development.
To download and install the platform, open Xcode, select Xcode > Settings > Platforms,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 14.6.1 23G93 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.138
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| tool,platform-web,has reproducible steps,assigned for triage,team-web,found in release: 3.24,found in release: 3.26 | low | Critical |
2,538,604,839 | godot | Building with mono can't handle special characters in file path | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8] and v4.4.dev2.mono.official [97ef3c837]
### System information
Godot v4.3.stable.mono - Ubuntu 20.04.6 LTS (Focal Fossa) - X11 - OpenGL 3 (Compatibility) - Mesa DRI Intel(R) HD Graphics 4000 (IVB GT2) - Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz (4 Threads)
### Issue description
When trying to compile a C# project using Godot Mono, it fails with the following error:
```
MSB4019: The imported project "/path/to/project/Zombie BurgerZ: Oh No Edition/.godot/mono/temp/obj/Zombie BurgerZ- Oh No Edition.csproj.*.props" was not found. Confirm that the expression in the Import declaration "/path/to/project/Zombie BurgerZ: Oh No Edition/.godot/mono/temp/obj/Zombie BurgerZ- Oh No Edition.csproj.*.props" is correct, and that the file exists on disk. /usr/share/dotnet/sdk/8.0.401/Current/Microsoft.Common.props(74,3)
```
Removing special characters from the project directory name and `assembly_name` (located in `project.godot`) fixes the issue.
My project was created in an older version of Godot, which didn't enforce the absence of special characters in project directory names. Godot should warn users when opening a project that has special characters in its name, or warn users when migrating such projects to newer versions.
### Steps to reproduce
- Create and import a project with special characters in the directory name (e.g `:`) using an older version of Godot
- Open the project in a newer version
- Create a C# script and try to compile
### Minimal reproduction project (MRP)
[minimal reproduction project](https://github.com/user-attachments/files/17074558/mvp.test.zip)
| enhancement,discussion,topic:editor,topic:dotnet | low | Critical |
2,538,624,462 | pytorch | torch.func.vjp and sparse tensors: "Sparse CSR tensors do not have strides" | Sparse tensors seem not to work with torch.func :
```python
import torch

x = torch.rand(10)
dense = torch.rand(1, 10)
sparse = dense.to_sparse_csr()

def test_dense(x):
    return (dense @ x)[0]

def test_sparse(x):
    return (sparse @ x)[0]

torch.func.grad(test_dense)(x)   # works
test_sparse(x)                   # works
torch.func.grad(test_sparse)(x)  # fails
# RuntimeError: Sparse CSR tensors do not have strides

def test_dense_minimal():
    return dense

def test_sparse_minimal():
    return sparse.to_dense()

torch.func.vjp(test_dense_minimal)[1](x)   # works
test_sparse_minimal()                      # works
torch.func.vjp(test_sparse_minimal)[1](x)  # fails
# RuntimeError: Sparse CSR tensors do not have strides
```
### Versions
pytorch 2.4.1+cu121
I haven't found an open issue about this here. Are there any plans to fix this incompatibility?
Cheers, Felix
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @zou3519 @Chillee @samdow @kshitij12345 | module: sparse,triaged,module: functorch | low | Critical |
2,538,815,460 | vscode | Enabling the `issueReporter.experimental.auxWindow` setting prevents Issue Reporter window from being focused | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.93.0
- OS Version: MacOS Sonoma 14.7 (23H124)
Steps to Reproduce:
1. With `issueReporter.experimental.auxWindow` enabled, open the Issue Reporter through the "Help: Report Issue..." command
2. Move the Issue Reporter window behind another window and call "Help: Report Issue..." again -- the behavior I observe is the Issue Reporter window stays hidden behind the other window
3. Disable `issueReporter.experimental.auxWindow` and run "Help: Report Issue..." again -- the window should now pop back into the foreground and into focus
Seen as part of https://github.com/microsoft/vscode/issues/229107
Able to reproduce in the current 1.94 VS Code Insiders build as well.
| bug,issue-reporter | low | Critical |
2,538,836,069 | vscode | Window size becomes the size of the monitor when transferring maximized window across differently scaled monitors | Start by opening the window, restoring it (unmaximize), move to 1x scale monitor and size is smallist

Drag it to the second 2x monitor and maximize it by dragging it to the top of the screen

Exit VS Code
Start VS Code
Win+shift+left to transfer to 1x scale monitor
Drag down to unmaximize

🐛 The window is now the size of the first monitor. At some point it looks like we're saving the window's maximized dimensions as its non-maximized size. This requires manually resizing the window to make it usable again. I would expect us to keep track of the unmaximized size even when transferring maximized windows across monitors
| upstream,electron,workbench-window | low | Major |
2,538,873,765 | go | mime: ParseMediaType should not return an error if mediatype is empty | ### Go version
go version 1.23.0
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/alden/Library/Caches/go-build'
GOENV='/Users/alden/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/alden/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/alden/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.1/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.1/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/alden/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/_h/_w_mjtt91s1dws8zlxnnjpzm0000gn/T/go-build1772735711=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I have a file server that returns `Content-Disposition` headers like this: `;filename*=UTF-8''hello-world.pdf`
Example: https://go.dev/play/p/rxRNVZGDOQu
### What did you see happen?
`mime.ParseMediaType` fails with the error "mime: no media type" ([from here in code](https://github.com/golang/go/blob/165bf241f2f7c72cffd83e278d674ae3ddbd72a1/src/mime/mediatype.go#L105-L107))
### What did you expect to see?
- `mime.ParseMediaType` succeeds, returning mediatype `""` and `params["filename"]` `hello-world.pdf`
- Note that all modern browsers and email clients parse a blank mediatype correctly
- [rfc6266](https://www.rfc-editor.org/rfc/rfc6266#page-5) and [rfc2183](https://www.rfc-editor.org/rfc/rfc2183.html#section-2.8) both say `Unknown or unhandled disposition types SHOULD be handled by recipients the same way as "attachment"`, and presumably an empty-string disposition type fits into that
| NeedsInvestigation | low | Critical |
2,538,888,425 | godot | Unable to vertical scroll animation bezier editor with mousewheel | ### Tested versions
Reproducible in:
- v4.3.stable.official [77dcf97d8]
- v4.2.1.stable.official [b09f793f5]
Not reproducible in:
- v4.1.4.stable.official [fe0e8e557]
### System information
Godot v4.3.stable - Debian GNU/Linux trixie/sid trixie - Wayland - Vulkan (Mobile) - dedicated AMD Radeon RX 7600 (RADV NAVI33) - AMD Ryzen 5 7600 6-Core Processor (12 Threads)
### Issue description
I'm unable to scroll or scale vertically in the Animation dock's bezier editor.
In v4.3 / v4.2:
- Plain `Mousewheel` = no effect
- `Ctrl + Mousewheel` = horizontal zoom
- `Alt + Mousewheel` = no effect
- `Shift + Mousewheel` = no effect
- `Ctrl + Shift + Mousewheel` = horizontal zoom
- `Ctrl + Alt + Mousewheel` = no effect
- `Alt + Shift + Mousewheel` = no effect
- `Ctrl + Alt + Shift + Mousewheel` = vertical zoom
In v4.1 / v4.0:
- Plain `Mousewheel` = **vertical scroll**
- `Ctrl + Mousewheel` = horizontal zoom
- `Alt + Mousewheel` = **vertical scroll**
- `Shift + Mousewheel` = no effect
- `Ctrl + Shift + Mousewheel` = horizontal zoom
- `Ctrl + Alt + Mousewheel` = no effect
- `Alt + Shift + Mousewheel` = no effect
- `Ctrl + Alt + Shift + Mousewheel` = vertical zoom
Further, the vertical zoom part is confounding because the [documentation](https://docs.godotengine.org/en/stable/tutorials/animation/animation_track_types.html#bezier-curve-track), in a note added with 4.3, states that:
> zoom in and out on the time axis ... with `Ctrl + Shift + Mouse wheel`. Using **`Ctrl + Alt + Mouse wheel`** will zoom in and out on the Y axis
And to add to the confusion, there is at least one [previous report](https://github.com/godotengine/godot/issues/75476) which indicates that just `Alt+Mousewheel` would zoom vertically as of 4.0.1, but even if I try that exact version I get the same result as 4.1, above! :confused:
So it might be a platform (Win/Linux+Wayland) difference? Or perhaps with all my version swapping my editor settings have got messed up somehow? Though I didn't think you could change mousewheel binds? :confused:
### Steps to reproduce
1. Create a bezier track in an animation
2. Switch to bezier editor
3. Try to scroll vertically
### Minimal reproduction project (MRP)
n/a | bug,topic:editor,needs testing,topic:animation | low | Minor |
2,538,899,267 | vscode | Devtools error: No registered selector for ID: ms-vscode.npm-command | Starting up code-oss I see this:

| bug,debt | low | Critical |
2,538,933,143 | ollama | Build CPU only image | ### What is the issue?
I'd be interested in building only a CPU Docker image for the ppc arch. I tried to do that for arm64, but it doesn't work perfectly either, so I'm wondering whether that is possible at all with the single big Dockerfile as it stands.
Ideally I'd like to have something like `PLATFORM=linux/arm64 TARGETARCH=arm64 DEVICE=cpu scripts/build_docker.sh`
Maybe someone can enlighten me how to do this.
Thank you!
### OS
Linux, Docker
### GPU
_No response_
### CPU
Other
### Ollama version
0.3.11 | feature request,linux,docker | low | Minor |
2,538,962,099 | go | x/tools/gopls: source.doc code action: missing NewT func that returns unexported type t | The "Show documentation" code action has missing functions. Specifically, a function NewT that returns an instance of a type is grouped under that type as a "constructor" function. But if the type is unexported, neither the type nor its constructors are shown, even if they are exported.
Repro:
```
package a
type A int
func NewA() A { return 0 }
type a int
func Newa() a { return 0 }
```
<img width="800" alt="Screenshot 2024-09-20 at 10 24 36 AM" src="https://github.com/user-attachments/assets/9bd63356-2c76-450b-948f-423138fa8b82">
| gopls,Tools | low | Minor |
2,539,003,225 | rust | wasm32-wasip1 depends on libc memset with no_std |
I've looked high and low through the WASM/WASI specifications and can't find what the "correct" behavior is here, but the current Rust behavior seems wrong to me.
I don't believe we should be trying to import memset from the "env" module:
```
(func $import0 (import "env" "memset") (param i32 i32 i32) (result i32))
```
[wasm3](https://github.com/wasm3/wasm3) engine can't run this code either due to this:
```bash
$ wasm3 target/wasm32-wasip1/release/wasm_br_test.wasm
Error: missing imported function ('env.memset')
```
### Code
```Rust
#![no_std]
#![no_main]
#[no_mangle]
pub fn _start() {
let _asdf = [0; 40];
}
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
loop {}
}
```
### Cargo.toml
```toml
[lib]
crate-type = ["cdylib"]
[profile.release]
lto = true
#opt-level = 's'
opt-level = 0
codegen-units = 1
panic = "abort"
strip = true
```
### .cargo/config.toml
```toml
[build]
target = "wasm32-wasip1"
[target.wasm32-wasip1]
rustflags = ["-C", "link-arg=-zstack-size=65520",]
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (f79a912d9 2024-09-18)
binary: rustc
commit-hash: f79a912d9edc3ad4db910c0e93672ed5c65133fa
commit-date: 2024-09-18
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### wasm generated binary
```
(module
(type $type0 (func (param i32 i32 i32) (result i32)))
(type $type1 (func))
(func $import0 (import "env" "memset") (param i32 i32 i32) (result i32))
(table $table0 1 1 funcref)
(memory $memory0 1)
(global $global0 (mut i32) (i32.const 65520))
(export "memory" (memory $memory0))
(export "_start" (func $func1))
(func $func1
(local $var0 i32) (local $var1 i32) (local $var2 i32) (local $var3 i32) (local $var4 i32) (local $var5 i32) (local $var6 i32)
global.get $global0
local.set $var0
i32.const 160
local.set $var1
local.get $var0
local.get $var1
i32.sub
local.set $var2
local.get $var2
global.set $global0
i32.const 160
local.set $var3
i32.const 0
local.set $var4
local.get $var2
local.get $var4
local.get $var3
call $import0
drop
i32.const 160
local.set $var5
local.get $var2
local.get $var5
i32.add
local.set $var6
local.get $var6
global.set $global0
return
)
)
```
| T-compiler,O-wasm,C-discussion,O-wasi | medium | Critical |
2,539,071,443 | ui | [feat]: Use tailwindcss-radix plugin for styling components | ### Feature description
Use https://www.npmjs.com/package/tailwindcss-radix for styling components depending on the radix attributes
Using the plugin will improve intellisense and readability
### Affected component/components
Tabs, sheet, toast, select, switch, toggle, popover, dialog, etc...
### Additional Context
I'm ready to contribute this
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,539,114,060 | rust | rustdoc reports diagnostics against `very/../long/PATH.md` instead of normalizing to `PATH.md` | The same repro as #130470 can be used here: https://github.com/fasterthanlime/readme-md-error-reporting
When running `cargo +stage1 t`, the output is:
```
---- src/../README.md - (line 5) stdout ----
error: expected `;`, found `}`
--> src/../README.md:7:16
|
3 | let x = 234 // no semicolon here! oh no!
| ^ help: add `;` here
4 | }
| - unexpected token
```
But I would argue it should be this:
```
---- README.md - (line 5) stdout ----
error: expected `;`, found `}`
--> README.md:7:16
|
3 | let x = 234 // no semicolon here! oh no!
| ^ help: add `;` here
4 | }
| - unexpected token
```
I would caution against using [canonicalize](https://doc.rust-lang.org/stable/std/fs/fn.canonicalize.html), which resolves symbolic links etc. and requires the path to exist on disk right now; instead, I'd advise simply running a state machine over a stack of path elements, going through:
* []
* ["src"]
* [] // just popped due to ".."
* ["README.md"] | A-diagnostics,T-compiler,C-bug | low | Critical |
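The stack approach sketched above could look like this in Python (an illustrative sketch only; it is purely lexical and, for brevity, silently drops a leading `..`):

```python
def normalize_display_path(path: str) -> str:
    """Lexically normalize a path for display: push segments onto a
    stack, pop on "..", and skip "." and empty segments.
    No filesystem access, so the path need not exist on disk."""
    stack: list[str] = []
    for segment in path.split("/"):
        if segment == "..":
            if stack:  # NOTE: a leading ".." is dropped (simplification)
                stack.pop()
        elif segment not in ("", "."):
            stack.append(segment)
    return "/".join(stack)

print(normalize_display_path("src/../README.md"))  # README.md
```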
2,539,120,878 | kubernetes | PodScheduled status.conditions field does not have an entry in `managedFields` for Pod | ### What happened?
In the managed fields ownership for a Pod, no owner entry is present for `f.conditions: k:{"type":"PodScheduled"}`
### What did you expect to happen?
Based on https://github.com/kubernetes/kubernetes/blob/v1.31.1/pkg/kubelet/status/status_manager.go#L639-L640 I expect all these conditions to be owned by the `kubelet` manager.
### How can we reproduce it (as minimally and precisely as possible)?
Schedule a pod
`kubectl get pod -o yaml --show-managed-fields`
Look for `k:{"type":"PodScheduled"}` and it will not be present
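To check this programmatically, one could feed the JSON from `kubectl get pod <name> -o json --show-managed-fields` into a small helper like this (illustrative only; the nested `fieldsV1` structure mirrors what kubectl emits):

```python
POD_SCHEDULED_KEY = 'k:{"type":"PodScheduled"}'

def owners_of_pod_scheduled(pod: dict) -> list[str]:
    """Return the field managers whose fieldsV1 entry covers the
    PodScheduled condition under f:status / f:conditions."""
    owners = []
    for entry in pod.get("metadata", {}).get("managedFields", []):
        conditions = (
            entry.get("fieldsV1", {})
            .get("f:status", {})
            .get("f:conditions", {})
        )
        if POD_SCHEDULED_KEY in conditions:
            owners.append(entry.get("manager", "<unknown>"))
    return owners

# Synthetic pod: kubelet owns the Ready condition but not PodScheduled,
# mirroring the behavior described above.
pod = {
    "metadata": {
        "managedFields": [
            {
                "manager": "kubelet",
                "fieldsV1": {
                    "f:status": {"f:conditions": {'k:{"type":"Ready"}': {}}}
                },
            }
        ]
    }
}
print(owners_of_pod_scheduled(pod))  # []
```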
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
```
</details>
### Cloud provider
<details>
running on my local machine
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
# paste output here
Linux kind-control-plane 6.6.26-linuxkit #1 SMP Sat Apr 27 04:13:19 UTC 2024 aarch64 GNU/Linux
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
kind create cluster
</details>
### Container runtime (CRI) and version (if applicable)
_No response_
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_ | kind/bug,sig/node,triage/accepted | low | Critical |
2,539,150,370 | deno | UX: Successful `deno install` should print something to terminal | Currently, when a `deno install` command completes successfully you get no output. Whilst this is common in the unix ecosystem, it's uncommon in the JS one. Most JS developers likely expect a command to print out that it did something.
```sh
$ deno install
$
```
We should print some sort of success message. Example
```sh
$ deno install
+ package-1
+ package-2
```
pnpm's output is quite good in that case.
Version: Deno x.x.x
| feat | low | Minor |
2,539,151,438 | tauri | [bug] pkg-config cant see gdk-3.0.pc, when its clearly presented | ### Describe the bug
```
Running BeforeDevCommand (`npm run dev`)
> cross-platform@0.1.0 dev
> vite
VITE v5.4.3 ready in 211 ms
➜ Local: http://localhost:1420/
Info Watching /home/bittermann/projects/cross-platform/src-tauri for changes...
Compiling glib-sys v0.18.1
Compiling gobject-sys v0.18.0
Compiling gio-sys v0.18.1
Compiling gdk-sys v0.18.0
error: failed to run custom build command for `gdk-sys v0.18.0`
Caused by:
process didn't exit successfully: `/home/bittermann/projects/cross-platform/src-tauri/target/debug/build/gdk-sys-439577ead3d06791/build-script-build` (exit status: 1)
--- stdout
cargo:rerun-if-env-changed=GDK_3.0_NO_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
cargo:rerun-if-env-changed=PKG_CONFIG_PATH
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
--- stderr
pkg-config exited with status code 1
> PKG_CONFIG_PATH=/nix/store/m1v13sqd0h9qfvynrj20aqx17d5chkin-gtk+3-3.24.41-dev/lib/pkgconfig:/nix/store/gdr1ghkakyjfdj8bc6n0virwllm4zpwz-glib-2.80.2-dev/lib/pkgconfig:/nix/store/ljrz822j7b6db3jnmw1jiafgrm8armcw-cairo-1.18.0-dev/lib/pkgconfig:/nix/store/gqkw1wn30p059dd1qzip7iz8zf810rx6-pango-1.52.2-dev/lib/pkgconfig:/nix/store/gx938f6w31a0alyqm81c6fp70br4xqqy-gdk-pixbuf-2.42.11-dev/lib/pkgconfig:/nix/store/srmh4pyhivcjsmaf9mjvafwp2wgynbdp-webkitgtk-2.44.2+abi=4.1-dev/lib/pkgconfig:/nix/store/cmfa2zrlji2lg5ng9ds9zwp38z3zw3i5-librsvg-2.58.0-dev/lib/pkgconfig PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 pkg-config --libs --cflags gdk-3.0 gdk-3.0 >= 3.22
The system library `gdk-3.0` required by crate `gdk-sys` was not found.
The file `gdk-3.0.pc` needs to be installed and the PKG_CONFIG_PATH environment variable must contain its parent directory.
PKG_CONFIG_PATH contains the following:
- /nix/store/m1v13sqd0h9qfvynrj20aqx17d5chkin-gtk+3-3.24.41-dev/lib/pkgconfig
- /nix/store/gdr1ghkakyjfdj8bc6n0virwllm4zpwz-glib-2.80.2-dev/lib/pkgconfig
- /nix/store/ljrz822j7b6db3jnmw1jiafgrm8armcw-cairo-1.18.0-dev/lib/pkgconfig
- /nix/store/gqkw1wn30p059dd1qzip7iz8zf810rx6-pango-1.52.2-dev/lib/pkgconfig
- /nix/store/gx938f6w31a0alyqm81c6fp70br4xqqy-gdk-pixbuf-2.42.11-dev/lib/pkgconfig
- /nix/store/srmh4pyhivcjsmaf9mjvafwp2wgynbdp-webkitgtk-2.44.2+abi=4.1-dev/lib/pkgconfig
- /nix/store/cmfa2zrlji2lg5ng9ds9zwp38z3zw3i5-librsvg-2.58.0-dev/lib/pkgconfig
HINT: you may need to install a package such as gdk-3.0, gdk-3.0-dev or gdk-3.0-devel.
warning: build failed, waiting for other jobs to finish...
```
but `gdk-3.0.pc` is present
```
[bittermann@nixos:~/projects/cross-platform]$ l /nix/store/m1v13sqd0h9qfvynrj20aqx17d5chkin-gtk+3-3.24.41-dev/lib/pkgconfig
total 40K
dr-xr-xr-x 1 root root 318 Jan 1 1970 .
dr-xr-xr-x 1 root root 18 Jan 1 1970 ..
-r--r--r-- 1 root root 361 Jan 1 1970 gail-3.0.pc
-r--r--r-- 1 root root 696 Jan 1 1970 gdk-3.0.pc
-r--r--r-- 1 root root 696 Jan 1 1970 gdk-broadway-3.0.pc
-r--r--r-- 1 root root 696 Jan 1 1970 gdk-wayland-3.0.pc
-r--r--r-- 1 root root 696 Jan 1 1970 gdk-x11-3.0.pc
-r--r--r-- 1 root root 693 Jan 1 1970 gtk+-3.0.pc
-r--r--r-- 1 root root 693 Jan 1 1970 gtk+-broadway-3.0.pc
-r--r--r-- 1 root root 492 Jan 1 1970 gtk+-unix-print-3.0.pc
-r--r--r-- 1 root root 693 Jan 1 1970 gtk+-wayland-3.0.pc
-r--r--r-- 1 root root 693 Jan 1 1970 gtk+-x11-3.0.pc
```
### Reproduction
entering dev shell with ``nix-shell``
initializing project with ``npm create tauri-app@latest -- --rc``
running node_modules/.bin/tauri dev
default.nix
```nix
{
pkgs ? import <nixpkgs> {},
runScript ? ''$PWD/node_modules/.bin/tauri''
}:
let
android-nixpkgs = pkgs.callPackage (import (builtins.fetchGit {
url = "https://github.com/tadfisher/android-nixpkgs.git";
})) {
# Default; can also choose "beta", "preview", or "canary".
channel = "stable";
};
android-sdk = android-nixpkgs.sdk (sdkPkgs: with sdkPkgs; [
build-tools-34-0-0
cmdline-tools-latest
emulator
platform-tools
platforms-android-34
# Other useful packages for a development environment.
ndk-26-1-10909125
# skiaparser-3
# sources-android-34
]);
in
pkgs.buildFHSUserEnv {
name = "tauri-env";
targetPkgs = pkgs:
[
android-sdk
]++(with pkgs; [
clang
gradle
jdk
rustup
webkitgtk_4_1
curl
wget
xorg.libxcb
openssl
libayatana-appindicator
librsvg
androidStudioPackages.stable
pkg-config
nodejs_22
glib
gtk3
gdk-pixbuf
pango
cairo
kotlin-language-server
nodePackages.typescript-language-server
vscode-langservers-extracted
]);
runScript = runScript;
profile = ''
export ANDROID_HOME=${android-sdk}/share/android-sdk
export ANDROID_SDK_ROOT=${android-sdk}/share/android-sdk
export NDK_HOME=${android-sdk}/share/android-sdk/ndk/26.1.10909125
export JAVA_HOME=${pkgs.jdk.home}
export HOSTNAME=tauri-dev-env
export PKG_CONFIG_PATH="${pkgs.gtk3.dev}/lib/pkgconfig:${pkgs.glib.dev}/lib/pkgconfig:${pkgs.cairo.dev}/lib/pkgconfig:${pkgs.pango.dev}/lib/pkgconfig:${pkgs.gdk-pixbuf.dev}/lib/pkgconfig:${pkgs.webkitgtk_4_1.dev}/lib/pkgconfig:${pkgs.librsvg.dev}/lib/pkgconfig"
'';
}
```
shell.nix
```nix
{ pkgs ? import <nixpkgs> {}}:
let
environment = import ./default.nix { inherit pkgs; runScript = ''bash'';};
in
environment.env
```
### Expected behavior
normal execution of ``tauri dev``
### Full `tauri info` output
```text
[✔] Environment
- OS: NixOS 24.5.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.44.2
✔ rsvg2: 2.58.0
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.26.0 (1980-01-01)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.2.0
- npm: 10.7.0
[-] Packages
- tauri 🦀: 2.0.0-rc.10
- tauri-build 🦀: 2.0.0-rc.9
- wry 🦀: 0.43.1
- tao 🦀: 0.30.0
- @tauri-apps/api : 2.0.0-rc.4
- @tauri-apps/cli : 2.0.0-rc.12
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-shell : 2.0.0-rc.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
OS: NixOS 24.05
Nixpkgs 24.05 | type: bug,status: needs triage,platform: Nix/NixOS | low | Critical |
2,539,185,684 | rust | RFC #1733 (Trait Aliases): compilation crash when putting "diagnostic::on_unimplemented" on alias |
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=7b59ad99144e849738f87583bb664c44
### Code
```Rust
#![feature(trait_alias)]
trait Test {
}
#[diagnostic::on_unimplemented(
message="message",
label="label",
note="note"
)]
trait Alias = Test;
// Use trait alias as bound on type parameter.
fn foo<T: Alias>(v: &T) {
}
pub fn main() {
foo(&1);
}
```
### Meta
`rustc --version --verbose`:
```
nightly 2024-09-19 506f22b4663f3e756e1e
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect.rs:1689:22:
$ident: found Item { ident: Alias#0, owner_id: DefId(0:7 ~ playground[b3b9]::Alias), kind: TraitAlias(Generics { params: [], predicates: [], has_where_clause_predicates: false, where_clause_span: src/main.rs:13:19: 13:19 (#0), span: src/main.rs:13:12: 13:12 (#0) }, [Trait(PolyTraitRef { bound_generic_params: [], trait_ref: TraitRef { path: Path { span: src/main.rs:13:15: 13:19 (#0), res: Def(Trait, DefId(0:3 ~ playground[b3b9]::Test)), segments: [PathSegment { ident: Test#0, hir_id: HirId(DefId(0:7 ~ playground[b3b9]::Alias).1), res: Def(Trait, DefId(0:3 ~ playground[b3b9]::Test)), args: None, infer_args: false }] }, hir_ref_id: HirId(DefId(0:7 ~ playground[b3b9]::Alias).2) }, span: src/main.rs:13:15: 13:19 (#0) }, None)]), span: src/main.rs:13:1: 13:20 (#0), vis_span: src/main.rs:13:1: 13:1 (#0) }
stack backtrace:
0: 0x7ffb9d1c3c9a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h931fa8a693ee76bf
1: 0x7ffb9d9577d7 - core::fmt::write::ha344a87a3011b8f5
2: 0x7ffb9e8608b3 - std::io::Write::write_fmt::ha4f1ac9a9087f93c
3: 0x7ffb9d1c3af2 - std::sys::backtrace::BacktraceLock::print::h2930c514ae5653cd
4: 0x7ffb9d1c6271 - std::panicking::default_hook::{{closure}}::hc7a5b0b766fd5663
5: 0x7ffb9d1c60a4 - std::panicking::default_hook::h9d542ef7bbd51eed
6: 0x7ffb9c2c3a4f - std[ee7033f0ed262c3b]::panicking::update_hook::<alloc[3af4d0b46ca260e8]::boxed::Box<rustc_driver_impl[2bd8fe8419d4e53d]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7ffb9d1c6988 - std::panicking::rust_panic_with_hook::h426e85c59c56abd7
8: 0x7ffb9d1c675a - std::panicking::begin_panic_handler::{{closure}}::hdbed8eca2a1d0322
9: 0x7ffb9d1c4149 - std::sys::backtrace::__rust_end_short_backtrace::hfaec60235a9405e1
10: 0x7ffb9d1c641c - rust_begin_unwind
11: 0x7ffb99fe10b0 - core::panicking::panic_fmt::he0c447514557f5eb
12: 0x7ffb9c37b861 - rustc_hir[9aec883c6500dd0e]::hir::expect_failed::<&rustc_hir[9aec883c6500dd0e]::hir::Item>
13: 0x7ffb9e05b3af - rustc_hir_analysis[6076d6bf04e2ff00]::collect::impl_trait_header
14: 0x7ffb9dbbb08e - rustc_query_impl[3b3b44733218f5bc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3b3b44733218f5bc]::query_impl::impl_trait_header::dynamic_query::{closure#2}::{closure#0}, rustc_middle[99c6bed041f57f4c]::query::erase::Erased<[u8; 24usize]>>
15: 0x7ffb9dbba4b6 - rustc_query_system[52f0d79d68908be]::query::plumbing::try_execute_query::<rustc_query_impl[3b3b44733218f5bc]::DynamicConfig<rustc_query_system[52f0d79d68908be]::query::caches::DefIdCache<rustc_middle[99c6bed041f57f4c]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[3b3b44733218f5bc]::plumbing::QueryCtxt, false>
16: 0x7ffb9dbba1a8 - rustc_query_impl[3b3b44733218f5bc]::query_impl::impl_trait_header::get_query_non_incr::__rust_end_short_backtrace
17: 0x7ffb9dba8d31 - <rustc_middle[99c6bed041f57f4c]::ty::context::TyCtxt>::trait_id_of_impl
18: 0x7ffb9d017f3d - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::traits::on_unimplemented::OnUnimplementedFormatString>::try_parse
19: 0x7ffb9d016c8c - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::traits::on_unimplemented::OnUnimplementedDirective>::parse
20: 0x7ffb9bc3ddc2 - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::traits::on_unimplemented::OnUnimplementedDirective>::parse_attribute
21: 0x7ffb9dc66e9e - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::traits::on_unimplemented::OnUnimplementedDirective>::of_item
22: 0x7ffb9d0153af - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::TypeErrCtxt>::on_unimplemented_note
23: 0x7ffb9cffe812 - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::TypeErrCtxt>::report_selection_error
24: 0x7ffb9d07c0cb - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::TypeErrCtxt>::report_fulfillment_error
25: 0x7ffb9d0444c7 - <rustc_trait_selection[921af4f56dbc3556]::error_reporting::TypeErrCtxt>::report_fulfillment_errors
26: 0x7ffb9a56e063 - <rustc_hir_typeck[30566531c8d8333f]::fn_ctxt::FnCtxt>::confirm_builtin_call
27: 0x7ffb9e4ae7b9 - <rustc_hir_typeck[30566531c8d8333f]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
28: 0x7ffb9e4a81fe - <rustc_hir_typeck[30566531c8d8333f]::fn_ctxt::FnCtxt>::check_block_with_expected
29: 0x7ffb9e4af01d - <rustc_hir_typeck[30566531c8d8333f]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
30: 0x7ffb9d9f4b15 - rustc_hir_typeck[30566531c8d8333f]::check::check_fn
31: 0x7ffb9dcef2fe - rustc_hir_typeck[30566531c8d8333f]::typeck
32: 0x7ffb9dceed49 - rustc_query_impl[3b3b44733218f5bc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3b3b44733218f5bc]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[99c6bed041f57f4c]::query::erase::Erased<[u8; 8usize]>>
33: 0x7ffb9dce16fd - rustc_query_system[52f0d79d68908be]::query::plumbing::try_execute_query::<rustc_query_impl[3b3b44733218f5bc]::DynamicConfig<rustc_query_system[52f0d79d68908be]::query::caches::VecCache<rustc_span[d9e8be537ef50d77]::def_id::LocalDefId, rustc_middle[99c6bed041f57f4c]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[3b3b44733218f5bc]::plumbing::QueryCtxt, false>
34: 0x7ffb9dce044d - rustc_query_impl[3b3b44733218f5bc]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
35: 0x7ffb9dce00c7 - <rustc_middle[99c6bed041f57f4c]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[6076d6bf04e2ff00]::check_crate::{closure#4}>::{closure#0}
36: 0x7ffb9dcddf6e - rustc_hir_analysis[6076d6bf04e2ff00]::check_crate
37: 0x7ffb9dcda8c5 - rustc_interface[7c2e5611e1eae4f6]::passes::run_required_analyses
38: 0x7ffb9e504ade - rustc_interface[7c2e5611e1eae4f6]::passes::analysis
39: 0x7ffb9e504ab1 - rustc_query_impl[3b3b44733218f5bc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3b3b44733218f5bc]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[99c6bed041f57f4c]::query::erase::Erased<[u8; 1usize]>>
40: 0x7ffb9e7eb06e - rustc_query_system[52f0d79d68908be]::query::plumbing::try_execute_query::<rustc_query_impl[3b3b44733218f5bc]::DynamicConfig<rustc_query_system[52f0d79d68908be]::query::caches::SingleCache<rustc_middle[99c6bed041f57f4c]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3b3b44733218f5bc]::plumbing::QueryCtxt, false>
41: 0x7ffb9e7eadcf - rustc_query_impl[3b3b44733218f5bc]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
42: 0x7ffb9e65c25e - rustc_interface[7c2e5611e1eae4f6]::interface::run_compiler::<core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>, rustc_driver_impl[2bd8fe8419d4e53d]::run_compiler::{closure#0}>::{closure#1}
43: 0x7ffb9e714a10 - std[ee7033f0ed262c3b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[7c2e5611e1eae4f6]::util::run_in_thread_with_globals<rustc_interface[7c2e5611e1eae4f6]::util::run_in_thread_pool_with_globals<rustc_interface[7c2e5611e1eae4f6]::interface::run_compiler<core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>, rustc_driver_impl[2bd8fe8419d4e53d]::run_compiler::{closure#0}>::{closure#1}, core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>>::{closure#0}, core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>>
44: 0x7ffb9e71507a - <<std[ee7033f0ed262c3b]::thread::Builder>::spawn_unchecked_<rustc_interface[7c2e5611e1eae4f6]::util::run_in_thread_with_globals<rustc_interface[7c2e5611e1eae4f6]::util::run_in_thread_pool_with_globals<rustc_interface[7c2e5611e1eae4f6]::interface::run_compiler<core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>, rustc_driver_impl[2bd8fe8419d4e53d]::run_compiler::{closure#0}>::{closure#1}, core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>>::{closure#0}, core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[60b858db6614a1bf]::result::Result<(), rustc_span[d9e8be537ef50d77]::ErrorGuaranteed>>::{closure#1} as core[60b858db6614a1bf]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
45: 0x7ffb9e71546b - std::sys::pal::unix::thread::Thread::new::thread_start::h172f6b61f4e28e39
46: 0x7ffb98d25609 - start_thread
47: 0x7ffb98c4a353 - clone
48: 0x0 - <unknown>
```
| I-ICE,T-compiler,C-bug,F-trait_alias,F-on_unimplemented,S-bug-has-test | low | Critical |
2,539,186,066 | vscode | Window is often its minimum size after starting VS Code | On Windows I have a setup where my left monitor is 100% scale and my right monitor is 200% scale. I frequently move windows between the two, including moving maximized windows via win+shift+left/right.
Related: https://github.com/microsoft/vscode/issues/229154 | bug,windows,multi-monitor | low | Minor |
2,539,214,166 | pytorch | [pipelining] output shape validation / lazy inference | 1) A specific question about this function: is it even working for multi-stage schedules, or only single-stage schedules? What are its semantics? It also needs to be updated for stages that return more than one output tensor.
```
def _validate_stage_shapes(pipeline_stages: List[PipelineStage]):
"""
Check that the buffer shapes match between stages was expected by performing an all_gather between
all stages.
"""
```
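For reference, here is a communication-free sketch of what "shapes match between stages" could mean for a multi-stage schedule with multiple output tensors per stage (names and semantics are my assumption; the real function additionally gathers the shapes with an all_gather):

```python
Shape = tuple[int, ...]

def check_stage_shapes(stage_io: list[tuple[list[Shape], list[Shape]]]) -> None:
    """stage_io[i] = (input_shapes, output_shapes) for stage i.
    Stage i may emit several output tensors; they must match
    stage i+1's input tensors pairwise, in order."""
    for i in range(len(stage_io) - 1):
        outs, ins = stage_io[i][1], stage_io[i + 1][0]
        if outs != ins:
            raise ValueError(
                f"stage {i} outputs {outs} != stage {i + 1} inputs {ins}"
            )

# Two stages; the second consumes two tensors produced by the first.
check_stage_shapes([
    ([(8, 128)], [(8, 512), (8, 1)]),
    ([(8, 512), (8, 1)], [(8, 10)]),
])  # passes silently
```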
2) What is our plan for output shape handling? We may want validation logic if users optionally pass output shapes, or we may want to always use lazy shape inference and delete validation. Currently we only support manual shape passing?
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | oncall: distributed | low | Minor |
2,539,214,813 | godot | FileDialog unable to be interacted with when launched from PopupWindow | ### Tested versions
- Reproducible in 4.2.2 stable and 4.3 stable, untested in other versions
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - Intel(R) UHD Graphics 620 (Intel Corporation; 25.20.100.6577) - Intel(R) Core(TM) i3-8130U CPU @ 2.20GHz (4 Threads)
### Issue description
When launched from a PopupWindow, either directly or via callback, the FileDialog window cannot be closed, moved, or meaningfully interacted with.
### Steps to reproduce
In the MRP, click the "save" button to directly launch the FileDialog window (this should work). Click the "save with popup" button to launch a PopupWindow which can launch the FileDialog. This FileDialog exhibits the strange behavior.
### Minimal reproduction project (MRP)
[FileDialogIssueMRP.zip](https://github.com/user-attachments/files/17078093/FileDialogIssueMRP.zip)
| bug,needs testing,topic:dotnet,topic:gui | low | Major |
2,539,233,977 | vscode | opening inline chat near the top of a notebook editor will push cell content above scrollable viewport |
Steps to Reproduce:
1. open a notebook with some content and scroll to the top
2. focus in the middle of the first cell and open the inline chat (ctrl + i)
:bug: part of the cell content can no longer be scrolled to

| bug,notebook-layout,notebook-cell-editor | low | Critical |
2,539,277,667 | tensorflow | TensorFlow keeps creating threads when multi-GPU training (thread leak) | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.11.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
11.4
### GPU model and memory
Nvidia A800
### Current behavior?
I was using this machine to train a GPT-2 example from https://keras.io/examples/generative/gpt2_text_generation_with_kerasnlp/ (the data I used can also be found at https://github.com/chinese-poetry/chinese-poetry.git).
Before we start, here is a baseline count of this machine's threads:

When starting multi-GPU training in TensorFlow via tf.distribute.MirroredStrategy, the training process worked fine as usual.
But as time went by, I found the number of threads increased as training progressed. Here is evidence of the thread count increasing while processing the 3354th batch (I used `cat /proc/<this program's pid>/status` to check the number of threads):



Then, evidence of the thread count increasing while processing the 3791st batch (the number of threads reached 22178):



When calculating the 5054th batch, the training program got an error, and I captured a count of threads just before the error (it had reached around 31120 threads):


I checked a similar issue in https://github.com/tensorflow/tensorflow/issues/62466, but I cannot find a solution. Moreover, I have run other examples, like a diffusion model, on this machine with the same TensorFlow env and multi-GPU training, which worked fine with no problems. So could you please help with this problem? Very much appreciated.
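For monitoring this over time, here is a small helper (it assumes the Linux /proc layout) that parses the same `Threads:` line I was reading with `cat /proc/<pid>/status`:

```python
def thread_count(status_text: str) -> int:
    """Parse the 'Threads:' line from the content of /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith("Threads:"):
            return int(line.split()[1])
    raise ValueError("no 'Threads:' line found")

def current_thread_count(pid: str = "self") -> int:
    # Linux only: /proc/self/status describes the calling process.
    with open(f"/proc/{pid}/status") as f:
        return thread_count(f.read())

print(thread_count("Name:\tpython\nThreads:\t42\nPid:\t1\n"))  # 42
```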
### Standalone code to reproduce the issue
```python
# Here is my code
import os
os.environ["KERAS_BACKEND"] = "tensorflow" # or "tensorflow" or "torch"
import keras_nlp
import keras
import tensorflow as tf
import time
keras.mixed_precision.set_global_policy("mixed_float16")
import os
import json
import datetime
train_ds = (
tf.data.Dataset.from_tensor_slices(paragraphs)
.batch(36)
.cache()
.prefetch(tf.data.AUTOTUNE)
)
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3", "GPU:4", "GPU:5"])
print("Number of devices: {}".format(strategy.num_replicas_in_sync))
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
"gpt2_base_en",
sequence_length=128,
)
# Open a strategy scope.
with strategy.scope():
# To speed up training and generation, we use preprocessor of length 128
# instead of full length 1024.
gpt2_lm = keras_nlp.models.GPT2CausalLM.from_preset(
"gpt2_base_en", preprocessor=preprocessor
)
num_epochs = 5
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
save_weights_only=True,
monitor="accuracy",
# monitor="i_loss",
mode="min",
save_best_only=True,
save_freq="epoch"
)
learning_rate = 5e-4
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
gpt2_lm.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=loss,
weighted_metrics=["accuracy"],
)
gpt2_lm.fit(train_ds, epochs=num_epochs, callbacks=[
checkpoint_callback,
tensorboard_callback,
],
)
```
### Relevant log output
```shell
2024-09-21 00:24:25.840848: W tensorflow/compiler/tf2xla/kernels/random_ops.cc:57] Warning: Using tf.random.uniform with XLA compilation will ignore seeds; consider using tf.random.stateless_uniform instead if reproducible behavior is desired. gpt2_causal_lm/gpt2_backbone/embeddings_dropout/dropout/random_uniform/RandomUniform
2024-09-21 00:24:25.846135: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-09-21 00:24:26.559075: W tensorflow/compiler/tf2xla/kernels/assert_op.cc:38] Ignoring Assert operator sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert
2024-09-21 00:24:26.572007: W tensorflow/compiler/tf2xla/kernels/assert_op.cc:38] Ignoring Assert operator sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert
2024-09-21 00:24:26.573183: W tensorflow/compiler/tf2xla/kernels/assert_op.cc:38] Ignoring Assert operator sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert
2024-09-21 00:24:26.581128: W tensorflow/compiler/tf2xla/kernels/assert_op.cc:38] Ignoring Assert operator sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert
2024-09-21 00:24:26.581197: W tensorflow/compiler/tf2xla/kernels/assert_op.cc:38] Ignoring Assert operator sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert
2024-09-21 00:24:26.588190: W tensorflow/compiler/tf2xla/kernels/assert_op.cc:38] Ignoring Assert operator sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert
12024-09-21 00:25:01.986918: I tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:325] ptxas warning : Registers are spilled to local memory in function 'fusion_1267'
ptxas warning : Registers are spilled to local memory in function 'fusion_1225'
ptxas warning : Registers are spilled to local memory in function 'fusion_1111'
ptxas warning : Registers are spilled to local memory in function 'fusion_1220'
ptxas warning : Registers are spilled to local memory in function 'fusion_1124'
ptxas warning : Registers are spilled to local memory in function 'fusion_1175'
ptxas warning : Registers are spilled to local memory in function 'fusion_1128'
ptxas warning : Registers are spilled to local memory in function 'fusion_1006'
ptxas warning : Registers are spilled to local memory in function 'fusion_1015'
2024-09-21 00:25:02.215951: I tensorflow/compiler/jit/xla_compilation_cache.cc:477] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
2024-09-21 00:25:02.420703: I tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:325] ptxas warning : Registers are spilled to local memory in function 'fusion_1267'
ptxas warning : Registers are spilled to local memory in function 'fusion_1225'
ptxas warning : Registers are spilled to local memory in function 'fusion_1111'
ptxas warning : Registers are spilled to local memory in function 'fusion_1220'
ptxas warning : Registers are spilled to local memory in function 'fusion_1124'
ptxas warning : Registers are spilled to local memory in function 'fusion_1175'
ptxas warning : Registers are spilled to local memory in function 'fusion_1128'
ptxas warning : Registers are spilled to local memory in function 'fusion_1006'
ptxas warning : Registers are spilled to local memory in function 'fusion_1015'
2024-09-21 00:25:02.542130: I tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:325] ptxas warning : Registers are spilled to local memory in function 'fusion_1267'
ptxas warning : Registers are spilled to local memory in function 'fusion_1225'
ptxas warning : Registers are spilled to local memory in function 'fusion_1111'
ptxas warning : Registers are spilled to local memory in function 'fusion_1220'
ptxas warning : Registers are spilled to local memory in function 'fusion_1124'
ptxas warning : Registers are spilled to local memory in function 'fusion_1175'
ptxas warning : Registers are spilled to local memory in function 'fusion_1128'
ptxas warning : Registers are spilled to local memory in function 'fusion_1006'
ptxas warning : Registers are spilled to local memory in function 'fusion_1015'
2024-09-21 00:25:03.595774: I tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:325] ptxas warning : Registers are spilled to local memory in function 'fusion_1267'
ptxas warning : Registers are spilled to local memory in function 'fusion_1225'
ptxas warning : Registers are spilled to local memory in function 'fusion_1111'
ptxas warning : Registers are spilled to local memory in function 'fusion_1220'
ptxas warning : Registers are spilled to local memory in function 'fusion_1124'
ptxas warning : Registers are spilled to local memory in function 'fusion_1175'
ptxas warning : Registers are spilled to local memory in function 'fusion_1128'
ptxas warning : Registers are spilled to local memory in function 'fusion_1006'
ptxas warning : Registers are spilled to local memory in function 'fusion_1015'
2024-09-21 00:25:04.071526: I tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:325] ptxas warning : Registers are spilled to local memory in function 'fusion_1267'
ptxas warning : Registers are spilled to local memory in function 'fusion_1225'
ptxas warning : Registers are spilled to local memory in function 'fusion_1111'
ptxas warning : Registers are spilled to local memory in function 'fusion_1220'
ptxas warning : Registers are spilled to local memory in function 'fusion_1124'
ptxas warning : Registers are spilled to local memory in function 'fusion_1175'
ptxas warning : Registers are spilled to local memory in function 'fusion_1128'
ptxas warning : Registers are spilled to local memory in function 'fusion_1006'
ptxas warning : Registers are spilled to local memory in function 'fusion_1015'
2024-09-21 00:25:04.130504: I tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:325] ptxas warning : Registers are spilled to local memory in function 'fusion_1267'
ptxas warning : Registers are spilled to local memory in function 'fusion_1225'
ptxas warning : Registers are spilled to local memory in function 'fusion_1111'
ptxas warning : Registers are spilled to local memory in function 'fusion_1220'
ptxas warning : Registers are spilled to local memory in function 'fusion_1124'
ptxas warning : Registers are spilled to local memory in function 'fusion_1175'
ptxas warning : Registers are spilled to local memory in function 'fusion_1128'
ptxas warning : Registers are spilled to local memory in function 'fusion_1006'
ptxas warning : Registers are spilled to local memory in function 'fusion_1015'
2024-09-21 00:25:05.333297: I tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:630] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
5054/8663 [================>.............] - ETA: 9:14 - loss: 11.8715 - accuracy: 2.2702terminate called after throwing an instance of 'std::system_error'
terminate called recursively
what(): Resource temporarily unavailable
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:dist-strat,TF 2.11 | medium | Critical |
2,539,284,678 | go | x/tools/go/ssa: stop using deprecated x/tools/go/loader in unit tests | ```
ssa - @timothy-king
go/ssa/ssautil/switch_test.go
go/ssa/ssautil/load.go
go/ssa/example_test.go
go/ssa/interp/interp_test.go
go/ssa/builder_generic_test.go
go/ssa/builder_test.go
refactor/rename - these are deprecated anyway - see https://github.com/golang/go/issues/69538
refactor/rename/rename.go
refactor/rename/mvpkg.go
refactor/rename/spec.go
refactor/rename/check.go
eg - in progress https://go.dev/cl/616215
refactor/eg/eg_test.go
gcimporter - in progress https://go.dev/cl/614676
internal/gcimporter/iexport_test.go
objectpath - in progress https://go.dev/cl/614678
go/types/objectpath/objectpath_test.go
go/types/objectpath/objectpath_go118_test.go
callgraph - https://go.dev/cl/614679
go/callgraph/cha/cha_test.go
go/callgraph/vta/helpers_test.go
go/callgraph/callgraph_test.go
go/callgraph/static/static_test.go
``` | NeedsFix,Tools,Analysis,FixPending | medium | Major |
2,539,318,885 | PowerToys | Crop And Lock | ### Description of the new feature / enhancement
Hello,
I would like to suggest an improvement to the Crop and Lock feature. Currently, when I crop a window, the cropped section appears in a new window. However, I would prefer that the cropped area remains within the original window instead of generating a new one. To control this feature, an extra button could be added to the top bar of the window, allowing users to easily toggle the crop functionality on or off.
### Scenario when this would be used?
This feature would be particularly useful for users who work on a single screen and want to minimize distractions by focusing on a specific part of a window. The current functionality is likely designed with dual-screen or presentation setups in mind, but many users aim to improve their workflow by focusing on one screen while clearing away unnecessary elements. The ability to manage everything within one window would make the experience smoother and more efficient.
### Supporting information
If the option to keep the cropped area within the original window is not feasible or too complex, an alternative improvement could be to ensure that the second window is not just a visual display. The cropped section should remain functional, allowing users to interact with that area. For example, in the cropped section, users should be able to start or pause a video, or scroll through messages in a form. This would provide both a visual and functional experience for users. | Needs-Triage | low | Major |
2,539,502,648 | pytorch | Runbook for GCP A100 runners should be updated | ### 🐛 Describe the bug
Right now there isn't anything in the runbook about GCP runners, and there are no alerts for them either
Current queue for linux.gcp.a100 is over a day and there are 0 runners with that label available although machines are up and running

### Versions
Infra
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim | high priority,triaged,module: infra | low | Critical |
2,539,550,462 | fastapi | Sponsor Badge CSS overflow issue on the docs | ### Discussed in https://github.com/fastapi/fastapi/discussions/12218
<div type='discussions-op-text'>
<sup>Originally posted by **nat236919** September 19, 2024</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
.announce-wrapper .sponsor-badge {
display: block;
position: absolute;
top: -10px;
right: 0;
font-size: 0.5rem;
color: #999;
background-color: #666;
border-radius: 10px;
padding: 0 10px;
z-index: 10;
}
```
### Description

Due to its absolute position and overflow setting, the text is trailing down creating an expected scroll. I think we can simply solve the issue by removing **position: absolute;**
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
NA
### Pydantic Version
NA
### Python Version
NA
### Additional Context
_No response_</div> | question | low | Minor |
2,539,556,953 | fastapi | Regression between 0.113.0 and 0.114.0: OAuth2PasswordRequestForm used to accept grant_type="" | ### Discussed in https://github.com/fastapi/fastapi/discussions/12182
<div type='discussions-op-text'>
<sup>Originally posted by **rbubley** September 10, 2024</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# This is just the code from https://fastapi.tiangolo.com/tutorial/security/oauth2-jwt/#update-the-dependencies
from datetime import datetime, timedelta, timezone
from typing import Annotated
import jwt
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm
from jwt.exceptions import InvalidTokenError
from passlib.context import CryptContext
from pydantic import BaseModel
# to get a string like this run:
# openssl rand -hex 32
SECRET_KEY = "09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7"
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30
fake_users_db = {
"johndoe": {
"username": "johndoe",
"full_name": "John Doe",
"email": "johndoe@example.com",
"hashed_password": "$2b$12$EixZaYVK1fsbw1ZfbX3OXePaWxn96p36WQoeG6Lruj3vjPGga31lW",
"disabled": False,
}
}
class Token(BaseModel):
access_token: str
token_type: str
class TokenData(BaseModel):
username: str | None = None
class User(BaseModel):
username: str
email: str | None = None
full_name: str | None = None
disabled: bool | None = None
class UserInDB(User):
hashed_password: str
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
app = FastAPI()
def verify_password(plain_password, hashed_password):
return pwd_context.verify(plain_password, hashed_password)
def get_password_hash(password):
return pwd_context.hash(password)
def get_user(db, username: str):
if username in db:
user_dict = db[username]
return UserInDB(**user_dict)
def authenticate_user(fake_db, username: str, password: str):
user = get_user(fake_db, username)
if not user:
return False
if not verify_password(password, user.hashed_password):
return False
return user
def create_access_token(data: dict, expires_delta: timedelta | None = None):
to_encode = data.copy()
if expires_delta:
expire = datetime.now(timezone.utc) + expires_delta
else:
expire = datetime.now(timezone.utc) + timedelta(minutes=15)
to_encode.update({"exp": expire})
encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
async def get_current_user(token: Annotated[str, Depends(oauth2_scheme)]):
credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
try:
payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
username: str = payload.get("sub")
if username is None:
raise credentials_exception
token_data = TokenData(username=username)
except InvalidTokenError:
raise credentials_exception
user = get_user(fake_users_db, username=token_data.username)
if user is None:
raise credentials_exception
return user
async def get_current_active_user(
current_user: Annotated[User, Depends(get_current_user)],
):
if current_user.disabled:
raise HTTPException(status_code=400, detail="Inactive user")
return current_user
@app.post("/token")
async def login_for_access_token(
form_data: Annotated[OAuth2PasswordRequestForm, Depends()],
) -> Token:
user = authenticate_user(fake_users_db, form_data.username, form_data.password)
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"},
)
access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
access_token = create_access_token(
data={"sub": user.username}, expires_delta=access_token_expires
)
return Token(access_token=access_token, token_type="bearer")
@app.get("/users/me/", response_model=User)
async def read_users_me(
current_user: Annotated[User, Depends(get_current_active_user)],
):
return current_user
@app.get("/users/me/items/")
async def read_own_items(
current_user: Annotated[User, Depends(get_current_active_user)],
):
return [{"item_id": "Foo", "owner": current_user.username}]
```
### Description
* Open the browser, and navigate to to /token endpoint under the swagger interface.
* delete "password" from grant_access, to leave an empty string
* Put in the usual `johndoe` and `secret` and username and password.
Under 0.113.0 this worked and returned a token.
Under 0.114.0 it returns an error message: a 422 error, and "String should match pattern 'password'"
I expect it to work, because the docs for OAuth2PasswordRequestForm say (/fastapi/security/outh2.py lines 69-72):
```
The OAuth2 spec says it is required and MUST be the fixed string
"password". Nevertheless, this dependency class is permissive and
allows not passing it. If you want to enforce it, use instead the
`OAuth2PasswordRequestFormStrict` dependency.
```
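The "String should match pattern 'password'" message indicates the empty `grant_type` is now run through a regex pattern check before the permissive behavior can apply; a stdlib sketch of why `""` fails while the fixed string passes (not FastAPI's actual implementation):

```python
import re

# An empty grant_type fails the "password" pattern; the fixed string passes.
pattern = re.compile("password")
print(pattern.search("") is not None)          # False -> rejected with a 422
print(pattern.search("password") is not None)  # True  -> accepted
```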
### Operating System
macOS
### Operating System Details
_No response_
### FastAPI Version
0.114.0
### Pydantic Version
2.8.2
### Python Version
3.12.5
### Additional Context
_No response_</div> | question | low | Critical |
2,539,560,433 | deno | node:child_process.kill(0) throws | It should just do a `kill -0`, which just does error checking for the PID and doesn't actually send a signal. From https://man7.org/linux/man-pages/man2/kill.2.html:
> If sig is 0, then no signal is sent, but existence and permission
> checks are still performed; this can be used to check for the
> existence of a process ID or process group ID that the caller is
> permitted to signal.
This issue causes `nuxt dev` to error on exit:
```
ERROR [unhandledRejection] Unknown signal: 0 12:33:59 PM
at toDenoSignal (ext:deno_node/internal/child_process.ts:387:11)
at ChildProcess.kill (ext:deno_node/internal/child_process.ts:296:53)
at kill (node_modules/nuxi/dist/chunks/dev.mjs:246:17)
at Process.<anonymous> (node_modules/nuxi/dist/chunks/dev.mjs:291:7)
at Object.onceWrapper (ext:deno_node/_events.mjs:518:26)
at Process.emit (ext:deno_node/_events.mjs:405:35)
at Process.emit (node:process:416:40)
at Process.exit (node:process:61:13)
at Process.<anonymous> (node_modules/nuxi/dist/chunks/index2.mjs:29886:42)
at Object.onceWrapper (ext:deno_node/_events.mjs:516:28)
```
| bug,node compat | low | Critical |
2,539,562,356 | go | proposal: container/unordered: a generic hash table with custom hash function and equivalence relation | **Background:** In https://github.com/golang/go/issues/69420 I proposed to promote the [golang.org/x/tools/go/types/typeutil.Map](https://pkg.go.dev/golang.org/x/tools/go/types/typeutil#Map) data type to the go/types package, modernizing it with generics. @jimmyfrasche pointed out that really the only part of that proposal that needs to be in go/types is the `types.Hash` function, which should have semantics consistent with `types.Identical`: that is, identical types must have equal hashes. So, I reduced that proposal to just the hash function, and this proposal governs the table.
**Proposal:** the standard library should provide a generic hash table that allows the user to specify the hash function and equivalence relation. Here is a starting point for the API:
```go
package unordered // "container/unordered"
import "iter"
// Map[K, V] is a mapping from keys of type K to values of type V.
// Map keys are considered equivalent according to the eq function provided to NewMap.
type Map[K, V any] struct { ... }
// NewMap returns a new mapping.
// Keys k1, k2 are considered equal if eq(k1, k2); in that case hash(k1) must equal hash(k2).
func NewMap[K, V any](hash func(K) uint64, eq func(K, K) bool) *Map[K, V]
// All returns an iterator over the key/value entries of the map in undefined order.
func (m *Map[K, V]) All() iter.Seq2[K, V]
// At returns the map entry for the given key. It returns zero if the entry is not present.
func (m *Map[K, V]) At(key K) V
// Delete removes the entry with the given key, if any. It returns true if the entry was found.
func (m *Map[K, V]) Delete(key K) bool
// Keys returns an iterator over the map keys in unspecified order.
func (m *Map[K, V]) Keys() iter.Seq[K]
// Values returns an iterator over the map values in unspecified order.
func (m *Map[K, V]) Values() iter.Seq[V]
// Len returns the number of map entries.
func (m *Map[K, V]) Len() int
// Set updates the map entry for key to value, and returns the previous entry, if any.
func (m *Map[K, V]) Set(key K, value V) (prev V)
// String returns a string representation of the map's entries in unspecified order.
// Values are printed as if by fmt.Sprint.
func (m *Map[K, V]) String() string
```
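For illustration, the core contract — `eq(k1, k2)` implies `hash(k1) == hash(k2)` — can be satisfied for a hypothetical case-insensitive key by hashing a canonical form (an ASCII-only sketch, not part of the proposal):

```go
package main

import (
	"fmt"
	"hash/maphash"
	"strings"
)

var seed = maphash.MakeSeed()

// Equivalent keys must hash equally, so hash the canonical
// (lower-cased) form. Full Unicode case folding needs more care.
func hash(k string) uint64 { return maphash.String(seed, strings.ToLower(k)) }
func eq(a, b string) bool  { return strings.EqualFold(a, b) }

func main() {
	fmt.Println(eq("Go", "gO"), hash("Go") == hash("gO"))
}
```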
Open questions:
- Should At and Set distinguish the "present but zero" and "missing" cases?
- Should we defend against hash flooding, complicating the hash function?
- Should we guarantee randomized iteration order?
Related:
- https://github.com/golang/go/issues/69420
- https://github.com/golang/go/issues/60630, its ordered cousin. The APIs should harmonize.
- https://github.com/golang/go/issues/69230, a generic set type. Again the APIs should harmonize. | Proposal,Proposal-Hold | medium | Critical |
2,539,590,701 | pytorch | [pipelining] Improve recv buffer management | First, in #136243 it is annoying that I have to pass n_microbatches to prepare_forward/backward, since that value is only known by the Schedule class but I logically want to prepare the recv buffers lazily / later on inside stage._forward/backward APIs.
Second, it's not clear to me that we want to allocate N buffers when we are using an N microbatch schedule. Better, couldn't we keep a maximum of Y buffers allocated, where Y <= N is the number of recvs that are active at a time in the schedule? When we complete a recv and wait it, we could restore the buffer to the queue and then reuse it when receiving for another microbatch. We'd store the in-use buffers in a dict, keyed by the active microbatch / chunk ID, and then move them from the dict into a queue of inactive buffers once they are out of use.
If we do this, then I think I can kill 2 birds- better peak memory, and delete the copypasta 'prepare_*_infra' calls from all the schedule classes.
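A minimal sketch of that pooling scheme (all names hypothetical; plain `bytearray`s stand in for the real recv tensors):

```python
from collections import deque


class RecvBufferPool:
    """Keep at most max_buffers alive; in-flight buffers are keyed by
    microbatch id, completed ones return to a free queue for reuse."""

    def __init__(self, max_buffers, alloc):
        self.free = deque(alloc() for _ in range(max_buffers))
        self.in_use = {}

    def acquire(self, mb_id):
        buf = self.free.popleft()  # raises if the schedule exceeds Y in-flight recvs
        self.in_use[mb_id] = buf
        return buf

    def release(self, mb_id):
        self.free.append(self.in_use.pop(mb_id))


pool = RecvBufferPool(2, alloc=lambda: bytearray(4))
b0 = pool.acquire(0)
b1 = pool.acquire(1)
pool.release(0)       # microbatch 0's recv was waited; recycle its buffer
b2 = pool.acquire(2)  # reuses microbatch 0's buffer
print(b2 is b0)       # True
```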
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | oncall: distributed | low | Minor |
2,539,611,793 | material-ui | [Autocomplete] dropdown/popper fails to render when wrapped with a ThemeProvider setting default input slotProps on TextField. | ### Steps to reproduce
Steps:
1. Wrap `Autocomplete` with a `ThemeProvider` with the following theme defined:
```
let theme = createTheme({
components: {
MuiTextField: {
defaultProps: {
slotProps: {
input: {
sx: {
margin: "10px",
},
}
},
},
},
},
});
<ThemeProvider theme={theme}>
<Autocomplete
disablePortal
options={['hello']}
renderInput={(params) => (
<TextField
{...params}
label="type something"
/>
)}
/>
</ThemeProvider>
```
2. Click on the input for Autocomplete and no dropdown/popper is displayed with the provided options.
### Current behavior
No dropdown/popper is displayed with the provided options. No runtime errors are incurred either.
### Expected behavior
Able to style `Autocomplete` `TextField` input at the theme level without negatively affecting its popper/dropdown from displaying.
### Context
I want the `Autocomplete` `TextField` input to be styled like every other `TextField` input within my application by using a given `ThemeProvider` to do so.
A similar issue was also encountered with [`DatePicker`](https://github.com/mui/mui-x/issues/14684).
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Linux 5.15 Debian GNU/Linux 12 (bookworm) 12 (bookworm)
Binaries:
Node: 22.9.0 - /usr/local/bin/node
npm: 10.8.3 - /usr/local/bin/npm
pnpm: Not Found
Browsers:
Microsoft Edge
Version 129.0.2792.52 (Official build) (arm64)
Google Chrome
Version 128.0.6613.139 (Official Build) (arm64)
npmPackages:
@emotion/react: ^11.13.0 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/core-downloads-tracker: 6.1.1
@mui/icons-material: ^6.1.1 => 6.1.1
@mui/material: ^6.1.1 => 6.1.1
@mui/private-theming: 6.1.1
@mui/styled-engine: 6.1.1
@mui/system: 6.1.1
@mui/types: 7.2.17
@mui/utils: 6.1.1
@mui/x-date-pickers: ^7.18.0 => 7.18.0
@mui/x-internals: 7.18.0
@types/react: 18.3.8
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
```
</details>
**Search keywords**: Autocomplete popper ThemeProvider theme input slotProps | package: material-ui,component: autocomplete | low | Critical |
2,539,620,595 | vscode | "Ghost" entry created when switching between "new file" and "new folder" in Explorer pane with compact folders enabled | Type: <b>Bug</b>
When attempting to switch between "New file" and "New folder" (as in first clicking new file then new folder or viceversa) with the "compact folders" option on, a "ghost" entry is created. This entry does not exist in the file system and disappears when refreshing the explorer pane.
Steps to reproduce:
1. Create a new parent folder and add another folder as a child. This way, with "compact folders", it should look something like this: `parent/child`.
2. Select the `parent` folder and click on "New File"
3. Now switch to a folder by clicking "New Folder"
4. A new entry should be visible (which is the bug in question)
I will attach a video demonstrating the issue below.

VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz (4 x 2304)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.87GB (3.74GB free)|
|Process Argv|--folder-uri file:///c%3A/Users/Micro/Code/Web/lilac-lms --crash-reporter-id cc3cc104-37cd-4fd4-bbb3-0a16175e626f|
|Screen Reader|no|
|VM|0%|
</details>Extensions disabled<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
f3je6385:31013174
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
nativeloc1:31134641
wkspc-ranged-c:31125598
ei213698:31121563
iacca2:31138163
cc771715:31140165
```
</details>
<!-- generated by issue reporter --> | bug,file-explorer,confirmed | low | Critical |
2,539,642,252 | pytorch | [Meta] torch.export readiness for Recommendation models | ### 🚀 The feature, motivation and pitch
Currently, Meta's internal recommendation models leverage torch.fx + torch.jit.script to capture the graph from a Python program. Since TorchScript is not actively developed anymore, we would like to leverage torch.export to achieve the graph capture, i.e., deprecating TorchScript usage within Meta's recommenders. However, there is still a feature gap. We would like to have the following features:
1. dynamic tensor, for things like relaxing the shape propagation requirement
2. unflatten, to restore the exported program (the torch.export result) to an eager-mode model.
3. recursive / partial export, e.g. first exporting submodules separately, and then exporting the whole model that contains / invokes all the submodules. Sometimes, we may want to skip exporting some part of the model and have some special handling.
### Alternatives
Live with TorchScript forever (unable migrating away from TorchScript).
### Additional context
We would like to explore the success rate of torch.fx.trace + torch.export, where fx.trace cleans the program up first, removing some barriers for torch.export.
We also need to re-write the program to implement the control flow with torch.cond.
cc @ezyang @chauhang @penguinwu | triaged,oncall: pt2 | low | Minor |
2,539,656,618 | go | proposal: context: add iterator for values | ### Proposal Details
I could have sworn this was proposed somewhere before, but I looked all over the place and couldn't find it. If it's a duplicate, sorry.
I propose adding a function along the lines of
```go
func AllValues(ctx context.Context) iter.Seq2[any, any]
```
to the `context` package that would yield all values accessible via `ctx.Value()`. | Proposal | low | Minor |
2,539,661,811 | vscode | image attachment content part cleanup | I guess you didn't start this trend, but we shouldn't be copying/pasting code between the two places that attachments are rendered
_Originally posted by @roblourens in https://github.com/microsoft/vscode/pull/228706#discussion_r1769209469_
| bug,panel-chat | low | Minor |
2,539,672,463 | rust | rustdoc should say if a feature is available by default | when it has a snippet like this under an item:
> Available on crate features fmt and std only.
there should be an additional (enabled by default) note if they're enabled by default. | T-rustdoc,C-enhancement,A-rustdoc-ui,F-doc_cfg,T-rustdoc-frontend | low | Minor |
2,539,684,893 | react-native | Adding onViewableItemsChanged to SectionList changes the items passed to each section's keyExtractor | ### Description
We wrote a SectionList component where each section defines it's own keyExtractor. One of the sections uses a key from a nested object in each item in the section. It broke when we wanted to add visibility tracking to the section list, and it seems like the section objects are being run through the keyExtractor but only when `onViewableItemsChanged` prop is provided.
I expect that only items within each section should be run through the section's keyExtractor function, and that the behavior is consistent regardless of what props are provided.
### Steps to reproduce
1. Open expo snack linked below
2. Run it on android or iOS device and observe it works
3. Uncomment the `onViewableItemsChanged` line from the SectionList and observe the break
### React Native Version
0.75.3
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
See expo snack for reproduction
```
### Stacktrace or Logs
```text
Cannot read property 'id' of undefined
```
### Reproducer
https://snack.expo.dev/V1Ad0VTld860Zdt8n-qb7
### Screenshots and Videos
<img width="1725" alt="Screenshot 2024-09-20 at 5 07 04 PM" src="https://github.com/user-attachments/assets/950e7fd0-b702-4cf4-957a-b8f81ef770b8">
| Issue: Author Provided Repro,Component: SectionList | low | Minor |
2,539,696,720 | ollama | Model request: Ovis1.6-Gemma2-9B small vision model | Performance: With just 10B parameters, [Ovis1.6-Gemma2-9B](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-9B) leads the OpenCompass benchmark among open-source MLLMs within 30B parameters.

| model request | low | Major |
2,539,697,251 | PowerToys | Custom FancyZones Deleted | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
FancyZones
### Steps to reproduce
Updated the app and my custom FancyZones layout got cleared
### ✔️ Expected Behavior
I was expecting my custom FancyZones layout to persist
### ❌ Actual Behavior
My custom FancyZones layout got cleared
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,539,702,050 | rust | match &[first, ..more] leads down a suggestion garden path | ### Code
```
struct Algorithms<'a>(&'a Vec<String>);
impl<'a> fmt::Display for Algorithms<'a> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match &self.0[..] {
&[] => f.write_str("<default>"),
&[one] => f.write_str(&one),
&[first, ..more] => {
f.write_str(&first)?;
for rec in more {
write!(f, ", then {}", rec)?;
}
Ok(())
},
}
}
}
```
### Current output
```
Compiling zram-generator v1.1.2 (/home/nabijaczleweli/code/systemd-zram-generator)
error[E0425]: cannot find value `more` in this scope
--> src/config.rs:205:24
|
205 | &[first, ..more] => {
| ^^^^ not found in this scope
|
help: if you meant to collect the rest of the slice in `more`, use the at operator
|
205 | &[first, more @ ..] => {
| ~~~~~~~~~
```
okay so I do that and get
```
Compiling zram-generator v1.1.2 (/home/nabijaczleweli/code/systemd-zram-generator)
error[E0277]: the size for values of type `[String]` cannot be known at compilation time
--> src/config.rs:205:22
|
205 | &[first, more @ ..] => {
| ^^^^^^^^^ doesn't have a size known at compile-time
|
= help: the trait `Sized` is not implemented for `[String]`
= note: all local variables must have a statically known size
= help: unsized locals are gated as an unstable feature
```
..? definitely sub-optimal; the first thing I tried was
```
error: `..` patterns are not allowed here
--> src/config.rs:205:30
|
205 | &[first, &more @ ..] => {
| ^^
|
= note: only allowed in tuple, tuple struct, and slice patterns
```
so I did `&[first, ref more @ ..]` which worked
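For reference, dropping the leading `&` and letting match ergonomics bind by reference sidesteps both errors; a minimal compiling sketch (not the exact code from the report):

```rust
// Without the `&` patterns, match ergonomics bind `first: &String`
// and `more: &[String]`, so nothing unsized is moved into a local.
fn fmt_algos(algos: &[String]) -> String {
    match algos {
        [] => "<default>".to_string(),
        [one] => one.clone(),
        [first, more @ ..] => {
            let mut s = first.clone();
            for rec in more {
                s.push_str(", then ");
                s.push_str(rec);
            }
            s
        }
    }
}

fn main() {
    let v = vec!["a".to_string(), "b".to_string(), "c".to_string()];
    println!("{}", fmt_algos(&v)); // a, then b, then c
}
```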
### Desired output
idk. probably suggest with the `ref` if appropriate?
### Rationale and extra context
_No response_
### Other cases
_No response_
### Rust Version
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
### Anything else?
_No response_ | A-diagnostics,A-resolve,T-compiler,A-suggestion-diagnostics,D-papercut,D-invalid-suggestion,A-patterns | low | Critical |
2,539,702,896 | godot | Mac Safari accent-picking window can still be visible after hiding an edited LineEdit | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - macOS 14.6.1 - GLES3 (Compatibility) - Apple M2 Pro - Apple M2 Pro (10 Threads)
### Issue description
It is possible to open the Mac accent menu by pressing and holding a key like A, E, etc while editing text. If you edit text in a LineEdit, and then hide that LineEdit (`.visible = false`) and hold down A, E despite not being focused on any control, the Mac accent window will pop up again.
<img width="714" alt="Screenshot 2024-09-20 at 3 21 46 PM" src="https://github.com/user-attachments/assets/2cb52669-03bf-4851-8b11-86c09deee413">
This is noticeable with games that are using WASD for controls. It only seems to affect the web exported version of the project; macOS desktop does not seem to have the issue.
`gui_get_focus_owner()` reports that there is a null focus when this is happening.
Our current workaround is to make the scene root "`grab_focus()`" – this seems to trick the system into believing text input is no longer happening.
Thank you for your time!
### Steps to reproduce
1. Click on a `LineEdit` control and type anything. You do not have to hold down an accent key.
2. Hide the `LineEdit` control, e.g. by clicking the hide/show button in the example project.
3. Hold down A, E, etc and watch the menu appear even though you are not editing text.
Web version can be tested here: https://clinquant-biscotti-90bb69.netlify.app/
### Minimal reproduction project (MRP)
Web version can be tested here: https://clinquant-biscotti-90bb69.netlify.app/
[text_edit_web_bug_repro.zip](https://github.com/user-attachments/files/17080923/text_edit_web_bug_repro.zip)
| bug,platform:macos,topic:porting,topic:input | low | Critical |
2,539,717,613 | deno | `nuxt preview` fails in latest canary | Repro:
```
❯ deno run -A npm:nuxi init
# just press enter at each prompt
❯ cd nuxt-app
❯ deno task build
❯ deno task preview
ERROR The "nodePath" option must be a string or a file URL: /home/.deno/bin/deno. 2:34:33 PM
at safeNormalizeFileUrl (node_modules/nuxi/dist/chunks/index3.mjs:36:9)
at handleNodeOption (node_modules/nuxi/dist/chunks/index3.mjs:2473:29)
at normalizeOptions (node_modules/nuxi/dist/chunks/index3.mjs:2630:64)
at handleAsyncArguments (node_modules/nuxi/dist/chunks/index3.mjs:7642:63)
at execaCoreAsync (node_modules/nuxi/dist/chunks/index3.mjs:7617:110)
at callBoundExeca (node_modules/nuxi/dist/chunks/index3.mjs:7844:5)
at boundExeca (node_modules/nuxi/dist/chunks/index3.mjs:7815:44)
at Object.run (node_modules/nuxi/dist/chunks/preview.mjs:132:11)
at eventLoopTick (ext:core/01_core.js:175:7)
at async runCommand$1 (node_modules/nuxi/dist/shared/nuxi.b8b195e1.mjs:1648:16)
ERROR The "nodePath" option must be a string or a file URL: /home/.deno/bin/deno
```
---
I looked into this. The issue is that the named export of `execPath` from `node:process` is not actually a string, but an object that tries to act like a string: https://github.com/denoland/deno/blob/55c22ee1bd8e5b108b8b13517150c3cfadf4d7f9/ext/node/polyfills/process.ts#L344-L357 | bug,node compat | low | Critical |
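As an illustration of why that check fails (using a `String` wrapper object as a stand-in for the polyfill's string-like object, not Deno's actual implementation):

```javascript
// An object that stringifies like a path but is not a primitive string.
const execPath = new String("/home/.deno/bin/deno"); // stand-in, not Deno's real object

console.log(`${execPath}`);   // "/home/.deno/bin/deno"
console.log(typeof execPath); // "object", so a strict `typeof x === "string"` check rejects it
```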
2,539,723,567 | rust | Compiler freezes on trait with generic associated type | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
trait What {
type This<'a>: 'a
where
for<'b> Self::This<'b>: 'a + 'b;
}
impl<T> What for T {
type This<'a> = T where T: 'a;
}
```
I expected to see this happen: The compiler returning with a success (probably not) or a compilation error
Instead, this happened: Compiler freezes, cpu usage shows one core of 100% usage (infinite loop?)
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
I tried some versions between GATs stabilizing and current nightly, and all had the same behavior.
This could be a duplicate of existing similar issues, but I am not sure if that is the case. | A-lifetimes,A-trait-system,A-associated-items,T-compiler,C-bug,I-hang,fixed-by-next-solver,A-GATs | low | Critical |
2,539,730,089 | deno | Dynamic Uncached Import (Feature Request) | Starting from this issue here https://github.com/denoland/deno/issues/25742
I understood that there is a need for a `dynamicUncachedImport` util.
After long discussions in that issue, it seems this should be a feature request and not an issue, since it works the same on all 3 platforms: Node, Deno, Bun.
But it would be beneficial for Deno to be the first to have it.
### What `dynamicUncachedImport` would do
It would import the given module in an uncached manner. ES can already do something similar if you add a query string (`?...`) at the end of the module specifier, but that only uncaches the first level of the import, not the nested imports inside it, which may lead to bugs; it is only a partial uncache solution, since it uncaches the first-level import but not its children's imports.
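The query-string workaround mentioned above can be sketched as a small helper (the names are illustrative); note it only busts the cache for the directly imported module:

```javascript
// Appends a unique query string so the runtime treats the specifier as new.
// Only the top-level module is re-evaluated; its own imports stay cached.
function withCacheBuster(specifier) {
  const sep = specifier.includes("?") ? "&" : "?";
  return `${specifier}${sep}v=${Date.now()}`;
}

// usage (path is hypothetical):
// const plugin = await import(withCacheBuster("./plugins/theme.ts"));
```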
### Where this may be used
Having such a util, we could implement HMR in Deno without reloading the running process. I know Deno has `--watch-hmr`, but most of the time it just reloads the process, and it is not programmatic: you can't really control it.
Having it, we could build tools that allow installing plugins at runtime, WordPress-like tools that allow installing themes at runtime, or doing server-side rendering at runtime with changed JSX files.
Basically what the PHP world can do, and that's why 70% of the web still runs on it: tools like WordPress are famous for these features, and you can change practically everything, even code, at runtime.
____
Actually the import should still be cached, but once a new dynamic import happens with the exact same path/name, it should get the new version, not the one from the cache. | suggestion | low | Critical |
2,539,742,106 | terminal | [Terminal Chat] Copying a suggestion with multiple lines is pasted improperly | Let's say I get a suggestion like this:

That results in this error when the copy button is pressed:

Looks like we need to change how it's copied over to the terminal. | Issue-Bug,Product-Terminal,Needs-Tag-Fix,Area-Chat | low | Critical |
2,539,746,109 | terminal | [Terminal Chat] No way to copy text from chat history | The chat bubbles in the chat are not selectable! Feels _weird_ imo.
I think this is a relatively easy fix? This property on the text blocks should help: https://learn.microsoft.com/en-us/uwp/api/windows.ui.xaml.controls.textblock.istextselectionenabled?view=winrt-26100#windows-ui-xaml-controls-textblock-istextselectionenabled | Issue-Bug,Product-Terminal,Needs-Tag-Fix,Area-Chat | low | Minor |
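For reference, a sketch of where that property could be applied (hypothetical markup, abbreviated from whatever the real chat-bubble template looks like):

```xml
<!-- hypothetical chat-bubble text block; IsTextSelectionEnabled defaults to False -->
<TextBlock Text="{x:Bind MessageText}" IsTextSelectionEnabled="True" TextWrapping="Wrap" />
```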
2,539,765,874 | flutter | Incompatibility with `FlutterAppDelegate` and Apple's Silent Push (`content-available:true`) | ### Steps to reproduce
1. Create a new Flutter app via `flutter create` that by default creates a Flutter iOS sample app that subclasses `FlutterAppDelegate`
2. Follow instructions to integrate Push notifications into your app, including the [ability to push background updates](https://developer.apple.com/documentation/usernotifications/pushing-background-updates-to-your-app) for silent push.
- In my attached sample app on Github, I've integrated with the push provider [Braze](https://www.braze.com/) and followed [these instructions](https://braze-inc.github.io/braze-swift-sdk/tutorials/braze/b1-standard-push-notifications).
3. Once fully integrated, run the iOS app _on a physical device_ and accept the Push authorization prompt
4. To verify that push is correctly integrated, send a visible push notification and verify it is received in either the background or foreground. Based on the state, it may trigger a different Apple API
- This API will trigger while in the foreground: [`userNotification(_:willPresent:withCompletionHandler:)`](https://developer.apple.com/documentation/usernotifications/unusernotificationcenterdelegate/usernotificationcenter(_:willpresent:withcompletionhandler:))
- In the background, no API will trigger unless you have `content-available:true` or if you interact with the notification
- If not using a Push provider, it is possible to [send a Push notification via command line](https://developer.apple.com/documentation/usernotifications/sending-push-notifications-using-command-line-tools) but it requires some setup
5. Now, prepare to test sending a silent push with `content-available:true`. Add a breakpoint or print statement in `application(_:didReceiveRemoteNotification:fetchCompletionHandler)` and `userNotification(_:willPresent:withCompletionHandler:)`.
- This silent push should [wake your app and trigger `application(_:didReceiveRemoteNotification:fetchCompletionHandler)`](https://developer.apple.com/documentation/usernotifications/pushing-background-updates-to-your-app#Receive-background-notifications) regardless if it is in the background or foreground. (There are some nuances [where delivery isn't guaranteed](https://developer.apple.com/documentation/usernotifications/pushing-background-updates-to-your-app#:~:text=update%20its%20content.-,Important,-The%20system%20treats), but we aren't concerned about it here)
6. In either the background or foreground state, send a silent push to your app.
7. None of your breakpoints or print statements will be triggered. The silent push does not wake your app nor is the notification received by the Flutter iOS app.
<details>
<summary>Sample Push notification payloads used</summary>
<br>
```
// Visible payload
{
"ab" : {
"att" : {
"aof" : 0,
"type" : null,
"url" : ""
},
"c" : "dGVzdF90ZXN0X2RpPTY2ZWRlNTlmMjQ0MzU4ZjY3Zjk5OTYwMTFjZjFhODlm"
},
"aps" : {
"alert" : {
"body" : "Text, no image",
"title" : "Simple content"
},
"interruption-level" : "active",
"mutable-content" : 1,
"sound" : ""
}
}
// Silent Push payload
{
"ab" : {
"c" : "dGVzdF90ZXN0X2RpPTY2ZWRlN2RlZDQ1YzY3NmNjYjNlODM0NWE2ZmQ3ZDJm"
},
"anythingElse" : "hi",
"aps" : {
"content-available" : 1,
"interruption-level" : "active"
},
"isVisible" : "false",
"shouldBeSilentPush" : "Yes"
}
```
</details>
### Additional context:
In the Flutter framework source code, there is some complex custom logic being done in [`FlutterAppDelegate.mm`](https://api.flutter.dev/ios-embedder/_flutter_app_delegate_8mm_source.html) and [`FlutterPluginAppLifeCycleDelegate.mm`](https://api.flutter.dev/ios-embedder/_flutter_plugin_app_life_cycle_delegate_8mm_source.html) that handles processing push notifications. However, this is outside of Apple's typical practices and it is possible that it interferes downstream with standard iOS push integrations.
### Expected results
In a Flutter iOS app which subclasses `FlutterAppDelegate` (by default) and successfully integrates push notifications, receiving a silent notification (with `content-available:true` and no visible content) wakes the Flutter app and triggers the expected Apple APIs, as noted in [Apple's official docs](https://developer.apple.com/documentation/usernotifications/pushing-background-updates-to-your-app#Receive-background-notifications):
- [`application(_:didReceiveRemoteNotification:fetchCompletionHandler)`](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1623013-application)
- [`userNotification(_:willPresent:withCompletionHandler:)`](https://developer.apple.com/documentation/usernotifications/unusernotificationcenterdelegate/usernotificationcenter(_:willpresent:withcompletionhandler:)) (if in the foreground)
### Actual results
Upon sending a silent push to a Flutter iOS app which subclasses `FlutterAppDelegate`, the silent push is not processed by the Flutter app and none of the expected Apple APIs are triggered
### Code sample
<details open><summary>Sample project on Github</summary>
https://github.com/hokstuff/FlutterAppDelegate_silentPushSample
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
flutter run
Connected devices:
Braze iPhone 1️⃣2️⃣ (mobile) • 00008101-000C39CA3A40001E • ios • iOS 18.0
22A3354
iPhone 15 (mobile) • 5AD2D6E6-3C49-403B-86D7-BB642AE9B37B • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
macOS (desktop) • macos • darwin-arm64 • macOS 14.5
23F79 darwin-arm64
Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5
23F79 darwin-arm64
Chrome (web) • chrome • web-javascript • Google Chrome
128.0.6613.138
No wireless devices were found.
[1]: Braze iPhone 1️⃣2️⃣ (00008101-000C39CA3A40001E)
[2]: iPhone 15 (5AD2D6E6-3C49-403B-86D7-BB642AE9B37B)
[3]: macOS (macos)
[4]: Mac Designed for iPad (mac-designed-for-ipad)
[5]: Chrome (chrome)
Please choose one (or "q" to quit): 1
Launching lib/main.dart on Braze iPhone 1️⃣2️⃣ in debug mode...
ios/Runner/AppDelegate.swift uses the deprecated @UIApplicationMain attribute, updating.
Automatically signing iOS for device deployment using specified development team in Xcode project:
5GLZKGNWQ3
Running pod install... 1,642ms
Running Xcode build...
└─Compiling, linking and signing... 5.7s
Xcode build done. 25.7s
You may be prompted to give access to control Xcode. Flutter uses Xcode to run your app. If access is not
allowed, you can change this through your Settings > Privacy & Security > Automation.
Installing and launching... 29.3s
Syncing files to device Braze iPhone 1️⃣2️⃣... 34ms
Flutter run key commands.
r Hot reload. 🔥🔥🔥
R Hot restart.
h List all available interactive commands.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
A Dart VM Service on Braze iPhone 1️⃣2️⃣ is available at: http://127.0.0.1:58927/oc87Tn3nhKI=/
The Flutter DevTools debugger and profiler on Braze iPhone 1️⃣2️⃣ is available at:
http://127.0.0.1:9100?uri=http://127.0.0.1:58927/oc87Tn3nhKI=/
Lost connection to device.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.5 23F79 darwin-arm64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc3)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] Android Studio (version 2023.2)
[✓] VS Code (version 1.93.1)
[✓] Connected device (5 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-ios,engine,a: existing-apps,P2,workaround available,team-ios,triaged-ios | low | Critical |
2,539,772,063 | godot | Editor freezes when Possibly High Vram Usage with low Ram | ### Tested versions
- reproducible in Godot 4.3 stable
### System information
Windows 10 Pro x64 Godot 4.3 stable Nvidia GeForce mx110 2gb Vram, Forward+ rendering
### Issue description
Whenever I run my project, which has high graphics demands, the game and Godot work pretty fine together, but sometimes, out of nowhere, when changing a value (especially in the AnimationPlayer), the editor freezes as if an iGPU had failed and recovered successfully (the old Intel notification). However, I use an MX110 and Godot recognises it as well. I'm not sure why, but I believe this is because of a memory leak.
### Steps to reproduce
STR:
- Have a 3 GB MX110-equivalent GPU + 4 GB RAM and run a high-graphics game while RAM is low and VRAM is almost full
### Minimal reproduction project (MRP)
Not needed; just make a Godot project with volumetric clouds, SSIL + SSAO on, at 720p, with around 800k-1mil vertices | topic:rendering,topic:editor,needs testing | low | Critical |
2,539,815,343 | pytorch | [c10d][nccl][cuda] Regression (unspecific cuda launch error) with test_c10d_nncl | ### 🐛 Describe the bug
When running
python test/distributed/test_c10d_nccl.py -k test_nan_assert_float16 on a H100x2 platform,
the current nightly (and likely v2.5.0 RC) is producing the following cuda error:

It did not check the return code, because:

Tested with ghcr.io/pytorch/pytorch-nightly:2.5.0.dev20240818-cuda12.4-cudnn9-devel, the test did not generate errors other than failing the assertion check.
Bisected to https://github.com/pytorch/pytorch/pull/134300 (cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @atalman @malfet )
i.e. this commit https://github.com/pytorch/pytorch/commit/3645634f3c3930b8fcb517e3b4072db5647db2fb
To reproduce on a 2xGPU platform:
docker pull ghcr.io/pytorch/pytorch-nightly:2.6.0.dev20240918-cuda12.4-cudnn9-devel
clone pytorch and checkout to the above commit (364563)
run:
python test/distributed/test_c10d_nccl.py -k test_nan_assert_float16
### Versions
Bisected to https://github.com/pytorch/pytorch/pull/134300 (cc @kwen2501 @atalman @malfet )
i.e. this commit https://github.com/pytorch/pytorch/commit/3645634f3c3930b8fcb517e3b4072db5647db2fb
cc @eqy @Aidyn-A @ptrblck | oncall: distributed,module: nccl | low | Critical |
2,539,815,504 | go | proposal: cmd/go: allow disabling build cache trimming | ### Proposal Details
We have our own docker image containing our Go toolchain. As an optimization, we also pre-compile the standard library (`go build std`) so that it is in the gocache.
Evidently, the various Go tools will trim the cache upon exit. In particular, they will delete any files in the cache that have not been modified in the last five days. While this may be reasonable for a cache on someone's host machine, it does not make sense for a short-lived container using a cache from its image, since the mtimes will all reflect when the image was built. At best, trimming the cache is pointless because the container will be destroyed shortly, and at worst it is actively counterproductive, as those cached files may be needed by subsequent commands.
Please add an environment variable to disable trimming the gocache upon command exit. The specific name is immaterial, but something like `GOCACHE_AUTO_TRIM=0` would be fine. | Proposal | low | Major |
2,539,831,932 | Web-Dev-For-Beginners | by tweaking mouse interaction bindings, which brings me to this 8.5 year old feature request that is massively upvoted. | > Recently I've run into an issue where ctrl+click on a typescript type goes to the wrong definition. Perhaps I could debug or fix it by tweaking mouse interaction bindings, which brings me to this 8.5 year old feature request that is massively upvoted.
>
> I mean, they mention that they prioritise based on issue upvotes. This is the second most upvoted issue, should it then not be prioritised really highly?
>
> Is there a plugin for mouse interaction bindings? Perhaps a status on this feature?
So far the only thing that works is to use AHK macros.
_Originally posted by @brunolm in https://github.com/microsoft/vscode/issues/3130#issuecomment-2341317764_
| no-issue-activity | low | Critical |
2,539,838,810 | PowerToys | Workspaces does not detect Godot | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Same error as here, even with version 0.84.1
https://github.com/microsoft/PowerToys/issues/34558
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Critical |
2,539,839,261 | react | [Compiler Bug]: function parameter inside component overrides outer parameter incorrectly | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhHCA7MAXABASwwBMEAPAfQAZcBeXGBARynwYAoAdEAOgHpIAtgl6ESpLgEoA3BwyyAZlAxxs+TLgBiMfAmIAZfDjbACxMrgC+E3MFm5c6LHnnbdRAAoBDDAgA2YWlwXHWIwbgFPAAc2NmC3ABpTMWsaAD4bO3tcXl5cMARHIlxPGABzXAh5XAjIioA3BBhtElxsAAsECqhsRqSyKmL5HpgHCAFI-F8ETPsGbFgMXAAeUX7KbgBlMYQvH19UmayuOOJkLlwAaiDXYku+0hlFrKWRMwp1raFdvwOnq0fZgh5jBFksiPg6r8sjYTh5vH4wBZMi9wZDZLIkRgQBYgA
### Repro steps
Using webpack with typescript.
`import { SomePanel } from "./some/index";` may be compiled into something like `const index_0 = __importDefault(__webpack_require__("./some/index.js"));`
The React compiler doesn't recognize it, and compiles the `index` arg in `array.map((value, index) => {})` into `index_0`, which falsely overrides the imported module binding so it becomes a number.
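A hand-written stand-in for that collision (illustrative, not actual compiler or webpack output): once the map parameter is renamed to `index_0`, it shadows the outer module binding inside the callback.

```javascript
// webpack-style module binding (stand-in object, not real webpack output)
const index_0 = { SomePanel: () => "panel" };

function render(items) {
  // the renamed map parameter shadows the outer `index_0` module binding,
  // so inside the callback the "module" is a number
  return items.map((value, index_0) => typeof index_0);
}
```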
### How often does this bug happen?
Every time
### What version of React are you using?
19.0.0-rc-4c58fce7-20240904 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,539,856,227 | tauri | [feat] add `window.isAlwaysOnTop` method | ### Describe the problem
Since there is a `window.setAlwaysOnTop` function, it is natural to look for `isAlwaysOnTop`, but I couldn't find it.
Is it a mistake?
### Describe the solution you'd like
Implement `window.isAlwaysOnTop`, which would return a `Promise<boolean>` like other similar functions.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,539,867,354 | tensorflow | tensorflow pjrt plugin | I've developed a pjrt plugin using https://openxla.org/xla/pjrt_integration as a resource, which is working with jax.
How do I load this plugin and run tensorflow models? Is there some kind of registration step, similar to jax? If I load the plugin via `jax._src.xla_bridge` and call `tf.config.list_physical_devices()`, I don't see my device, so I imagine it's something else. | stat:awaiting tensorflower,type:feature,comp:model,comp:xla | low | Minor |
2,539,908,368 | PowerToys | A Unicode character | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Looking for the character "\~" in the keyboard shortcut mapping, I realized that it does not exist. On the QWERTY keyboard (Chile), the AltGr+4 combination should type that character, but it doesn't work, so I decided to install PowerToys to reassign the shortcut; however, "\~" does not appear as available to assign to a shortcut.




### ✔️ Expected Behavior
I would like you to add the character so I can use it instead of typing it with Alt+126; it's annoying not having a quick way to do it.
### ❌ Actual Behavior
The character is missing and I have not been able to include it in the shortcut.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,539,909,825 | godot | Localization remaps don't work for themes in exported project | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable - macOS 14.5.0 - Vulkan (Forward+) - integrated Apple M2 Pro - Apple M2 Pro (10 Threads)
### Issue description
I attempted to use localization remaps to switch between different themes(.tres files) to suit various languages and apply different font styles. Everything worked fine during testing in the editor, but after exporting, the remap didn’t function as expected — the label’s font and size didn’t change.
Maybe related to this one: [Changing the locale breaks remapped fonts](https://github.com/godotengine/godot/issues/80130)
### Steps to reproduce
1. Run the root.tscn.
2. Press 2 buttons to change the locale and check the font on the label.
3. Export project.
4. Run the exported project and follow the step 2 again.
### Minimal reproduction project (MRP)
[Archive.zip](https://github.com/user-attachments/files/17082499/Archive.zip)
| bug,needs testing,topic:export | low | Major |
2,539,950,830 | go | cmd/go: invoking go run from go test can corrupt build cache | ### Go version
go version go1.22.7 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/tmp/.gocache'
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.22.7'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/Users/rittneje/go-test-bug/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1468831127=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I have the following files:
go.mod
```
module gotestbug
go 1.22
```
pkg1/foo_test.go
```go
package pkg1
import (
"bytes"
"os/exec"
"path/filepath"
"testing"
)
func TestFoo(t *testing.T) {
cmd := exec.Command("go", "run", filepath.Join("testdata", "main.go"))
output := new(bytes.Buffer)
cmd.Stdout = output
cmd.Stderr = output
t.Logf("+ %s", cmd)
if err := cmd.Run(); err != nil {
t.Fatal(err, output)
}
}
```
pkg1/testdata/main.go
```go
package main
func main() {
}
```
pkg2/bar_test.go
```go
package pkg2
import (
"testing"
)
func TestBar(t *testing.T) {
t.Log("hello")
}
```
I then ran the following commands as **root** to prime the build cache. (I intentionally set the mtimes and trim.txt file to the past in order to reproduce the bug.)
```
go clean -cache
go build std
chmod -R a+rwX "${GOCACHE}"
find "${GOCACHE}" -type f -exec touch '{}' -d '2024-07-01 00:00:00' \;
printf '1719792000' > "${GOCACHE}/trim.txt"
```
Finally I ran `go test` as a **non-root** user. (I passed `-p=1` to force it to run one package at a time in order to make the bug happen deterministically.)
```shell
go test -v -count=1 -p=1 ./...
```
### What did you see happen?
As described in #69565, since the build cache was crafted to look old, `go run` decided to trim it. This somehow causes `go test` to break.
```
=== RUN TestFoo
foo_test.go:16: + /usr/local/go/bin/go run testdata/main.go
--- PASS: TestFoo (0.15s)
PASS
ok gotestbug/pkg1 0.148s
# gotestbug/pkg2 [gotestbug/pkg2.test]
pkg2/bar_test.go:4:2: could not import testing (open /tmp/.gocache/26/26737db8c74b26401dc8828801cc3793a888d7c29c40b7500b7e9e5f96deec19-d: no such file or directory)
FAIL gotestbug/pkg2 [build failed]
FAIL
```
It should be noted that this issue does not happen if I run `go test` as root.
### What did you expect to see?
The test should pass without any compilation issues. | NeedsInvestigation | low | Critical |
2,539,987,216 | yt-dlp | [YouTube] API Can't download youtube video to buffer when download_ranges option is specified | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
**Issue:**
I am using the API to download a youtube video to a buffer in Python. I implemented the code described in this comment: https://github.com/yt-dlp/yt-dlp/issues/3298#issuecomment-1181754989 and it does work in downloading the requested video to the buffer.
However, if the `download_ranges` option is included to download a specific section, the video is not downloaded to the buffer but is instead (seemingly) written to stdout.
**To Reproduce:**
Run the below code snippet
```python
from yt_dlp import YoutubeDL
from yt_dlp.utils import download_range_func
from contextlib import redirect_stdout
from pathlib import Path
import io
youtube_id = "https://youtu.be/GDRyigWvUFg"
ctx = {
'download_ranges': download_range_func([], [[10.0, 20.0]]),
"outtmpl": {'default': "-"},
'logtostderr': True,
'verbose': True,
}
# Redirect stdout to the buffer and download video
buffer = io.BytesIO()
with redirect_stdout(buffer), YoutubeDL(ctx) as foo:
foo.download([youtube_id])
# Buffer should have video contents, write contents of buffer to file for example
Path(f"output.mp4").write_bytes(buffer.getvalue())
```
**Expected behaviour:**
File output.mp4 should contain the downloaded video (10s to 20s)
**Actual behaviour:**
File output.mp4 is empty. Unformatted characters are written to stdout (this seems to be the downloaded video content). Refer to the verbose output included below.
**Note: I trimmed the full output to remove most of the outputted unformatted characters as it made the log too long to fit within the comment. I created a shared file that contains the full output which can be viewed on OneDrive: https://1drv.ms/t/s!Amo9F01EvCh5iYsHgMzObR1dYARHWg?e=7OYTaV**
The log still contains all other output from YoutubeDL. If this is against issue policy do advise, I am not sure how else to include the completed log
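One likely mechanism (an assumption about the cause, not confirmed from the yt-dlp source): `contextlib.redirect_stdout` only swaps Python's `sys.stdout` object; it does not change OS-level file descriptor 1, so a child process such as the ffmpeg downloader (which `download_ranges` triggers, per the verbose log) still writes to the real stdout. A minimal demonstration:

```python
import contextlib
import io
import subprocess
import sys

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("captured")  # goes through sys.stdout, so it lands in buf
    # a subprocess inherits the real fd 1, bypassing redirect_stdout
    subprocess.run([sys.executable, "-c", "print('not captured')"])

assert buf.getvalue() == "captured\n"
```

Capturing the child's output would need an fd-level redirect, e.g. passing `stdout=` to the subprocess or using `os.dup2`.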
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip) API
[debug] params: {'download_ranges': yt_dlp.utils.download_range_func([], [[10, 20]]), 'outtmpl': {'default': '-'}, 'logtostderr': True, 'verbose': True, 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}
[debug] Python 3.12.5 (CPython AMD64 64bit) - Windows-11-10.0.22631-SP0 (OpenSSL 3.0.13 30 Jan 2024)
[debug] exe versions: ffmpeg 2023-12-14-git-5256b2fbe6-full_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[youtube] Extracting URL: https://youtu.be/GDRyigWvUFg
[youtube] GDRyigWvUFg: Downloading webpage
[youtube] GDRyigWvUFg: Downloading ios player API JSON
[youtube] GDRyigWvUFg: Downloading web creator player API JSON
[debug] Loading youtube-nsig.a9d81eca from cache
[debug] [youtube] Decrypted nsig THszRR11GeoIlwf05 => j-wELkA1X4F79g
[youtube] GDRyigWvUFg: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: best/bestvideo+bestaudio
[info] GDRyigWvUFg: Downloading 1 format(s): 18
[info] GDRyigWvUFg: Downloading 1 time ranges: 10.0-20.0
[debug] Invoking ffmpeg downloader on "https://rr2---sn-cxaaj5o5q5-tt1r.googlevideo.com/videoplayback?expire=1726916995&ei=I1XuZufgJfGSlu8Posex6A0&ip=70.54.96.253&id=o-AGtl-t_CNVjHxUyaKDK9T4WrVLpDOjhRDnaaMuC07ciI&itag=18&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&mh=CN&mm=31%2C26&mn=sn-cxaaj5o5q5-tt1r%2Csn-vgqsrnsd&ms=au%2Conr&mv=m&mvi=2&pl=24&initcwndbps=1926250&bui=AXLXGFSxPR6-NUy_yO-d18_euWCra8256sN3p77PZ1VVlKAYPK0sq74KSSV8JIfPOc3q9kLMToDPBEkO&spc=54MbxZcbqcGdv_0rWnUQqJ5RO1wn36ldilqLvgWEww7nLhISJfbb4Y0&vprv=1&svpuc=1&mime=video%2Fmp4&ns=KLWvIV8waBTNOnPQ8JdDqtsQ&rqh=1&gir=yes&clen=2202998&ratebypass=yes&dur=48.274&lmt=1723414544111661&mt=1726894855&fvip=1&fexp=51286683&c=WEB_CREATOR&sefc=1&txp=5538434&n=j-wELkA1X4F79g&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cratebypass%2Cdur%2Clmt&sig=AJfQdSswRQIgPGBkxoLeSFj9yjt3qPdCjCo2tWydQsHFAOi3GUjwAUoCIQC28MrXBSRd51LZqlurEPD3OmhbEzOWpyTWjbRBfQShog%3D%3D&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABPmVW0wRAIgHeDfDHo4Bq8WJj2RItQTO0D2aQf6WC2NEb_H0XQYgvECIAFEWHnV4I9krzuaS0yhgc88uqO0Mtnynzgdm7tTEG2Z"
[download] Destination: -
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
" -ss 10 -t 10 -i "https://rr2---sn-cxaaj5o5q5-tt1r.googlevideo.com/videoplayback?expire=1726916995&ei=I1XuZufgJfGSlu8Posex6A0&ip=70.54.96.253&id=o-AGtl-t_CNVjHxUyaKDK9T4WrVLpDOjhRDnaaMuC07ciI&itag=18&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&mh=CN&mm=31%2C26&mn=sn-cxaaj5o5q5-tt1r%2Csn-vgqsrnsd&ms=au%2Conr&mv=m&mvi=2&pl=24&initcwndbps=1926250&bui=AXLXGFSxPR6-NUy_yO-d18_euWCra8256sN3p77PZ1VVlKAYPK0sq74KSSV8JIfPOc3q9kLMToDPBEkO&spc=54MbxZcbqcGdv_0rWnUQqJ5RO1wn36ldilqLvgWEww7nLhISJfbb4Y0&vprv=1&svpuc=1&mime=video%2Fmp4&ns=KLWvIV8waBTNOnPQ8JdDqtsQ&rqh=1&gir=yes&clen=2202998&ratebypass=yes&dur=48.274&lmt=1723414544111661&mt=1726894855&fvip=1&fexp=51286683&c=WEB_CREATOR&sefc=1&txp=5538434&n=j-wELkA1X4F79g&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cratebypass%2Cdur%2Clmt&sig=AJfQdSswRQIgPGBkxoLeSFj9yjt3qPdCjCo2tWydQsHFAOi3GUjwAUoCIQC28MrXBSRd51LZqlurEPD3OmhbEzOWpyTWjbRBfQShog%3D%3D&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABPmVW0wRAIgHeDfDHo4Bq8WJj2RItQTO0D2aQf6WC2NEb_H0XQYgvECIAFEWHnV4I9krzuaS0yhgc88uqO0Mtnynzgdm7tTEG2Z" -c copy -f mpegts -
ffmpeg version 2023-12-14-git-5256b2fbe6-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 33.100 / 58. 33.100
libavcodec 60. 35.100 / 60. 35.100
libavformat 60. 18.100 / 60. 18.100
libavdevice 60. 4.100 / 60. 4.100
libavfilter 9. 14.100 / 9. 14.100
libswscale 7. 6.100 / 7. 6.100
libswresample 4. 13.100 / 4. 13.100
libpostproc 57. 4.100 / 57. 4.100
[tcp @ 000001b3dccd7b40] Starting connection attempt to 184.150.168.173 port 443
[tcp @ 000001b3dccd7b40] Successfully connected to 184.150.168.173 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd58c0] Reconfiguring buffers to size 510422
[h264 @ 000001b3dd532040] Reinit context to 640x368, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://rr2---sn-cxaaj5o5q5-tt1r.googlevideo.com/videoplayback?expire=1726916995&ei=I1XuZufgJfGSlu8Posex6A0&ip=70.54.96.253&id=o-AGtl-t_CNVjHxUyaKDK9T4WrVLpDOjhRDnaaMuC07ciI&itag=18&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&mh=CN&mm=31%2C26&mn=sn-cxaaj5o5q5-tt1r%2Csn-vgqsrnsd&ms=au%2Conr&mv=m&mvi=2&pl=24&initcwndbps=1926250&bui=AXLXGFSxPR6-NUy_yO-d18_euWCra8256sN3p77PZ1VVlKAYPK0sq74KSSV8JIfPOc3q9kLMToDPBEkO&spc=54MbxZcbqcGdv_0rWnUQqJ5RO1wn36ldilqLvgWEww7nLhISJfbb4Y0&vprv=1&svpuc=1&mime=video%2Fmp4&ns=KLWvIV8waBTNOnPQ8JdDqtsQ&rqh=1&gir=yes&clen=2202998&ratebypass=yes&dur=48.274&lmt=1723414544111661&mt=1726894855&fvip=1&fexp=51286683&c=WEB_CREATOR&sefc=1&txp=5538434&n=j-wELkA1X4F79g&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cratebypass%2Cdur%2Clmt&sig=AJfQdSswRQIgPGBkxoLeSFj9yjt3qPdCjCo2tWydQsHFAOi3GUjwAUoCIQC28MrXBSRd51LZqlurEPD3OmhbEzOWpyTWjbRBfQShog%3D%3D&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABPmVW0wRAIgHeDfDHo4Bq8WJj2RItQTO0D2aQf6WC2NEb_H0XQYgvECIAFEWHnV4I9krzuaS0yhgc88uqO0Mtnynzgdm7tTEG2Z':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
encoder : Google
Duration: 00:00:48.27, start: 0.000000, bitrate: 365 kb/s
Stream #0:0[0x1](und): Video: h264 (Main), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, progressive, left), 640x360 (640x368) [SAR 1:1 DAR 16:9], 266 kb/s, 23.98 fps, 23.98 tbr, 24k tbn (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 95 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
[out#0/mpegts @ 000001b3dd4b1280] No explicit maps, mapping streams automatically...
[vost#0:0/copy @ 000001b3dd678d40] Created video stream from input stream 0:0
[aost#0:1/copy @ 000001b3dd80a040] Created audio stream from input stream 0:1
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
[mpegts @ 000001b3dccd9f80] service 1 using PCR in pid=256, pcr_period=83ms
[mpegts @ 000001b3dccd9f80] muxrate VBR, sdt every 500 ms, pat/pmt every 100 ms
Output #0, mpegts, to 'pipe:':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
encoder : Lavf60.18.100
Stream #0:0(und): Video: h264 (Main), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, progressive, left), 640x360 (0x0) [SAR 1:1 DAR 16:9], q=2-31, 266 kb/s, 23.98 fps, 23.98 tbr, 90k tbn (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 95 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
[out#0/mpegts @ 000001b3dd4b1280] Starting thread...
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] Starting thread...
Press [q] to stop, [?] for help
[tcp @ 000001b3dccdd680] Starting connection attempt to 184.150.168.173 port 443
[tcp @ 000001b3dccdd680] Successfully connected to 184.150.168.173 port 443
Automatically inserted bitstream filter 'h264_mp4toannexb'; args=''
[binary MPEG-TS output to stdout omitted]
NOTE: Character output was trimmed so that log could fit within max comment size, refer to comment for full output
[vist#0:0/h264 @ 000001b3dccbc280] All consumers of this stream are done
[aist#0:1/aac @ 000001b3dd41b100] All consumers of this stream are done
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] All consumers are done
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] Terminating thread with return code 0 (success)
[out#0/mpegts @ 000001b3dd4b1280] All streams finished
[out#0/mpegts @ 000001b3dd4b1280] Terminating thread with return code 0 (success)
[AVIOContext @ 000001b3dd4bccc0] Statistics: 700676 bytes written, 0 seeks, 352 writeouts
[out#0/mpegts @ 000001b3dd4b1280] Output file #0 (pipe:):
[out#0/mpegts @ 000001b3dd4b1280] Output stream #0:0 (video): 297 packets muxed (450928 bytes);
[out#0/mpegts @ 000001b3dd4b1280] Output stream #0:1 (audio): 534 packets muxed (148363 bytes);
[out#0/mpegts @ 000001b3dd4b1280] Total: 831 packets (599291 bytes) muxed
[out#0/mpegts @ 000001b3dd4b1280] video:440kB audio:145kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 16.917491%
size= 684kB time=N/A bitrate=N/A speed=N/A
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] Input file #0 (https://rr2---sn-cxaaj5o5q5-tt1r.googlevideo.com/videoplayback?expire=1726916995&ei=I1XuZufgJfGSlu8Posex6A0&ip=70.54.96.253&id=o-AGtl-t_CNVjHxUyaKDK9T4WrVLpDOjhRDnaaMuC07ciI&itag=18&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&mh=CN&mm=31%2C26&mn=sn-cxaaj5o5q5-tt1r%2Csn-vgqsrnsd&ms=au%2Conr&mv=m&mvi=2&pl=24&initcwndbps=1926250&bui=AXLXGFSxPR6-NUy_yO-d18_euWCra8256sN3p77PZ1VVlKAYPK0sq74KSSV8JIfPOc3q9kLMToDPBEkO&spc=54MbxZcbqcGdv_0rWnUQqJ5RO1wn36ldilqLvgWEww7nLhISJfbb4Y0&vprv=1&svpuc=1&mime=video%2Fmp4&ns=KLWvIV8waBTNOnPQ8JdDqtsQ&rqh=1&gir=yes&clen=2202998&ratebypass=yes&dur=48.274&lmt=1723414544111661&mt=1726894855&fvip=1&fexp=51286683&c=WEB_CREATOR&sefc=1&txp=5538434&n=j-wELkA1X4F79g&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cratebypass%2Cdur%2Clmt&sig=AJfQdSswRQIgPGBkxoLeSFj9yjt3qPdCjCo2tWydQsHFAOi3GUjwAUoCIQC28MrXBSRd51LZqlurEPD3OmhbEzOWpyTWjbRBfQShog%3D%3D&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABPmVW0wRAIgHeDfDHo4Bq8WJj2RItQTO0D2aQf6WC2NEb_H0XQYgvECIAFEWHnV4I9krzuaS0yhgc88uqO0Mtnynzgdm7tTEG2Z):
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] Input stream #0:0 (video): 298 packets read (454143 bytes);
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] Input stream #0:1 (audio): 535 packets read (148632 bytes);
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001b3dccd5480] Total: 833 packets (602775 bytes) demuxed
[AVIOContext @ 000001b3dd41bd40] Statistics: 655360 bytes read, 1 seeks
[download] 100% in 00:00:00
```
| bug,triage,site:youtube,core:downloader | low | Critical |
2,539,991,122 | godot | Area2D's don't receive input from Subviewports | ### Tested versions
Tested in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated AMD Radeon RX 6800 XT (Advanced Micro Devices, Inc.; 32.0.11037.4004) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description
Area2D nodes aren't getting any input when they're inside a SubViewport. "Handle input locally", "pickable", and "monitorable" are all correctly set to on. Pushing all input events into the scene yields nothing. First discovered in my current game, where I'm rendering 2D scenes via subviewports in 3D; I decided to test in purely 2D and hit the same issue.
Made a test project to show side by side, area2d receives no input while a control node works fine.
### Steps to reproduce
Have a 2D scene with an Area2D inside it set to handle input (e.g., printing out all inputs on the input_event signal). Put this 2D scene inside a SubViewport and those inputs aren't received, i.e., nothing will print out.
The purely 2d scene can be run alone and can be shown to work fine.


### Minimal reproduction project (MRP)
[guitest.zip](https://github.com/user-attachments/files/17082978/guitest.zip)
| enhancement,documentation,topic:input,topic:gui | low | Major |
2,539,994,410 | Python | Genetic Algorithm for Function Optimization | ### Feature description
This feature would introduce a new genetic algorithm-based approach to optimize mathematical functions. It would allow users to find the minimum or maximum values of continuous functions using genetic algorithms.
#### Key Components:
- **Target Function**: The user can define their own function to optimize, for example, \( f(x, y) = x^2 + y^2 \), or more complex functions.
- **Population Initialization**: Randomly generate initial solutions (chromosomes) within a defined search space.
- **Fitness Function**: Evaluate each chromosome's fitness based on how close the function value is to the desired optimum (minimization or maximization).
- **Selection**: Use selection methods like tournament selection or roulette wheel to pick parents for crossover based on their fitness scores.
- **Crossover**: Implement crossover strategies such as one-point, two-point, or uniform crossover to combine parent chromosomes into offspring.
- **Mutation**: Introduce random mutations to offspring to maintain diversity in the population and avoid local minima.
- **Termination Condition**: Allow the algorithm to stop after a set number of generations or when the improvement between generations is minimal.
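As a concrete illustration, the components above could fit together as follows. This is a minimal sketch with made-up hyperparameters (population size, mutation rate, etc.), not a proposed final API:

```python
import random

# Minimize f(x, y) = x**2 + y**2 over the search space [-5, 5]^2.
def fitness(chrom):
    x, y = chrom
    return -(x * x + y * y)  # higher fitness = closer to the minimum

def tournament(pop, k=3):
    # Tournament selection: best of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Uniform crossover: each gene is taken from either parent.
    return tuple(g1 if random.random() < 0.5 else g2 for g1, g2 in zip(a, b))

def mutate(chrom, rate=0.1, sigma=0.5):
    # Gaussian mutation applied independently to each gene.
    return tuple(g + random.gauss(0, sigma) if random.random() < rate else g
                 for g in chrom)

def optimize(pop_size=50, generations=100):
    # Population initialization: random points in the search space.
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):  # fixed-generation termination condition
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = optimize()
print("best:", best)
```

A fuller implementation would also support maximization, elitism, and an improvement-based stopping criterion as described above.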
This implementation would be useful for users needing a flexible and easy-to-use method for solving optimization problems in continuous spaces. | enhancement | low | Minor |
2,540,038,491 | opencv | Error compiling 4.10.0 source code with mingw-w64: 'D3D10CalcSubresource' was not declared in this scope | ### System Information
OpenCV => 4.10.0
Operating System / Platform => Windows 10 64bit
Compiler => mingw-w64 8.1.0-posix-seh-rt_v6-rev0
### Detailed description
When I compiled version 4.10.0 with mingw-w64, there was an error

### Steps to reproduce
In my CMake GUI there is no D3D10 option, only D3D11, so I don't know what to do with this error
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,platform: win32 | low | Critical |
2,540,072,442 | godot | Members of custom nodes in a scene are not accessible when its script is running in the editor | ### Tested versions
- Reproducible in 4.3.stable, 4.2.stable, 4.1.stable
### System information
Godot v4.x.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti (NVIDIA; 32.0.15.5599) - 13th Gen Intel(R) Core(TM) i5-13600KF (20 Threads)
### Issue description
I was trying to test some custom classes using a scene that runs in the editor. Things seemed to work fine until inheritance got involved. I then proceeded to create an MRP, and found that this breaks even without inheritance.
When working with a custom class in a script marked with `@tool`, I get errors as if the custom class's members are missing. Additionally, on one of my projects, I was able to access inherited members/methods in a derived class, but not outside of the derived class. The documentation for inherited members/methods in the derived class is also entirely absent.
None of these issues occur outside the editor. Errors are not generated while running the scene.
I am not sure if these issues are unrelated so I am lumping them into one issue.
I tried deleting the `.godot` folder, which did not solve the issue.
### Steps to reproduce
To get the errors:
- Create a custom class derived from a built-in node.
- Create a new scene.
- Attach a script marked with `@tool` to the scene.
- Add an instance of your custom node to the scene.
- Make the script modify members or call methods from this custom node, while still in the editor.
To see that this doesn't happen outside of the editor:
- Simply make your script modify the custom node outside the editor
- Run the scene.
To see weird issues with documentation and inheritance:
- Create a class that is derived from your custom node.
- Do not redefine anything inherited from the custom node.
- Open the custom documentation for your classes.
- Notice how things defined in the base class are absent from the derived class's documentation.
- In the editor, try accessing inherited things from the derived class, but not from outside the derived class.
- Now try doing so from outside the class. You will get an error.
### Minimal reproduction project (MRP)
[inheritance-bug.zip](https://github.com/user-attachments/files/17083386/inheritance-bug.zip)
| discussion,topic:editor,documentation | low | Critical |
2,540,083,310 | transformers | Enable changing the loss function by making the hard-coded `loss_fct` an attribute of `BertForTokenClassification`. | ### Feature request
In the method `transformers.models.bert.modeling_bert.BertForTokenClassification.forward`, the `loss_fct = CrossEntropyLoss()` is currently hard-coded. To change the loss function (e.g., to set class weights in `CrossEntropyLoss`), one must currently monkey-patch the model. By making `loss_fct` an attribute (e.g., `self.loss_fct`), users can simply replace it and use custom loss functions during training.
### Motivation
The motivation behind this proposal stems from the need to change the loss function for fine-tuning a pre-trained BERT model for token classification, particularly when dealing with imbalanced classes. In my use case, I need to prioritize recall, as most tokens belong to the "other" class. To achieve this, I need to set custom weights in the `CrossEntropyLoss`, like this:
```python
loss_fct = CrossEntropyLoss(weight=torch.tensor([0.1, 1.0, 1.0, 2.0, 2.0], device=self.device))
```
However, since the loss function is hard-coded inside the `forward` method, modifying it currently requires overriding the entire method just to change one line, as shown here:
```python
@patch
def forward(
    self: BertForTokenClassification,
    input_ids: Optional[torch.Tensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    token_type_ids: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.Tensor] = None,
    head_mask: Optional[torch.Tensor] = None,
    inputs_embeds: Optional[torch.Tensor] = None,
    labels: Optional[torch.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], 'TokenClassifierOutput']:
    r"""
    labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
        Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    sequence_output = outputs[0]
    sequence_output = self.dropout(sequence_output)
    logits = self.classifier(sequence_output)
    loss = None
    if labels is not None:
        class_weights = torch.tensor([0.1, 1.0, 1.0, 2.0, 2.0], device=self.device)
        loss_fct = CrossEntropyLoss(weight=class_weights)  # <------------------- only change
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    if not return_dict:
        output = (logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output
    return TokenClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
```
By turning `loss_fct` into an attribute, we could avoid the need for monkey-patching. The change could be as simple as:
```python
class_weights = torch.tensor([0.1, 1.0, 1.0, 2.0, 2.0], device=model.device)
model.loss_fct = CrossEntropyLoss(weight=class_weights)
```
This would leave existing code unchanged but make it easier to swap in a custom loss function when needed.
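To make the shape of the proposal concrete, here is a self-contained sketch of a classification head with `loss_fct` stored as an attribute. The class name and signature are hypothetical simplifications for illustration, not the actual `transformers` source:

```python
import torch
from torch.nn import CrossEntropyLoss

class TokenClassifierHead(torch.nn.Module):
    """Simplified stand-in for the classification part of BertForTokenClassification."""

    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.num_labels = num_labels
        self.classifier = torch.nn.Linear(hidden_size, num_labels)
        # The proposed change: construct the loss once as an attribute
        # instead of hard-coding it inside forward().
        self.loss_fct = CrossEntropyLoss()

    def forward(self, sequence_output, labels=None):
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            # Flatten (batch, seq_len, num_labels) -> (batch * seq_len, num_labels)
            loss = self.loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits

# Users could then swap in a weighted loss without touching forward():
head = TokenClassifierHead(hidden_size=8, num_labels=3)
head.loss_fct = CrossEntropyLoss(weight=torch.tensor([0.1, 1.0, 2.0]))
```

Default behavior is unchanged for anyone who never touches the attribute.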
### Your contribution
I am new to this repository and this would be my first pull request. I would like to ask if these types of changes are welcome, and if it makes sense to proceed with submitting a pull request for this improvement. | Core: Modeling,Usage,Feature request | low | Minor |
2,540,092,175 | vscode | Is it possible to show more recent items in the welcome page? |
Is it possible to show more recent items in the welcome page?
| feature-request,workbench-welcome | low | Minor |
2,540,092,791 | java-design-patterns | Translate to Persian | I would like to propose adding a Persian translation to this repository. There is a growing enthusiasm for design patterns among Persian-speaking developers, and providing the content in Persian would greatly benefit the community. @iluwatar | info: help wanted,type: feature,epic: documentation | low | Major |