| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,475,181,557 | ui | [feat]: Command component is missing Command.Loading for asynchronous results | ### Feature description
Missing Command.Loading from "cmdk".
[Asynchronous results](https://github.com/pacocoursey/cmdk?tab=readme-ov-file#asynchronous-results)
### Affected component/components
Command
### Additional Context
```
const CommandLoading = React.forwardRef<
  React.ElementRef<typeof CommandPrimitive.Loading>,
  React.ComponentPropsWithoutRef<typeof CommandPrimitive.Loading>
>((props, ref) => (
  <CommandPrimitive.Loading
    ref={ref}
    className='py-6 text-center text-sm'
    {...props}
  />
))
CommandLoading.displayName = CommandPrimitive.Loading.displayName
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,475,199,570 | excalidraw | [new feature] connection flow animation | Is there a way to add a connection process animation?
demo:[URL](https://viewer.diagrams.net/?tags=%7B%7D&lightbox=1&highlight=0000ff&edit=_blank&layers=1&nav=1#R%3Cmxfile%3E%3Cdiagram%20name%3D%22%E7%AC%AC%201%20%E9%A1%B5%22%20id%3D%225rhhy-bjW_c8JY8hIr5J%22%3E5ZbRTuswDIafppdIa9MOuISxcxACCWlCXIfVtJnSuspcuvL0pMRdGwaDCaGDdK7WfLET%2B7fdLhCzYvPXyCq%2FwRR0EE3STSAugigKJ8nE%2FnSkdSSJpw5kRqVsNICFeobek2mtUlh7hoSoSVU%2BXGJZwpI8Jo3Bxjd7RO3fWskMdsBiKfUuvVcp5Y6eRMcDvwSV5f3N4fTU7RSyN%2BZM1rlMsRkhMQ%2FEzCCSeyo2M9CdeL0uzu%2FPB7vbwAyU9BWH9upujidFmz1Mi%2BqUVqtWiCPhTnmSuuaEOVhqewUgtYLwEg3lmGEp9Xyg5wbrMoXumoldDTbXiJWFoYUrIGq5urImtCinQvPuo8bmrFRWMoUlMxdHd%2FmH%2BTJaY22WsCfJvm%2BkyYD22EXbqth2BiyATGv9DGgb2JMfh%2BS%2ByrZ27HpmjGxHBhWqktajk287YA14ROIwdifygITTN2U8zN4%2BuAj61SiVAb22xgFtEn7eJn4TNLkiWFTytS6NfTX4BefjwBBs9pd3txzsIBJfh2Mes2aY0rAfvXw0oSzXuwUciXi4Rsn%2FMErRF0cp%2FuYofasS0e%2Fv1u337Z%2B1a%2Fz7RRLJz4lkl8On170mhz8wYv4C%3C%2Fdiagram%3E%3C%2Fmxfile%3E) | enhancement | low | Minor |
2,475,268,014 | PowerToys | quick accent bug for letter "a" | ### Microsoft PowerToys version
0.83.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Quick Accent
### Steps to reproduce
I need to produce this letter (ā): a small letter "a" with a horizontal line on top. But when I type it with Quick Accent, the line ends up hanging over the next letter.

### ✔️ Expected Behavior
The line should be on top of the letter 'a'.
### ❌ Actual Behavior
The line goes onto the next letter, regardless of what that letter is.
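At the Unicode level (my reading, not stated in the report): the expected output is the single precomposed code point U+0101, while a combining macron (U+0304) attaches to whatever base character precedes it, so a mark emitted in the wrong position renders over the neighboring letter. A small sketch:

```python
import unicodedata

precomposed = "\u0101"    # 'ā' as one code point: LATIN SMALL LETTER A WITH MACRON
decomposed = "a\u0304"    # 'a' followed by COMBINING MACRON

# NFC normalization folds base letter + combining mark into the precomposed form
assert unicodedata.normalize("NFC", decomposed) == precomposed

# If the combining mark instead lands after the *next* typed letter, it attaches
# to that letter -- which would look exactly like the reported behavior:
misplaced = "ab\u0304"    # renders as 'a' then 'b' with the line on top
print(unicodedata.name(misplaced[-1]))   # COMBINING MACRON
```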
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,475,276,718 | godot | .rotated() of a Vector2/3 returns a result with floating point error | ### Tested versions
- Reproducible in 4.2 stable
### System information
Godot v4.2.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 32.0.15.5599) - AMD Ryzen 5 5600H with Radeon Graphics (12 Threads)
### Issue description
When rotating a Vector2 / Vector3 whose components are all integers by 90, 180, or 270 degrees, Vector2.rotated() or Vector3.rotated() returns a value with floating-point error. When the result is printed to the console it looks exact (though there is a clue in the form of a -0 sometimes showing up as one of the vector components, depending on the vector being rotated).
This results in unexpected, game-breaking behaviour for projects that rely on precise integer grid calculations, which was the case for me.
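The underlying effect is easy to reproduce outside Godot with the same rotation formula (a sketch in Python; `rotated` here is a stand-in for `Vector2.rotated()`, and the rounding at the end is the integer-grid workaround):

```python
import math

def rotated(x, y, angle):
    # The standard 2D rotation formula, as used by Vector2.rotated()
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

# Rotating the integer vector (1, 0) by exactly 90 degrees:
rx, ry = rotated(1.0, 0.0, math.pi / 2)
print(rx == 0.0)   # False: math.cos(pi/2) is ~6.1e-17, not exactly 0

# Workaround for integer-grid code: snap the result back to integers
print((round(rx), round(ry)))   # (0, 1)
```

In Godot itself, `Vector2.round()` or `snapped()` can serve the same snapping purpose.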
### Steps to reproduce
Open and start the MRP
### Minimal reproduction project (MRP)
[Vector.rotated() Bug showcase.zip](https://github.com/user-attachments/files/16674212/Vector.rotated.Bug.showcase.zip) | discussion,topic:core,documentation | low | Critical |
2,475,307,663 | vscode | SCM Graph - HTML tags are omitted from pop-ups for git commit messages | Type: <b>Bug</b>
HTML tags seem to be omitted from the small pop-up that appears when you hover over a commit message. I've noticed this in the "Timeline" section in the left sidebar and in "Source Control."
**Only opening tag, Timeline:**

**Only opening tag, Source Control:**

**Opening and closing tags, Timeline:**

**Opening and closing tags, Source Control:**

VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (8 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.80GB (2.34GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (18)</summary>
Extension|Author (truncated)|Version
---|---|---
ruff|cha|2024.42.0
vscode-eslint|dba|3.0.10
css-property-sorter|Enz|1.2.3
prettier-vscode|esb|11.0.0
vscode-htmlhint|HTM|1.0.5
rainbow-csv|mec|3.12.0
debugpy|ms-|2024.10.0
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.1
autodocstring|njp|0.6.1
sourcery|sou|1.22.0
vscode-stylelint|sty|1.4.0
errorlens|use|3.20.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-icons|vsc|12.8.0
five-server|yan|0.3.1
html-css-class-completion|Zig|1.20.0
</details>
<!-- generated by issue reporter --> | bug,scm | low | Critical |
2,475,359,867 | storybook | [Bug]: External CSS file for lit crashes with "does not provide an export named 'default'" | ### Describe the bug
We use Lit with shadow DOM and we use external stylesheets that we import using `import styles from './button.css?lit-css'`
To achieve this, we added the appropriate configuration for Typescript and Vite.
Unfortunately, because the `.css` extension is handled somewhat magically by Storybook, we get the error mentioned in the title.
Please change the way Storybook handles css and pcss files to handle lit components like ours.
### Reproduction link
https://github.com/johan-gorter/storybook-lit-css-issue
### Reproduction steps
1. Start storybook and view the Button story
2. The syntax error occurs
### System
```bash
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 Intel(R) Core(TM) i7-10875H CPU @ 2.30GHz
Binaries:
Node: 20.11.0 - C:\Program Files\nodejs\node.EXE
Yarn: 1.22.22 - C:\Program Files\nodejs\yarn.CMD <----- active
npm: 9.5.0 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.5.0 - C:\Program Files\nodejs\pnpm.CMD
Browsers:
Edge: Chromium (127.0.2651.98)
npmPackages:
@storybook/addon-essentials: ^8.2.9 => 8.2.9
@storybook/addon-links: ^8.2.9 => 8.2.9
@storybook/blocks: ^8.2.9 => 8.2.9
@storybook/test: ^8.2.9 => 8.2.9
@storybook/web-components: ^8.2.9 => 8.2.9
@storybook/web-components-vite: ^8.2.9 => 8.2.9
storybook: ^8.2.9 => 8.2.9
npmGlobalPackages:
storybook: 8.1.2
```
### Additional context
We are currently replacing `?lit-css` with `?raw&lit-css`, which does work (see previous commits). We think the `raw` is actually confusing, because we are also using PostCSS.
It would also be nice to have the default lit template use the shadow DOM, so it looks a bit more like the examples on https://lit.dev/. | bug,help wanted,web-components,lit,sev:S3 | low | Critical |
2,475,391,656 | godot | Bugs with GDExtension API validation logic (`--validate-extension-api` command and CI) | ### Tested versions
- Reproducible in 4.4.dev (826de7976a6add282c7b14d4be2a7e6d775821d8), #90993, and older versions too
### System information
Any
### Issue description
We have some bugs in the GDExtension API validation checks that we perform on CI, which are starting to hamper development, as we need to work around them in non-trivial ways.
#90993 is a good example of that, as it impacts a complex method, `RenderingDevice::draw_list_begin`, which has changed multiple times since Godot 4.0:
- In 4.1-stable, its 9th argument was changed from an `Array` to a `TypedArray<RID>` (`Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments/9': type changed value in new API, from "Array" to "typedarray::RID".`).
- In 4.3-stable, its 9th argument was removed (`Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments': size changed value in new API, from 10 to 9.`).
- In #90993 (intended for 4.4-stable), a new 9th argument with a different name, type and default value was added (`Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments': size changed value in new API, from 9 to 10.`).
To make CI checks pass, I had to add changes to `4.0-stable_4.1-stable.expected` and `4.2-stable_4.3-stable.expected` that reflect changes done in 4.4, which seems wrong to me. It also contradicts previous messages in the same `.expected` files about the API breakage done in their respective versions.
```diff
diff --git a/misc/extension_api_validation/4.0-stable_4.1-stable.expected b/misc/extension_api_validation/4.0-stable_4.1-stable.expected
index 5c3bf07fb2..2b7f9dc662 100644
--- a/misc/extension_api_validation/4.0-stable_4.1-stable.expected
+++ b/misc/extension_api_validation/4.0-stable_4.1-stable.expected
@@ -349,3 +349,11 @@ Validate extension JSON: Error: Hash changed for 'classes/EditorUndoRedoManager/
Validate extension JSON: Error: Hash changed for 'classes/UndoRedo/methods/create_action', from 0AEC1BFC to E87757EB. This means that the function has changed and no compatibility function was provided.
Added a optional parameters with default values. No adjustments should be necessary.
+
+
+GH-90993
+--------
+Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments/9': type changed value in new API, from "Array" to "int".
+
+FIXME: Hack to workaround a bug in our validation logic with a method that changed arguments count, type, and default values multiple times.
+The actual last change triggering this error happened in 4.4 (GH-90993).
diff --git a/misc/extension_api_validation/4.2-stable_4.3-stable.expected b/misc/extension_api_validation/4.2-stable_4.3-stable.expected
index ce8f24c7a9..3de9736c8d 100644
--- a/misc/extension_api_validation/4.2-stable_4.3-stable.expected
+++ b/misc/extension_api_validation/4.2-stable_4.3-stable.expected
@@ -380,3 +380,12 @@ Validate extension JSON: Error: Field 'classes/Image/methods/get_mipmap_offset/r
Type changed to int64_t to support baking large lightmaps.
No compatibility method needed, both GDExtension and C# generate it as int64_t anyway.
+
+
+GH-90993
+--------
+Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments/9': default_value changed value in new API, from "Array[RID]([])" to "0".
+Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments/9': type changed value in new API, from "typedarray::RID" to "int".
+
+FIXME: Hack to workaround a bug in our validation logic with a method that changed arguments count, type, and default values multiple times.
+The actual last change triggering this error happened in 4.4 (GH-90993).
diff --git a/misc/extension_api_validation/4.3-stable.expected b/misc/extension_api_validation/4.3-stable.expected
index 24c7702090..5c869d9914 100644
--- a/misc/extension_api_validation/4.3-stable.expected
+++ b/misc/extension_api_validation/4.3-stable.expected
@@ -14,3 +14,11 @@ Validate extension JSON: Error: Field 'classes/ShapeCast2D/properties/collision_
Validate extension JSON: Error: Field 'classes/ShapeCast3D/properties/collision_result': getter changed value in new API, from "_get_collision_result" to &"get_collision_result".
These getters have been renamed to expose them. GDExtension language bindings couldn't have exposed these properties before.
+
+
+GH-90993
+--------
+Validate extension JSON: Error: Field 'classes/RenderingDevice/methods/draw_list_begin/arguments': size changed value in new API, from 9 to 10.
+
+draw_list_begin added a new optional debug argument called breadcrumb.
+There used to be an RID argument as arg #9 which was removed in GH-84976 but now we're adding a new one in the same location.
```
---
Aside from the above issue, we have a few other long-running problems that trigger recurring warnings in the logs, but somehow don't seem to break the CI checks, or we'd have fixed them ages ago:
```
ERROR: New API lacks base array: enums
at: compare_dict_array (./core/extension/extension_api_dump.cpp:1362)
```
Happens when checking the `4.0-stable_4.1-stable.expected` and `4.1-stable_4.2-stable.expected` files.
```
ERROR: Validate extension JSON: Missing field in current API 'classes/RenderingDevice/methods/draw_list_end': arguments. This is a bug.
at: compare_dict_array (./core/extension/extension_api_dump.cpp:1443)
ERROR: Validate extension JSON: Missing field in current API 'classes/RenderingDevice/methods/compute_list_begin': arguments. This is a bug.
at: compare_dict_array (./core/extension/extension_api_dump.cpp:1443)
ERROR: Validate extension JSON: Missing field in current API 'classes/RenderingDevice/methods/compute_list_end': arguments. This is a bug.
at: compare_dict_array (./core/extension/extension_api_dump.cpp:1443)
```
Happens for all stable version checks, but somehow not for the dev version (currently `4.3-stable.expected`). The same was true in the previous dev cycle, where it didn't occur for `4.2-stable.expected`, so it's not that it was fixed in 4.3 / 4.4.dev.
### Steps to reproduce
```
./misc/scripts/validate_extension_api.sh godot
```
### Minimal reproduction project (MRP)
n/a | bug,topic:buildsystem,topic:gdextension | low | Critical |
2,475,467,927 | rust | `unused_qualification` breaks code for older rust versions | ### Code
```Rust
// Cargo.toml: a rust-version <= 1.79 (while being on 1.80 or nightly)
#![warn(unused_qualifications)]
let align = core::mem::align_of::<u128>();
dbg!(align);
```
### Current output
```Shell
warning: unnecessary qualification
--> src/main.rs:9:17
|
9 | let align = core::mem::align_of::<u128>();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: the lint level is defined here
--> src/main.rs:2:9
|
2 | #![warn(unused_qualifications, unused_imports)]
| ^^^^^^^^^^^^^^^^^^^^^
help: remove the unnecessary path segments
|
9 - let align = core::mem::align_of::<u128>();
9 + let align = align_of::<u128>();
|
```
### Desired output
The lint should not be triggered, as `core::mem::align_of` has only been part of the prelude since 1.80 (see https://github.com/rust-lang/rust/pull/123168/)
### Rationale and extra context
Adding `use core::mem::align_of` (or `std::…`) and using it that way will not trigger `unused_imports` (warn-by-default) on 1.80, which is somewhat weird: applying the fix for `unused_qualifications` changes the MSRV, while `unused_imports` doesn't.
I somewhat expect this to be closed as won't fix as `rustc` itself doesn't care about cargo related settings?
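For what it's worth, the MSRV-safe form is the imported one (a minimal sketch; `u128_align` is just an illustrative wrapper). Keeping the explicit `use` compiles on toolchains before 1.80, where `align_of` was not yet in the prelude, unlike the form the `unused_qualifications` fix suggests:

```rust
// MSRV-friendly: the explicit import works on pre-1.80 toolchains,
// where `align_of` is not yet in the prelude.
use core::mem::align_of;

fn u128_align() -> usize {
    align_of::<u128>()
}

fn main() {
    println!("align_of::<u128>() = {}", u128_align());
}
```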
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
```
### Anything else?
_No response_ | A-lints,A-diagnostics,T-compiler,L-unused_qualifications | low | Minor |
2,475,478,259 | electron | [Bug]: webContents.isLoading() is true when did-finish-load fires (or promise of loadURL()/loadFile() resolves) | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.4.0
### What operating system(s) are you using?
Windows, macOS
### Operating System Version
Windows 11 Pro 22631.4037, macOS 14.6.1
### What arch are you using?
x64, arm64
### Last Known Working Electron version
_No response_
### Expected Behavior
After the `did-finish-load` event fires `webContents.isLoading()` should return `false`.
### Actual Behavior
`webContents.isLoading()` returns `true` when the `did-finish-load` event fires. It's possible to use `setImmediate()` to work around this issue.
The same issue exists when awaiting the promise of `loadURL()` or `loadFile()`.
### Testcase Gist URL
https://gist.github.com/florianreinhart/a74f9f1e0d69251d1ab093db7c1ab599
### Additional Information
_No response_ | platform/windows,bug :beetle:,has-repro-gist,31-x-y | low | Critical |
2,475,517,328 | opencv | connectedComponentsWithStats returns different results when invoked from multiple threads | ### System Information
```
-- General configuration for OpenCV 4.10.0 =====================================
-- Version control: unknown
--
-- Platform:
-- Timestamp: 2024-08-20T11:38:00Z
-- Host: Linux 3.10.0-1160.118.1.el7.x86_64 x86_64
-- CMake: 3.28.1
-- CMake generator: Unix Makefiles
-- CMake build tool: /usr/bin/gmake
-- Configuration: Release
--
-- CPU/HW features:
-- Baseline: SSE SSE2 SSE3
-- requested: SSE3
-- Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2
-- requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
-- SSE4_1 (18 files): + SSSE3 SSE4_1
-- SSE4_2 (2 files): + SSSE3 SSE4_1 POPCNT SSE4_2
-- FP16 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
-- AVX (9 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
-- AVX2 (38 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
--
-- C/C++:
-- Built as dynamic libs?: NO
-- C++ standard: 11
-- C++ Compiler: /usr/bin/c++ (ver 4.8.5)
-- C++ flags (Release): -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wsign-promo -Wuninitialized -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
-- C++ flags (Debug): -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wsign-promo -Wuninitialized -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
-- C Compiler: /usr/bin/cc
-- C flags (Release): -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wuninitialized -Wno-unnamed-type-template-args -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
-- C flags (Debug): -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wuninitialized -Wno-unnamed-type-template-args -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
-- Linker flags (Release): -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
-- Linker flags (Debug): -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
-- ccache: NO
-- Precompiled headers: NO
-- Extra dependencies: /lib64/libjpeg.so /lib64/libpng.so /lib64/libtiff.so /lib64/libz.so dl m pthread rt
-- 3rdparty dependencies: libprotobuf ade ittnotify libwebp libopenjp2 IlmImf ippiw ippicv
--
-- OpenCV modules:
-- To be built: calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo stitching ts video videoio
-- Disabled: world
-- Disabled by dependency: -
-- Unavailable: java python2 python3
-- Applications: tests perf_tests apps
-- Documentation: NO
-- Non-free algorithms: NO
--
-- GUI: NONE
-- GTK+: NO
-- VTK support: NO
--
-- Media I/O:
-- ZLib: /lib64/libz.so (ver 1.2.7)
-- JPEG: /lib64/libjpeg.so (ver 62)
-- WEBP: build (ver encoder: 0x020f)
-- PNG: /lib64/libpng.so (ver 1.5.13)
-- TIFF: /lib64/libtiff.so (ver 42 / 4.0.3)
-- JPEG 2000: build (ver 2.5.0)
-- OpenEXR: build (ver 2.3.0)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- DC1394: NO
-- FFMPEG: NO
-- avcodec: NO
-- avformat: NO
-- avutil: NO
-- swscale: NO
-- avresample: NO
-- GStreamer: NO
-- v4l/v4l2: YES (linux/videodev2.h)
--
-- Parallel framework: pthreads
--
-- Trace: YES (with Intel ITT)
--
-- Other third-party libraries:
-- Intel IPP: 2021.11.0 [2021.11.0]
-- at: /data/users/guanyang/dev_srcs/opencv-4.10.0/build/3rdparty/ippicv/ippicv_lnx/icv
-- Intel IPP IW: sources (2021.11.0)
-- at: /data/users/guanyang/dev_srcs/opencv-4.10.0/build/3rdparty/ippicv/ippicv_lnx/iw
-- VA: NO
-- Lapack: NO
-- Eigen: NO
-- Custom HAL: NO
-- Protobuf: build (3.19.1)
-- Flatbuffers: builtin/3rdparty (23.5.9)
--
-- OpenCL: YES (no extra features)
-- Include path: /data/users/guanyang/dev_srcs/opencv-4.10.0/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python (for build): /usr/bin/python3
--
-- Java:
-- ant: NO
-- Java: NO
-- JNI: NO
-- Java wrappers: NO
-- Java tests: NO
--
-- Install to: /data/users/guanyang/dev_libs/opencv/4.10.0
```
### Detailed description
When I use the `cv::connectedComponentsWithStats` function from multiple threads, I get a random result every time, and it even changes the input image values, even though the parameter is declared `const cv::Mat&`! When I set OpenCV's thread count to 1, the result is identical on every run and the input Mat is never modified, so the parallel implementation appears to be incorrect.
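Until the parallel implementation is fixed, a common mitigation is to serialize every call to the non-thread-safe function behind one mutex. A generic sketch of that pattern in Python (`unsafe_increment` is a stand-in for the OpenCV call, not OpenCV itself):

```python
import threading

counter = {"n": 0}

def unsafe_increment():
    # Unsynchronized read-modify-write: the stand-in for the unsafe call
    v = counter["n"]
    counter["n"] = v + 1

cc_lock = threading.Lock()

def safe_call():
    with cc_lock:          # serialize every caller through one mutex
        unsafe_increment()

def worker():
    for _ in range(1000):
        safe_call()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["n"])  # always 8000 once the calls are serialized
```

The author's observation that forcing OpenCV to a single thread avoids the problem also suggests `cv::setNumThreads(1)` as an alternative mitigation in C++.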
### Steps to reproduce
Just invoke this function on a large image from multiple threads.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: imgproc,confirmed | low | Critical |
2,475,536,339 | flutter | App shows a blank page when there is an exception in appState (Using go_router and Riverpod) | ### What package does this bug report belong to?
go_router
### What target platforms are you seeing this bug on?
Android, Web, Windows
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
  async:
    dependency: transitive
    description:
      name: async
      sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
      url: "https://pub.dev"
    source: hosted
    version: "2.11.0"
  boolean_selector:
    dependency: transitive
    description:
      name: boolean_selector
      sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
      url: "https://pub.dev"
    source: hosted
    version: "2.1.1"
  characters:
    dependency: transitive
    description:
      name: characters
      sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
      url: "https://pub.dev"
    source: hosted
    version: "1.3.0"
  clock:
    dependency: transitive
    description:
      name: clock
      sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
      url: "https://pub.dev"
    source: hosted
    version: "1.1.1"
  collection:
    dependency: transitive
    description:
      name: collection
      sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
      url: "https://pub.dev"
    source: hosted
    version: "1.18.0"
  cupertino_icons:
    dependency: "direct main"
    description:
      name: cupertino_icons
      sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
      url: "https://pub.dev"
    source: hosted
    version: "1.0.8"
  fake_async:
    dependency: transitive
    description:
      name: fake_async
      sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
      url: "https://pub.dev"
    source: hosted
    version: "1.3.1"
  flutter:
    dependency: "direct main"
    description: flutter
    source: sdk
    version: "0.0.0"
  flutter_lints:
    dependency: "direct dev"
    description:
      name: flutter_lints
      sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
      url: "https://pub.dev"
    source: hosted
    version: "4.0.0"
  flutter_riverpod:
    dependency: "direct main"
    description:
      name: flutter_riverpod
      sha256: "0f1974eff5bbe774bf1d870e406fc6f29e3d6f1c46bd9c58e7172ff68a785d7d"
      url: "https://pub.dev"
    source: hosted
    version: "2.5.1"
  flutter_test:
    dependency: "direct dev"
    description: flutter
    source: sdk
    version: "0.0.0"
  flutter_web_plugins:
    dependency: transitive
    description: flutter
    source: sdk
    version: "0.0.0"
  go_router:
    dependency: "direct main"
    description:
      name: go_router
      sha256: ddc16d34b0d74cb313986918c0f0885a7ba2fc24d8fb8419de75f0015144ccfe
      url: "https://pub.dev"
    source: hosted
    version: "14.2.3"
  leak_tracker:
    dependency: transitive
    description:
      name: leak_tracker
      sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
      url: "https://pub.dev"
    source: hosted
    version: "10.0.5"
  leak_tracker_flutter_testing:
    dependency: transitive
    description:
      name: leak_tracker_flutter_testing
      sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
      url: "https://pub.dev"
    source: hosted
    version: "3.0.5"
  leak_tracker_testing:
    dependency: transitive
    description:
      name: leak_tracker_testing
      sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
      url: "https://pub.dev"
    source: hosted
    version: "3.0.1"
  lints:
    dependency: transitive
    description:
      name: lints
      sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
      url: "https://pub.dev"
    source: hosted
    version: "4.0.0"
  logging:
    dependency: transitive
    description:
      name: logging
      sha256: "623a88c9594aa774443aa3eb2d41807a48486b5613e67599fb4c41c0ad47c340"
      url: "https://pub.dev"
    source: hosted
    version: "1.2.0"
  matcher:
    dependency: transitive
    description:
      name: matcher
      sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
      url: "https://pub.dev"
    source: hosted
    version: "0.12.16+1"
  material_color_utilities:
    dependency: transitive
    description:
      name: material_color_utilities
      sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
      url: "https://pub.dev"
    source: hosted
    version: "0.11.1"
  meta:
    dependency: transitive
    description:
      name: meta
      sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
      url: "https://pub.dev"
    source: hosted
    version: "1.15.0"
  path:
    dependency: transitive
    description:
      name: path
      sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
      url: "https://pub.dev"
    source: hosted
    version: "1.9.0"
  riverpod:
    dependency: transitive
    description:
      name: riverpod
      sha256: f21b32ffd26a36555e501b04f4a5dca43ed59e16343f1a30c13632b2351dfa4d
      url: "https://pub.dev"
    source: hosted
    version: "2.5.1"
  sky_engine:
    dependency: transitive
    description: flutter
    source: sdk
    version: "0.0.99"
  source_span:
    dependency: transitive
    description:
      name: source_span
      sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
      url: "https://pub.dev"
    source: hosted
    version: "1.10.0"
  stack_trace:
    dependency: transitive
    description:
      name: stack_trace
      sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
      url: "https://pub.dev"
    source: hosted
    version: "1.11.1"
  state_notifier:
    dependency: transitive
    description:
      name: state_notifier
      sha256: b8677376aa54f2d7c58280d5a007f9e8774f1968d1fb1c096adcb4792fba29bb
      url: "https://pub.dev"
    source: hosted
    version: "1.0.0"
  stream_channel:
    dependency: transitive
    description:
      name: stream_channel
      sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
      url: "https://pub.dev"
    source: hosted
    version: "2.1.2"
  string_scanner:
    dependency: transitive
    description:
      name: string_scanner
      sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
      url: "https://pub.dev"
    source: hosted
    version: "1.2.0"
  term_glyph:
    dependency: transitive
    description:
      name: term_glyph
      sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
      url: "https://pub.dev"
    source: hosted
    version: "1.2.1"
  test_api:
    dependency: transitive
    description:
      name: test_api
      sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
      url: "https://pub.dev"
    source: hosted
    version: "0.7.2"
  vector_math:
    dependency: transitive
    description:
      name: vector_math
      sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
      url: "https://pub.dev"
    source: hosted
    version: "2.1.4"
  vm_service:
    dependency: transitive
    description:
      name: vm_service
      sha256: f652077d0bdf60abe4c1f6377448e8655008eef28f128bc023f7b5e8dfeb48fc
      url: "https://pub.dev"
    source: hosted
    version: "14.2.4"
sdks:
  dart: ">=3.5.0 <4.0.0"
  flutter: ">=3.18.0-18.0.pre.54"
```
</details>
### Steps to reproduce
Here are the steps to reproduce the "Duplicate GlobalKey detected in widget tree" error in the provided Flutter app:
1. **Run the app:** launch the Flutter app by running the `main.dart` file.
2. **Log in:** on the LoginPage, click the Login button to authenticate. This will redirect you to the HomePage.
3. **Navigate to the Settings page:** use the bottom navigation bar to switch to the Settings page.
4. **Trigger the error:** on the SettingsPage, click the "Log Out That Throws Exception" button.
5. **Observe the error:** an error is thrown with a message indicating a "Duplicate GlobalKey detected in the widget tree." This error occurs because the GlobalKey is reused improperly when the state is not correctly managed after the exception is thrown during sign-out.

These steps should consistently reproduce the error.
Here is a full repo link for detailed analysis: https://github.com/Zia-Ch/test_project
### Expected results
I am using Riverpod for state management and go_router for routing.
I have three pages: Login, Home, and Settings.
Home and Settings are displayed with a bottom nav bar.
After login, my app navigates to the main dashboard, called MainPage in my app.
On the Settings page there are two buttons: one is a normal sign-out, and the other throws an exception (to reproduce the same effect I faced).
When this exception is thrown, the app shows a blank page and throws the error listed in the actual results section.
### My Expected Results are:
The app should not log out and must remain on the same page without showing any changes.
### Actual results
Upon clicking the 'Log Out That Throws Exception' button, the app shows a blank page (along with the bottom nav bar) and shows the error listed below:
```
FlutterError (Duplicate GlobalKey detected in widget tree.
The following GlobalKey was specified multiple times in the widget tree. This will lead to parts of the widget tree being truncated unexpectedly, because the second time a key is seen, the previous instance is moved to the new location. The key was:
- [GlobalObjectKey int#62661]
This was determined by noticing that after the widget with the above global key was moved out of its previous parent, that previous parent never updated during this frame, meaning that it either did not update at all or updated before the widget was moved, in either case implying that it still thinks that it should have a child with that global key.
The specific parent that did not update after having one or more children forcibly removed due to GlobalKey reparenting is:
- KeyedSubtree-[GlobalKey#d6f1a]
A GlobalKey can only be specified on one widget at a time in the widget tree.)
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:go_router/go_router.dart';
void main() {
runApp(const ProviderScope(child: MyApp()));
}
class MyApp extends ConsumerWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
final router = ref.watch(routeProvider);
return MaterialApp.router(
title: 'Flutter Test',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.blue),
useMaterial3: true,
),
routerConfig: router,
);
}
}
final GlobalKey<NavigatorState> _rootNavigatorKey =
GlobalKey<NavigatorState>(debugLabel: 'root');
final GlobalKey<NavigatorState> _shellNavigatorKey =
GlobalKey<NavigatorState>(debugLabel: 'shell');
final routeProvider = Provider((ref) {
final authState = ref.watch(authStateProvider);
return GoRouter(
navigatorKey: _rootNavigatorKey,
debugLogDiagnostics: true,
initialLocation: '/',
routes: <RouteBase>[
ShellRoute(
navigatorKey: _shellNavigatorKey,
builder: (context, state, child) => MainPage(
body: child,
),
routes: <RouteBase>[
GoRoute(
path: AppPages.home.toPath,
name: AppPages.home.name,
builder: (context, state) {
return const HomePage();
},
),
GoRoute(
path: AppPages.settings.toPath,
name: AppPages.settings.name,
builder: (context, state) {
return const SettingsPage();
},
),
],
),
GoRoute(
path: AppPages.login.toPath,
name: AppPages.login.name,
builder: (context, state) => const LoginPage(),
),
],
redirect: (context, state) {
final authenticated = authState.status == AuthStatus.authenticated;
if (!authenticated) {
return AppPages.login.toPath;
}
return null;
},
);
});
enum AppPages {
home(0),
settings(1),
login(2);
final int value;
const AppPages(this.value);
}
extension AppPagesX on int {
AppPages get toAppPage {
switch (this) {
case 0:
return AppPages.home;
case 1:
return AppPages.settings;
case 2:
return AppPages.login;
default:
return AppPages.home;
}
}
}
extension XPath on AppPages {
String get toPath {
switch (this) {
case AppPages.home:
return '/';
case AppPages.settings:
return '/settings';
case AppPages.login:
return '/login';
default:
return '/';
}
}
}
enum AuthStatus {
initial,
authenticated,
unathenticated,
}
class AuthState {
final AuthStatus status;
final String? errorMessage;
final String message;
AuthState({
required this.status,
this.errorMessage,
this.message = '',
});
factory AuthState.initial() {
return AuthState(
status: AuthStatus.initial,
message: 'Initial',
errorMessage: null,
);
}
factory AuthState.authenticated() {
return AuthState(
status: AuthStatus.authenticated,
message: 'Authenticated',
errorMessage: null,
);
}
factory AuthState.unathenticated() {
return AuthState(
status: AuthStatus.unathenticated,
message: 'Unauthenticated',
errorMessage: 'Something went wrong',
);
}
AuthState copyWith({
AuthStatus? status,
String? errorMessage,
String? message,
}) {
return AuthState(
status: status ?? this.status,
errorMessage: errorMessage,
message: message ?? this.message,
);
}
}
final authStateProvider =
StateNotifierProvider<AuthStateController, AuthState>((ref) {
return AuthStateController();
});
class AuthStateController extends StateNotifier<AuthState> {
AuthStateController() : super(AuthState.initial());
final AuthRepository _authRepository = AuthRepository();
void signIn() async {
await _authRepository.signIn();
state = AuthState.authenticated();
}
void signOut() async {
await _authRepository.signOut();
state = AuthState.unathenticated();
}
void signOutThrowsException() async {
try {
await _authRepository.signOutThrowsException();
} catch (_) {
state = state.copyWith(
status: AuthStatus.authenticated,
errorMessage: 'Something went wrong',
message: 'stillAuthenticated',
);
}
}
}
class AuthRepository {
AuthRepository();
Future<bool> signIn() async {
return true;
}
Future<bool> signOut() async {
return true;
}
Future<bool> signOutThrowsException() {
try {
throw Exception('Something went wrong');
} catch (_) {
throw Exception('Something went wrong');
}
}
}
final selectedPageProvider = StateProvider<AppPages>((ref) {
return AppPages.home;
});
class MainPage extends ConsumerWidget {
const MainPage({required this.body, super.key});
final Widget body;
@override
Widget build(BuildContext context, WidgetRef ref) {
final page = ref.watch(selectedPageProvider);
return Scaffold(
body: body,
bottomNavigationBar: NavigationBar(
onDestinationSelected: (value) {
final page = value.toAppPage;
ref.read(selectedPageProvider.notifier).state = page;
context.go(page.toPath);
},
selectedIndex: page.index,
destinations: destinations,
),
);
}
}
List<NavigationDestination> destinations = const [
NavigationDestination(icon: Icon(Icons.home), label: "Home"),
NavigationDestination(icon: Icon(Icons.settings), label: "Settings"),
];
class HomePage extends StatelessWidget {
const HomePage({super.key});
@override
Widget build(BuildContext context) {
return const Scaffold(
body: Center(
child: Text('Home'),
),
);
}
}
class SettingsPage extends ConsumerWidget {
const SettingsPage({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
const Text('Settings'),
const SizedBox(height: 10),
ElevatedButton(
onPressed: () {
ref.read(authStateProvider.notifier).signOut();
},
child: const Text("Simple Logout"),
),
const SizedBox(height: 10),
ElevatedButton(
onPressed: () {
ref.read(authStateProvider.notifier).signOutThrowsException();
},
child: const Text('Log Out That Throws Exception'),
),
],
),
),
);
}
}
class LoginPage extends ConsumerWidget {
const LoginPage({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
return Scaffold(
body: Center(
child: ElevatedButton(
onPressed: () {
ref.read(authStateProvider.notifier).signIn();
},
child: const Text("Login"),
),
),
);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/994b5b29-027c-4eb7-81c5-f4c065d6e237
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.0, on Microsoft Windows [Version 10.0.22621.4037], locale en-PK)
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Chrome - develop for the web
[✓] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.6.5)
[✓] Android Studio (version 2022.2)
[✓] VS Code (version 1.92.2)
[✓] Connected device (3 available)
[✓] Network resources
• No issues found!
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.25 | low | Critical |
2,475,562,958 | rust | We should document the exact minimum versions of build tools required to build a given Rust toolchain | As noticed in https://github.com/rust-lang/rust/pull/128722#issuecomment-2297605573 when trying to bump `cc` to a version which [drops support for VS 2013](https://github.com/rust-lang/cc-rs/pull/1046).
Also cc:
> We also don't document minimum versions of binutils etc on the platform support page. Maybe we should! But that's a pre-existing issue and bumping cc or not doesn't change that.
_Originally posted by @ChrisDenton in https://github.com/rust-lang/rust/issues/129290#issuecomment-2298050637_
`INSTALL.md` exists, but that seems more like some general advice for getting started, but not actual documented guarantees in terms of what kind of build environment is supported. | C-enhancement,T-compiler,T-bootstrap,A-docs | low | Major |
2,475,587,913 | flutter | [Windows 11] Flutter debugging cannot create symlinks when using packages | ### Steps to reproduce
1. Install Flutter, projects with no plugins build normally as expected
2. Add any package to a project (in my case it's media_kit)
3. Attempt to run debugging - fails with the following error:
`Error: ERROR_ACCESS_DENIED file system exception thrown while trying to create a symlink from source to dest`
Project is on NTFS C: drive, Flutter SDK is also on C: drive. Developer mode is ON. No antivirus is being used. I have already run `fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1` as described in [this](https://github.com/flutter/flutter/issues/120196#issuecomment-1424194604) comment.
Visual Studio Code is installed for all users.
Project is only for a non-admin user.
Flutter SDK is installed for non-admin user.
Android SDK/Studio is not installed as there are no plans for Android support for the project.
I have not tried running VSC as administrator, because I do not want to develop the project on the administrator account. A normal account should be enough.
I can make manual symlinks as a normal user through Command prompt, but not through PowerShell.
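A quick way to probe whether the current user can create symlinks at all is the sketch below. This is a hypothetical helper (not Flutter tooling code); it simply attempts the same kind of `symlink` call the tool makes and reports whether it succeeds:

```python
import os
import tempfile

def can_create_symlink() -> bool:
    """Probe whether the current user/process can create file symlinks."""
    with tempfile.TemporaryDirectory() as d:
        target = os.path.join(d, "target.txt")
        link = os.path.join(d, "link.txt")
        with open(target, "w") as f:
            f.write("x")
        try:
            # On Windows, os.symlink raises OSError when the user lacks
            # symlink privileges (Developer Mode off / insufficient rights).
            os.symlink(target, link)
        except OSError:
            return False
        return os.path.islink(link)

print(can_create_symlink())
```

Running this from the same non-admin account that fails in Flutter would help narrow down whether the restriction is OS-level or specific to how the tool invokes symlink creation.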
### Actual results
Debugging fails with the following error:
`Error: ERROR_ACCESS_DENIED file system exception thrown while trying to create a symlink from source to dest`
### Logs
<details open>
<summary>Logs</summary>
```console
Launching lib\main.dart on Windows in debug mode...
[+4199 ms] Error: ERROR_ACCESS_DENIED file system exception thrown while trying to create a symlink from source to dest
[ +4 ms] "flutter run" took 5 325ms.
[ +8 ms]
#0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:874:9)
<asynchronous suspension>
#2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1408:27)
<asynchronous suspension>
#3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:420:9)
<asynchronous suspension>
#6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9)
<asynchronous suspension>
#9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#10 main (package:flutter_tools/executable.dart:93:3)
<asynchronous suspension>
Exited (1).
```
</details>
### Flutter Doctor output
<details open>
<summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.0, on Microsoft Windows [Version 10.0.22631.4037], locale cs-CZ)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[X] Android toolchain - develop for Android devices
X Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/windows-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[X] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.6)
[!] Android Studio (not installed)
[√] Connected device (2 available)
[√] Network resources
! Doctor found issues in 3 categories.
```
</details>
| c: crash,tool,platform-windows,P2,a: plugins,team-windows,triaged-windows | low | Critical |
2,475,608,689 | material-ui | [colorManipulator] Theme colors in the new syntax without comma (`rgb(106 27 154)`) fail to generate correct contrast color for buttons | ### Steps to reproduce
Link to live example: [codesandbox](https://codesandbox.io/s/fancy-fast-3cknx2?file=/src/Demo.tsx)
Steps:
1. Assign a dark theme color in the newer syntax using the `rgb` color function, for example `rgb(106 27 154)`
2. Render a contained button that uses this theme color as its background: `<Button variant="contained">click me</Button>`
3. Observe the text color being dark as well
4. Change the theme color to the legacy syntax with commas: `rgb(106, 27, 154)`
5. Observe the button now having a white contrast text color
### Current behavior
A dark rgb theme color in the new color syntax results in the wrong contrast text being applied:
<img width="127" alt="CleanShot 2024-08-20 at 15 01 32@2x" src="https://github.com/user-attachments/assets/f9095d93-1b02-4f1d-93a4-a7188781f9bf">
I reckon this might affect other areas as well, but I have only identified this behavior for the `Button` component
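A language-agnostic sketch of the suspected failure mode (Python here, not MUI's actual `colorManipulator` code): a parser that splits the `rgb()` payload on commas only never separates the channels of the newer space-separated syntax, so the computed luminance, and therefore the contrast text, comes out wrong. Note that in JavaScript, `parseInt("106 27 154")` would silently return `106` rather than throwing, which can make the mis-parse harder to spot:

```python
import re

def parse_rgb_comma_only(color: str):
    """Naive parser that assumes the legacy comma-separated syntax."""
    inner = color[color.index("(") + 1 : color.rindex(")")]
    return tuple(int(v) for v in inner.split(","))

def parse_rgb_tolerant(color: str):
    """Accepts both 'rgb(r, g, b)' and the newer 'rgb(r g b)' syntax."""
    inner = color[color.index("(") + 1 : color.rindex(")")]
    return tuple(int(v) for v in re.split(r"[,\s]+", inner.strip()))

print(parse_rgb_comma_only("rgb(106, 27, 154)"))  # (106, 27, 154)
print(parse_rgb_tolerant("rgb(106 27 154)"))      # (106, 27, 154)

try:
    parse_rgb_comma_only("rgb(106 27 154)")  # no commas: channels never split
except ValueError as err:
    print("naive parser fails:", err)
```

Splitting on commas and/or whitespace, as in the tolerant variant, handles both CSS syntaxes.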
### Expected behavior
I expect a button with a dark theme color background to have white contrast text:
<img width="127" alt="CleanShot 2024-08-20 at 15 03 31@2x" src="https://github.com/user-attachments/assets/2a7b5d9e-6471-4582-8242-273781c03cf9">
### Context
_No response_
### Your environment
Chrome Version 127.0.6533.120 (Official Build) (arm64)
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 12.4
Binaries:
Node: 18.13.0 - ~/.nvm/versions/node/v18.13.0/bin/node
npm: 8.19.3 - ~/.nvm/versions/node/v18.13.0/bin/npm
pnpm: Not Found
Browsers:
Chrome: 127.0.6533.120
Edge: Not Found
Safari: 15.5
npmPackages:
@emotion/react: 11.13.0
@emotion/styled: 11.13.0
@mui/base: 5.0.0-beta.40
@mui/core-downloads-tracker: 5.16.6
@mui/icons-material: 5.16.6
@mui/lab: 5.0.0-alpha.173
@mui/material: 5.16.6
@mui/private-theming: 5.16.6
@mui/styled-engine: 5.16.6
@mui/system: 5.16.6
@mui/types: 7.2.15
@mui/utils: 5.16.6
@mui/x-date-pickers: 7.12.0
@types/react: 18.3.3
react: 18.3.1
react-dom: 18.3.1
styled-components: 6.1.12
typescript: ^5.5.4 => 5.5.4
```
</details>
**Search keywords**: contrast, comma, rgb | package: system,enhancement,customization: css | low | Minor |
2,475,626,384 | next.js | Error when using `experimental.workerThreads` and webpack config | ### Link to the code that reproduces this issue
https://github.com/Janpot/next-worker-threads-repro
### To Reproduce
1. Clone https://github.com/Janpot/next-worker-threads-repro
1. Run `pnpm install`
1. Run `pnpm build`
The Next.js config combines custom `webpack` with `experimental.workerThreads`.
```tsx
// ./next.config.mjs
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
experimental: {
workerThreads: true,
},
webpack: (config) => config,
};
export default nextConfig;
```
### Current vs. Expected behavior
It currently throws an error:
```
▲ Next.js 15.0.0-canary.121
- Experiments (use with caution):
· workerThreads
✓ Linting and checking validity of types
Creating an optimized production build ...
✓ Compiled successfully
✓ Collecting page data
> Build error occurred
DOMException [DataCloneError]: (config) => config could not be cloned.
at new DOMException (node:internal/per_context/domexception:53:5)
at Worker.postMessage (node:internal/worker:366:5)
at ExperimentalWorker.send (~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:17480)
at WorkerPool.send (~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:6183)
at Farm._process (~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:2094)
at Farm._push (~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:2278)
at ~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:1712
at new Promise (<anonymous>)
at Farm.doWork (~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:1257)
at Worker._callFunctionWithArgs (~/node_modules/.pnpm/next@15.0.0-canary.121_react-dom@19.0.0-rc-d025ddd3-20240722_react@19.0.0-rc-d025ddd3-2024072_2xa4b5m2p52gsklkteytl7vbtu/node_modules/next/dist/compiled/jest-worker/index.js:1:23770)
Generating static pages (0/3) [ ] ELIFECYCLE Command failed with exit code 1.
```
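The error is consistent with how worker threads receive data: `postMessage` uses the structured clone algorithm, which cannot clone functions, and the config here contains the function-valued `webpack: (config) => config`. The same class of limitation exists when handing work to a Python process pool; the following is an analogy only, not Next.js internals:

```python
import pickle

# Function-valued option, like `webpack: (config) => config` in next.config.mjs
config = {
    "reactStrictMode": True,
    "webpack": lambda c: c,
}

try:
    # multiprocessing serializes arguments like this before posting to a worker
    pickle.dumps(config)
    cloned = True
except Exception:
    cloned = False

print(cloned)  # False: the lambda cannot be serialized across the boundary
```

A fix on the Next.js side would presumably need to strip or re-resolve function-valued config keys before posting to the worker, rather than cloning the config object wholesale.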
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:21 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 18.20.2
npm: 10.5.0
Yarn: 1.22.22
pnpm: 9.7.1
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
Discovered in https://github.com/mui/material-ui/pull/42824#issuecomment-2298562178 | bug,Output (export/standalone) | low | Critical |
2,475,672,476 | vscode | [12f] potential listener LEAK detected, having 1029 listeners already. MOST frequent listener (792): | ```javascript
Error
at c.create in src/vs/base/common/event.ts:921:15
at h.q [as onDidChange] in src/vs/base/common/event.ts:1128:34
at Object.A [as onWillAddFirstListener] in src/vs/platform/actions/common/menuService.ts:425:34
at u.q [as onDidChange] in out/vs/workbench/workbench.desktop.main.js:91:1280
at D.G in src/vs/workbench/contrib/notebook/browser/view/cellParts/cellOutput.ts:352:36
at p._runFn in src/vs/workbench/contrib/notebook/browser/view/cellParts/cellOutput.ts:218:11
at p.j in src/vs/base/common/observableInternal/autorun.ts:196:10
at p.endUpdate in src/vs/base/common/observableInternal/autorun.ts:237:10
at d.finish in src/vs/base/common/observableInternal/base.ts:372:13
at s.set in src/vs/base/common/observableInternal/base.ts:445:9
at y.setVisible in src/vs/workbench/contrib/notebook/browser/viewModel/cellOutputViewModel.ts:31:16
at a.updateOutputHeight in src/vs/workbench/contrib/notebook/browser/viewModel/codeCellViewModel.ts:449:34
at Pe.Wc in src/vs/workbench/contrib/notebook/browser/notebookEditorWidget.ts:3039:9
at u.value in src/vs/workbench/contrib/notebook/browser/view/renderers/backLayerWebView.ts:623:29
at o.B in src/vs/base/common/event.ts:1230:13
at o.fire in src/vs/base/common/event.ts:1261:9
at <anonymous> in src/vs/workbench/contrib/webview/browser/webviewElement.ts:198:20
at handler in src/vs/workbench/contrib/webview/browser/webviewElement.ts:510:35
at Set.forEach (<anonymous>)
at H.onmessage in src/vs/workbench/contrib/webview/browser/webviewElement.ts:510:16
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=fee1edb8d6d72a0ddff41e5f71a671c23ed924b9&bH=ba511a03-9df7-9d4f-76b4-5e17f6634dc8) | error-telemetry,freeze-slow-crash-leak | low | Critical |
2,475,675,705 | vscode | InstantiationService has been disposed | ```javascript
Error: InstantiationService has been disposed
at g.m in src/vs/platform/instantiation/common/instantiationService.ts:69:10
at g.invokeFunction in src/vs/platform/instantiation/common/instantiationService.ts:90:8
at H.D in src/vs/workbench/contrib/debug/browser/breakpointEditorContribution.ts:566:66
at V.a in src/vs/workbench/contrib/debug/browser/breakpointEditorContribution.ts:232:66
at V.h in out/vs/workbench/workbench.desktop.main.js:97:20944
at V.g in src/vs/base/common/async.ts:1033:9
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=fee1edb8d6d72a0ddff41e5f71a671c23ed924b9&bH=bed3ce91-3c80-8072-ddf1-6de94211ecb4) | bug,debug,error-telemetry | low | Critical |
2,475,676,689 | vscode | FileLocationKind is not actionable. Does the matcher have a filePrefix? This should never happen. | ```javascript
Error: FileLocationKind is not actionable. Does the matcher have a filePrefix? This should never happen.
at o in src/vs/workbench/contrib/tasks/common/problemMatcher.ts:234:9
at $NI in src/vs/workbench/contrib/tasks/common/problemMatcher.ts:206:27
at $NI in src/vs/workbench/contrib/tasks/common/problemMatcher.ts:409:10
at T.f in src/vs/workbench/contrib/tasks/common/problemMatcher.ts:398:21
at T.handle in src/vs/workbench/contrib/tasks/common/problemMatcher.ts:503:23
at p.L in src/vs/workbench/contrib/tasks/common/problemCollectors.ts:201:28
at p.H in src/vs/workbench/contrib/tasks/common/problemCollectors.ts:165:17
at p.G in src/vs/workbench/contrib/tasks/common/problemCollectors.ts:376:28
at <anonymous> in src/vs/workbench/contrib/tasks/common/problemCollectors.ts:122:17
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=fee1edb8d6d72a0ddff41e5f71a671c23ed924b9&bH=87dfc473-af15-a401-ec27-1a9e65dc60bc) | error-telemetry,debt | low | Critical |
2,475,677,269 | vscode | Model is disposed! | ```javascript
Error: Model is disposed!
at U.ib in src/vs/editor/common/model/textModel.ts:421:10
at U.getLineFirstNonWhitespaceColumn in src/vs/editor/common/model/textModel.ts:865:8
at q in src/vs/workbench/contrib/debug/browser/breakpointEditorContribution.ts:176:29
at createCandidateDecorations in src/vs/workbench/contrib/debug/browser/breakpointEditorContribution.ts:541:40
at setCandidateDecorations in src/vs/workbench/contrib/debug/browser/breakpointEditorContribution.ts:603:6
at callback in src/vs/editor/common/model/textModel.ts:1622:13
at U.changeDecorations in src/vs/editor/common/model/textModel.ts:1592:16
at x.changeDecorations in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1257:32
at H.D in src/vs/workbench/contrib/debug/browser/breakpointEditorContribution.ts:581:21
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=fee1edb8d6d72a0ddff41e5f71a671c23ed924b9&bH=a41df17e-79de-41a8-7799-5f6c4a4f0778) | bug,debug,error-telemetry | low | Critical |
2,475,684,313 | deno | Tracking: stabilization of "patch" feature | ```jsonc
// deno.json
{
"patch": [
"../some-package-or-workspace"
]
}
```
https://github.com/denoland/deno/pull/25068
Questions:
- [ ] Is "patch" an ok name? Isn't it confusing with the idea of "patching a package" where you modify a package in the vendor or node_modules directory? What better alternatives are there?
- [ ] Can this work with git repositories? How does that work?
- [ ] What about patching to a remote specifier?
To implement:
- [ ] Patching npm packages.
- [ ] Improving error messages to recommend using the patch feature (https://github.com/denoland/deno/issues/26704) | feat | low | Critical |
2,475,688,622 | flutter | [web] better document the value of multi-threaded rendering; provide examples in the cookbook | ### Use case
I was looking at game engines like Unity and Godot recently, and I noticed that when these engines target the web, they _require_ some technologies, such as `SharedArrayBuffer` (and the associated Cross Origin Isolation). From what I understand, these technologies enable efficient WASM multithreading. (A good primer on this tech is [in this Godot Engine article](https://godotengine.org/article/progress-report-web-export-in-4-3/#single-threaded-web-export).)
Requiring things like `SharedArrayBuffer` brings a problem: they are disabled by default on most servers, and they will probably stay disabled. The thing is, when SharedArrayBuffer is enabled on a page, some pretty basic web things stop working. For example, embedding (such as embedding a YouTube video or an ad). Also, even if you decide to enable it, it's non-trivial on localhost (needs SSH), and hard to impossible on some popular hosting platforms (e.g. GitHub Pages doesn't have that option).
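For reference, "enabling it" amounts to serving two response headers (COOP and COEP, per the web platform spec; nothing here is Flutter-specific). A minimal local static server that sets them might look like this sketch:

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# The two response headers that make a page cross-origin isolated
# (crossOriginIsolated === true), which is what unlocks SharedArrayBuffer.
ISOLATION_HEADERS = {
    "Cross-Origin-Opener-Policy": "same-origin",
    "Cross-Origin-Embedder-Policy": "require-corp",
}

class IsolatedHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        for name, value in ISOLATION_HEADERS.items():
            self.send_header(name, value)
        super().end_headers()

def serve(port: int = 8000) -> None:
    """Serve the current directory with isolation headers; blocks forever."""
    ThreadingHTTPServer(("localhost", port), IsolatedHandler).serve_forever()
```

Calling `serve()` and opening `http://localhost:8000` gives a cross-origin-isolated page; it also demonstrates the trade-off described above, since cross-origin iframes and resources that don't opt in via CORP/CORS stop loading under these headers.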
So, on one hand, the fact that Flutter _isn't_ using `SharedArrayBuffer` is great, because you can have Flutter apps and games running on any page, using almost any server config. This could even be a competitive advantage in the casual web game space.
On the other hand, `SharedArrayBuffer` and multithreading might be a low-hanging fruit for optimization that Flutter is currently not using. There's probably a reason why both Unity and Godot require it.
// cc @zoeyfan
(For context, here are two previous issues mentioning SharedArrayBuffer: https://github.com/flutter/flutter/issues/126351 and https://github.com/flutter/flutter/issues/124334.)
### Proposal
As a developer, I'd like to have the ability to make my Flutter apps/games go faster on the web. Maybe there can be a flag that enables these technologies, so I can make the trade-off decision myself.
If the Flutter team decides not to explore `SharedArrayBuffer` et al. for the time being, then it would be good to know. At the very least, for games there's the competitive advantage of running anywhere on the web (in contrast to Unity and, [to a lesser extent](https://godotengine.org/article/progress-report-web-export-in-4-3/), Godot), and without the need to configure the server. | engine,platform-web,c: proposal,P2,e: web_skwasm,team-web,triaged-web | low | Major |
2,475,705,897 | next.js | Module incorrectly persists between hot updates (HMR) in the browser | ### Link to the code that reproduces this issue
https://github.com/mswjs/examples/pull/101
### To Reproduce
1. Clone the repo, check out the PR's branch.
2. `pnpm install`.
3. `cd examples/with-next`.
4. `pnpm dev`
5. Open the application URL in the browser.
6. Open the DevTools, select the "Console" tab.
7. Click the "Fetch movies" button on the page. See the list of fetched movies (these are coming from mocks). See _a single_ log output from MSW about the intercepted GraphQL query.
8. Go to `src/mocks/handlers.ts`. Change the payload of the `graphql.query()` handler (e.g. remove any word from a movie title).
9. Save the changes.
10. Back in the browser, click "Fetch movies" again.
11. See _two_ logs for the same GraphQL request.
### Current vs. Expected behavior
## Current behavior
The entire `MovieList` component gets re-rendered a bunch of times on hot update to `handlers.ts`. Re-rendering is expected, but it looks like Next.js re-applies event listeners to the same button multiple times.
**This is not an MSW issue**. You can log something in the `MovieList` component manually, and see that it re-renders quite a lot. I suspect during those re-renderings, the `onClick` listener gets applied more than it needs to.
The number of times the listener is excessively applied is directly proportional to the number of HMR changes issued (e.g. 1 change = 2 listeners; 2 changes = 3 listeners; etc).
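A toy model (not Next.js or React code) of the suspected mechanism: if each hot update re-runs the listener registration without disposing the previously attached listener, the counts grow exactly as observed, with 1 change producing 2 listeners:

```python
class Button:
    """Toy stand-in for a DOM node that accumulates click listeners."""
    def __init__(self):
        self.listeners = []

    def add_click_listener(self, fn):
        self.listeners.append(fn)

    def click(self):
        for fn in self.listeners:
            fn()

calls = []
button = Button()

def render():
    # Re-run on every hot update; nothing disposes the old listener.
    button.add_click_listener(lambda: calls.append("fetch movies"))

render()           # initial render
render()           # after 1 HMR change
button.click()
print(len(calls))  # 2: one click, two live listeners
```

The usual fix for this pattern is to dispose the previous listener in the update path before re-registering, which is presumably what fails somewhere in the HMR runtime here.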
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
Available memory (MB): 65536
Available CPU cores: 16
Binaries:
Node: 18.19.0
npm: 10.2.3
Yarn: 1.22.10
pnpm: 9.6.0
Relevant Packages:
next: 15.0.0-canary.121 // Latest available version is detected (15.0.0-canary.121).
eslint-config-next: N/A
react: 19.0.0-rc-14a4699f-20240725
react-dom: 19.0.0-rc-14a4699f-20240725
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Runtime,linear: next | medium | Critical |
2,475,729,374 | pytorch | `torch.quantile`: `nan` instead of `inf` | ### 🐛 Describe the bug
Hello, I stumbled upon this problem with `quantile`:
```pycon
>>> t = torch.tensor([0, torch.inf])
>>> q = torch.tensor([.4, .6])
```
```pycon
>>> for interpolation in ["linear", "lower", "higher", "nearest", "midpoint"]:
... print(interpolation, ":", t.quantile(q, interpolation=interpolation))
...
linear : tensor([inf, nan])
lower : tensor([0., 0.])
higher : tensor([inf, inf])
nearest : tensor([0., inf])
midpoint : tensor([nan, nan])
```
One could argue that the last line is "ok" if we decide the midpoint between `0` and `inf` should be `nan` (which I personally feel is wrong; I would prefer `inf`). But in any case, I think there's no logical reason for the `"linear"` interpolation to change behaviour between the `.4` and `.6` quantiles.
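The asymmetry between q=.4 and q=.6 is consistent with a "numerically stable" linear interpolation that switches formulas around weight 0.5. This is an assumption about the implementation, not a reading of the PyTorch source, but plain float arithmetic with that formula reproduces the reported outputs exactly:

```python
def stable_lerp(a: float, b: float, w: float) -> float:
    """Lerp variant that anchors on the nearer endpoint to limit rounding error."""
    if w < 0.5:
        return a + w * (b - a)
    return b - (1.0 - w) * (b - a)

inf = float("inf")
print(stable_lerp(0.0, inf, 0.4))  # inf:  0 + 0.4 * (inf - 0)
print(stable_lerp(0.0, inf, 0.6))  # nan:  inf - 0.4 * (inf - 0) = inf - inf
```

The same formula at w=0.5 (the second branch) gives `inf - inf = nan` for both quantiles, which would also explain the `"midpoint"` row.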
What do you think?
Cheers!
Elie
### Versions
```pycon
>>> torch.__version__
'2.3.0+cu121'
```
cc @albanD | triaged,module: python frontend | low | Critical |
2,475,754,251 | yt-dlp | [YouTube] Better handling of missing channel tabs, and the videos tab specifically | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA
### Provide a description that is worded well enough to be understood
If you download an empty playlist (with 0 videos), yt-dlp doesn't throw an error. I think that makes the most sense, and that behavior should also be used when a channel doesn't have the tab specified in a URL. So when trying to download `<channel URL>/shorts` on a channel that doesn't have a shorts tab, it shouldn't throw an error and should just consider it successfully downloaded, since you technically did successfully download all 0 shorts that were available. A warning could optionally be output to the console as well.
Another more specific issue is that Topic channels don't have a videos tab, but they still can have uploaded videos. In this instance, I think yt-dlp should instead try to download the channel's uploads playlist. [Here's a comment describing how a playlist of a channel's uploads can be found](https://github.com/yt-dlp/yt-dlp/issues/9673#issuecomment-2298830799).
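One widely known mapping, which may be what the linked comment describes (an assumption here), is that a channel's uploads playlist ID is the channel ID with its `UC` prefix swapped for `UU`. A sketch of that fallback, with a hypothetical helper name and a placeholder ID:

```python
def uploads_playlist_id(channel_id: str) -> str:
    """Map a YouTube channel ID (UC...) to its uploads playlist ID (UU...)."""
    if not channel_id.startswith("UC"):
        raise ValueError(f"not a channel ID: {channel_id!r}")
    return "UU" + channel_id[2:]

# Placeholder ID; real channel IDs are 'UC' followed by 22 characters.
print(uploads_playlist_id("UC" + "x" * 22))  # 'UU' + the same 22 characters
```

With something like this, the extractor could fall back to the uploads playlist when a Topic channel has no videos tab instead of erroring out.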
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
This bug is related to the semantics/behavior of yt-dlp
```
| incomplete,site-enhancement,site:youtube | low | Critical |
2,475,819,620 | PowerToys | button to close all tasks instead of doing it one by one for Locksmith | ### Description of the new feature / enhancement
In addition to closing tasks one by one, it would also be useful and practical to close all tasks at once with a single button.
### Scenario when this would be used?
Every time you want to move or delete a file, you currently have to close the associated tasks one by one, which is slow; implementing this option would make it much faster.
### Supporting information

as you can see in this case, closing one by one is tedious and annoying, you could close all tasks with a single button. | Needs-Triage | low | Major |
2,475,819,737 | electron | [Bug]: -webkit-app-region: drag has no effect | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.4.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11
### What arch are you using?
x64
### Last Known Working Electron version
30.4.0
### Expected Behavior
When creating a window with `titleBarStyle: 'hidden'` and an area having the css property `-webkit-app-region: drag;` I expect that the window is draggable on this region.
### Actual Behavior
With version 31.4.0 the window is no longer draggable within this region. It behaves as if the css property was not set.
### Testcase Gist URL
[Git repo with testcase for reproduction](https://github.com/2WeltenChris/electron-draggable-testcase) (see comment below for further information)
### Additional Information
Might be related to #41212. | platform/windows,bug :beetle:,stale,31-x-y | low | Critical |
2,475,832,569 | tensorflow | tf.sparse.reduce_sum error in JIT | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.17.0-rc1-2-gad6d8cc177d
### Custom code
Yes
### OS platform and distribution
Ubuntu Mate 22.04
### Mobile device
_No response_
### Python version
Python 3.10.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
CUDA 12.3
### GPU model and memory
_No response_
### Current behavior?
Error when using tf.sparse.reduce_sum and JIT compilation.
I have written a layer that passes all my unit tests except when I use it in a model with predict or train.
Would that make sense with the JIT compilation on? I am not fully certain how and when this works.
Anyway, I am not exactly sure why my layer code fails, but I think that this minimum reproducible example captures the issue.
### Standalone code to reproduce the issue
```shell
import numpy as np
import tensorflow as tf
batch_size = 4
input_shape = (3, 3)
indices = np.array([[0, 0, 0], [0, 0, 1], [1, 0, 0], [1, 0, 1], [2, 0, 0], [2, 0, 1], [3, 0, 0], [3, 0, 1]])
inputs = tf.sparse.SparseTensor(dense_shape=(batch_size, *input_shape),
                                indices=indices,
                                values=[9, 1, 9, 1, 9, 1, 9, 1])
@tf.function(input_signature=[tf.SparseTensorSpec(
    shape=(4, 3, 3),
    dtype=tf.dtypes.int32)], jit_compile=True)
def get_batch_sum(inputs):
    # same problem with tf.sparse.reduce_max
    return tf.sparse.reduce_sum(tf.sparse.reduce_sum(inputs, axis=1, output_is_sparse=True), axis=1)
sum_out = get_batch_sum(inputs)
print(sum_out)
```
### Relevant log output
```shell
2024-08-20 16:31:01.690779: W tensorflow/core/framework/op_kernel.cc:1840] OP_REQUIRES failed at xla_ops.cc:577 : INVALID_ARGUMENT: Detected unsupported operations when trying to compile graph __inference_get_batch_sum_15[_XlaMustCompile=true,config_proto=6001324581131673121,executor_type=11160318154034397263] on XLA_GPU_JIT: SparseReduceSumSparse (No registered 'SparseReduceSumSparse' OpKernel for XLA_GPU_JIT devices compatible with node {{node SparseReduceSumSparse}}){{node SparseReduceSumSparse}}
The op is created at:
File ".config/JetBrains/PyCharmCE2023.3/scratches/scratch_30.py", line 21, in <module>
File ".config/JetBrains/PyCharmCE2023.3/scratches/scratch_30.py", line 18, in get_batch_sum
tf2xla conversion failed while converting __inference_get_batch_sum_15[_XlaMustCompile=true,config_proto=6001324581131673121,executor_type=11160318154034397263]. Run with TF_DUMP_GRAPH_PREFIX=/path/to/dump/dir and --vmodule=xla_compiler=2 to obtain a dump of the compiled functions.
Traceback (most recent call last):
(...)
File ".config/JetBrains/PyCharmCE2023.3/scratches/scratch_30.py", line 18, in get_batch_sum
tf2xla conversion failed while converting __inference_get_batch_sum_15[_XlaMustCompile=true,config_proto=6001324581131673121,executor_type=11160318154034397263]. Run with TF_DUMP_GRAPH_PREFIX=/path/to/dump/dir and --vmodule=xla_compiler=2 to obtain a dump of the compiled functions. [Op:__inference_get_batch_sum_15]
```
| stat:awaiting tensorflower,type:bug,comp:ops,comp:xla,2.17 | medium | Critical |
2,475,833,673 | pytorch | associative_scan not composable with vmap in eager-mode | It uses torch.compile(backend=inductor) in eager-mode:
https://github.com/pytorch/pytorch/blob/36376efd06e286ee9514e374b0d5f88c206447b9/torch/_higher_order_ops/associative_scan.py#L79-L83 and we have a limitation that vmap over torch.compile doesn't work unless the backend is "eager"
cc @Chillee @samdow @kshitij12345 @janeyx99 @ydwu4 @penguinwu @bdhirsh @ezyang @chauhang | triaged,module: vmap,oncall: pt2,module: functorch,module: higher order operators,module: pt2-dispatcher | low | Minor |
2,475,846,980 | PowerToys | Numbering of windows in use always visible in keyboard shortcut guide | ### Description of the new feature / enhancement
The numbering of keyboard shortcuts for the windows in use is very useful and practical, but currently you have to keep holding the Win key for it to stay visible. It would help to make the numbering always visible, with a customizable location so it does not interfere with visibility.
### Scenario when this would be used?
For those who use the keyboard a lot to manage windows, the Alt + Tab shortcut is useful, but Win + # (the window's number) is faster once you know your workflow.
### Supporting information

Here is an example of how it could look without having to hold down the key, i.e. always in an active state | Product-Shortcut Guide,Needs-Triage | low | Minor |
2,475,851,039 | vscode | Git - VS Code doesn't show "Merge branch" commits in "Timeline" and shows incorrect diffs for subsequent commits | Type: <b>Bug</b>
Here's how I was able to reproduce this.
1. Create a basic HTML document. I left out the `<head>` element for simplicity.
```
<!DOCTYPE html>
<html lang="en">
<body>
</body>
</html>
```
2. Initialize a repository and make the initial commit.
3. Create a branch, switch to that branch, add a `<main>` element, and commit.
```
<!DOCTYPE html>
<html lang="en">
<body>
<main></main>
</body>
</html>
```
4. Switch to the main branch, add a `<header>` element, and commit.
```
<!DOCTYPE html>
<html lang="en">
<body>
<header></header>
</body>
</html>
```
5. Merge the main branch with the other branch. Resolve the conflict by putting the `<header>` above the `<main>`. Commit with the default message (`Merge branch 'BRANCH_NAME'`).
```
<!DOCTYPE html>
<html lang="en">
<body>
<header></header>
<main></main>
</body>
</html>
```
6. Delete the other branch.
7. Add a `<footer>` element and commit.
```
<!DOCTYPE html>
<html lang="en">
<body>
<header></header>
<main></main>
<footer></footer>
</body>
</html>
```
Two problems appear here.
1. VS Code doesn't show the commit you made after resolving the conflict in "Timeline" in the left sidebar, while `git log` shows it.
2. If you click on the last commit in "Timeline," VS Code says you added `<main>` and `<footer>`, while you actually only added `<footer>`. Running `git diff COMMIT_1 COMMIT_2` returns the correct diffs.
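For reference, the steps above can be scripted end to end; this sketch uses a throwaway repo with simplified file contents, an arbitrary branch name, and assumes git >= 2.28 (for `init -b`):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email you@example.com
git config user.name you
printf '<body>\n</body>\n' > index.html
git add index.html && git commit -qm 'initial'
git checkout -qb feature
printf '<body>\n<main></main>\n</body>\n' > index.html
git commit -qam 'add main'
git checkout -q main
printf '<body>\n<header></header>\n</body>\n' > index.html
git commit -qam 'add header'
git merge feature >/dev/null 2>&1 || true    # conflict expected here
printf '<body>\n<header></header>\n<main></main>\n</body>\n' > index.html
git add index.html
git commit -qm "Merge branch 'feature'" || true
git branch -q -d feature
printf '<body>\n<header></header>\n<main></main>\n<footer></footer>\n</body>\n' > index.html
git commit -qam 'add footer'
git log --oneline            # the merge commit is listed here
git diff HEAD~1 HEAD         # only <footer> was added in the last commit
```

Running this, `git log` includes the merge commit and `git diff HEAD~1 HEAD` shows only the `<footer>` addition, which suggests the Timeline display, not git itself, is at fault.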
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (8 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.80GB (1.12GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (18)</summary>
Extension|Author (truncated)|Version
---|---|---
ruff|cha|2024.42.0
vscode-eslint|dba|3.0.10
css-property-sorter|Enz|1.2.3
prettier-vscode|esb|11.0.0
vscode-htmlhint|HTM|1.0.5
rainbow-csv|mec|3.12.0
debugpy|ms-|2024.10.0
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.1
autodocstring|njp|0.6.1
sourcery|sou|1.22.0
vscode-stylelint|sty|1.4.0
errorlens|use|3.20.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-icons|vsc|12.8.0
five-server|yan|0.3.1
html-css-class-completion|Zig|1.20.0
</details>
<!-- generated by issue reporter --> | bug,git,confirmed | low | Critical |
2,475,874,609 | vscode | Unable to pass `target: Workspace` to `workbench.action.openSettings2` command | I'm like to pop open the settings editor for a specific setting, but for the current workspace. Eg, this view:

I'm using the `workbench.action.openSettings2` command (to force the UI regardless of user settings) and passing a `query`. I'm also passing a `target` (which I found in `ISettingsEditorOptions` alongside `query`), however it seems to be ignored and just opens at the user settings:
```ts
await vs.commands.executeCommand("workbench.action.openSettings2", {
  query: settingName,
  target: vs.ConfigurationTarget.Workspace,
});
```

I also tried passing `5` instead of `vs.ConfigurationTarget.Workspace,` because there's another `ConfigurationTarget` enum inside VS Code defined as:
```ts
export const enum ConfigurationTarget {
  APPLICATION = 1,
  USER,
  USER_LOCAL,
  USER_REMOTE,
  WORKSPACE, // 5
  WORKSPACE_FOLDER,
  DEFAULT,
  MEMORY
}
```
However, nothing works. I saw there was a new `vscode://settings/xxx` handler in the latest VS Code, but it doesn't seem to have any option for workspace settings.
Is this a bug? Is there a way to do the equivalent of "Open Workspace Settings", force the UI, and provide a query? | bug,settings-editor,confirmation-pending | low | Critical |
2,475,937,184 | transformers | StoppingCriteria for Repetition | ### Feature request
similar to repetition_penalty for generation config, but as a stopping criteria.
### Motivation
(small?) models tend to generated endless loops of the same few tokens, or a combination where they only increase like a single digit. (could not find any similar FRs)
I run into this quite a lot when doing evaluation runs (with greedy decoding) for code completion tasks. here is a screenshot of multiple generations saved to a file. the blocks of repetition can easily be spotted.

Having a stopping criterion that detects such behaviour would massively speed up evaluation runs, since generation could stop early and not reach the `max_new_token` set. Some parameters might be helpful to expose like number of repetitions, and n-gram overlap for example.
### Your contribution
I am happy to contribute with a PR myself, but will not find the time to do so in the next ~6-8 weeks. It doesn't look straight forward, but I am also not too familiar with the deeper parts of the generation code - so it might take me a while. | Feature request | low | Minor |
2,475,942,774 | ant-design | The step element does not highlight on scrollIntoViewOptions: { behavior: 'smooth'} | ### Reproduction link
[https://stackblitz.com/edit/antd-reproduce-5x-kxkrzf](https://stackblitz.com/edit/antd-reproduce-5x-kxkrzf?file=demo.tsx)
### Steps to reproduce
The tour component does not highlight the step that is using { behavior: 'smooth'} in scrollIntoViewOptions. It is scrolling to the element but not highlighting the step area.
### What is expected?
The step area should not have the mask over it.
### What is actually happening?
The step area has the mask over it.
| Environment | Info |
| --- | --- |
| antd | 5.20.2 |
| React | react |
| System | Ubuntu 22 |
| Browser | Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,Inactive | low | Major |
2,475,942,813 | rust | `--emit metadata` produces less error messages with some targets | Hello. I was looking into a UI test failure with the `aarch64-unknown-nto-qnx710` target. (but this is not a QNX specific issue as noted below)
The test in question is [`/tests/ui/structs-enums/enum-rec/issue-17431-6.rs`](https://github.com/rust-lang/rust/blob/a971212545766fdfe0dd68e5d968133f79944a19/tests/ui/structs-enums/enum-rec/issue-17431-6.rs). The contents of the file have been copied below for your convenience.
``` rust
//@ ignore-apple: cycle error does not appear on apple
use std::sync::Mutex;
enum Foo { X(Mutex<Option<Foo>>) }
//~^ ERROR recursive type `Foo` has infinite size
//~| ERROR cycle detected
impl Foo { fn bar(self) {} }
fn main() {}
```
The UI test expects the compiler to produce TWO error messages (see `ERROR` comments in the code) but the compiler only produces ONE when the compilation target is `aarch64-unknown-nto-qnx710`.
**NOTE** The issue can also be reproduced without a QNX toolchain on a Linux host using the `aarch64-apple-darwin` target.
I have narrowed down the issue to `compiletest`'s internal use of the `--emit metadata` flag. In other words, `rustc --emit metadata --target aarch64-apple-darwin issue-17431-6.rs` produces ONE error; whereas `rustc --target aarch64-apple-darwin issue-17431-6.rs` (no `--emit` flag) produces the TWO error messages that the UI test expects.
I'm not sure why `--emit metadata` triggers the issue but I suspect that `Mutex` having OS / platform specific implementations (i.e. `cfg`-style conditional compilation is involved) is part of the problem. After all, the error that's not reported is about a cycle being found when computing the layout of `Foo`; the error includes `note`s that list all the types beneath `Foo`: that is `Mutex`, `UnsafeCell`, etc.
A workaround that fixes the UI test (when target = QNX) is to replace the `Mutex` type with `UnsafeCell`, that is
``` rust
use std::cell::UnsafeCell;
enum Foo { X(UnsafeCell<Option<Foo>>) }
```
This change may let you drop the `ignore-apple` compiletest attribute from the test but I haven't tested this change with an apple target.
Again, I suspect that this modified version does not run into the problem because `UnsafeCell` does NOT have a target specific implementation / layout.
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (636d7ff91 2024-08-19)
binary: rustc
commit-hash: 636d7ff91b9847d6d43c7bbe023568828f6e3246
commit-date: 2024-08-19
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
``` | A-testsuite,A-metadata,T-compiler,C-bug,A-atomic | low | Critical |
2,475,960,013 | vscode | Explorer not graying out .gitignore'd files if ignored folders (that are symlinks) are open in the workspace | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: Windows 11 23H2
Steps to Reproduce:
1. Open a workspace that has a gitignore'd folder expanded that is also a symlink
2. .gitignore graying out will not apply
3. If you close the folder and re-open the workspace, graying out is applied
| bug,file-explorer | low | Critical |
2,476,090,707 | kubernetes | Default standard request headers for aggregation | > In the future (kube-apiserver 1.32+) we can consider making kube-apiserver also include the standard username/groups/extra/uid headers the aggregator uses (which are not configurable) in the configmap it publishes containing auth config, so that aggregation works even if kube-apiserver only set non-standard requestheader flag options
_Originally posted by @liggitt in https://github.com/kubernetes/kubernetes/pull/115834#discussion_r1595742906_
Note: the earliest is now v1.33 because #115834 will not merge until v1.32.
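For reference, a sketch of the conventional request-header settings the aggregation path relies on; the header names below are the common defaults, and the exact flag values on any given cluster may differ:

```
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
```

The idea above is that kube-apiserver would include these standard names (and a UID header, where supported) in the published auth configmap even when operators configure non-standard requestheader flag values.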
| sig/auth,triage/accepted | low | Minor |
2,476,097,819 | godot | Screen-space effect upscaling doesn't work properly (SSAO/GI) | ### Tested versions
From 4.0
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 32.0.15.5599) - Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (16 Threads)
### Issue description
Some post-processing effects don't upscale properly when rendered at half resolution.
For global illumination (voxel GI/SDFGI) it looks like upscaling doesn't work at all and results in low resolution looking image.

For SSAO there are pixelated artifacts that result in weirdly doubled offset pixels.

### Steps to reproduce
Add WorldEnvironment node to the scene
Enable SSAO and SDFGI
### Minimal reproduction project (MRP)
[screen-effect-upscale-bug.zip](https://github.com/user-attachments/files/16679060/screen-effect-upscale-bug.zip)
| bug,topic:rendering,confirmed,topic:3d | low | Critical |
2,476,131,700 | stable-diffusion-webui | [Bug]: API and Render Progress | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
When using the API with multiple clients—potentially hundreds—all having direct access, it becomes impossible to identify which progress updates correspond to which jobs. This makes it difficult to accurately display each user's position in the queue and to show the correct render progress for their images. The current API's progress reporting is incomplete for this use case.
I suggest adding a queue endpoint to the API that shows pending or waiting jobs. Additionally, the API should allow for the inclusion of a custom field (e.g., client_job_id_info) where the client can add random text. This would enable each client to attach a custom ID or session identifier, making it easier to track if their job is currently processing.
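To make the proposal concrete, here is a sketch of what a job-aware queue response could look like if the client attached a custom field to its txt2img payload; every field and endpoint name here (`client_job_id_info`, a `/queue` endpoint, `pending`, etc.) is hypothetical, not part of the existing API:

```json
{
  "current": { "client_job_id_info": "session-42/job-7", "progress": 0.35 },
  "pending": [
    { "client_job_id_info": "session-42/job-8", "position": 1 },
    { "client_job_id_info": "session-17/job-2", "position": 2 }
  ]
}
```

With the custom field echoed back like this, each client could match progress updates to its own jobs without coordinating with other clients.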
### Steps to reproduce the problem
Kick off 2 jobs from the same client at the same time; how do I know which was picked up first and which is currently processing?
### What should have happened?
See above
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
N/A
### Console logs
```Shell
N/A
```
### Additional information
_No response_ | bug-report | low | Critical |
2,476,164,310 | react | [DevTools Bug]: Inconsistent behavior; React dev tools does not recognize a react website; 'service worker(inactive)' | ### Website or app
https://healthbridge-silk.vercel.app/
### Repro steps
I have two profiles on my Chrome browser: one personal, the other work related.
1. Installed dev tool extension about a few months back on work profile.
2. Installed the same extension on my personal profile.
3. It works flawlessly on my work profile, but on my personal profile it shows the error "service worker(inactive)".
On my personal profile it does not recognize any React websites, and clicking the extension gives "Page does not seem to be using react". It works flawlessly on my work profile.
Image from my personal account:

Image from my work account:

EDIT: I checked the extension errors in my personal account, and here is a screenshot of them:
There are no stack traces for any of the errors.

### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,476,168,103 | flutter | [webview_flutter] Support background color on macOS | Split from https://github.com/flutter/flutter/issues/41725 as the initial implementation as landed/published does not support this feature.
`NSView` doesn't have the same color and opacity controls that UIView does, so setting the background color will need alternate implementation (likely involving making the webview layer-backed). | platform-mac,p: webview,package,P2,team-macos,triaged-macos | low | Minor |
2,476,168,172 | flutter | [webview_flutter] Support getting and setting scroll position on macOS | Split from https://github.com/flutter/flutter/issues/41725 as the initial implementation as landed/published does not support this feature.
WKWebView doesn't expose a scrollView on macOS, so the code for getting and setting scroll position doesn't work. Currently they throw UnimplementedError; we should look into implementing those with JS calls on macOS as a polyfill instead for consistency of API support. | platform-mac,p: webview,package,P2,team-macos,triaged-macos | low | Critical |
2,476,169,712 | ollama | Model Library: Ability to update model manifest via editor | # TLDR
Direct updates of small, text-based files like parameters, template, license and system message **within the Ollama library** to avoid the time-consuming and bandwidth-heavy process of pulling, updating and pushing all tags of a model. This would simplify updates without affecting already existing possibilities with ollama-push.
# Details
Currently, when changes to a manifest/modelfile need to be made (e.g. template or parameters), all `N` tags of a model need to be pulled, updated and then pushed `N` times respectively.
This can be quite time consuming (especially for 70B+ models and with many quantizations) as well as somewhat wasteful regarding bandwidth of the Ollama library - especially since things like the template are often shared between all tags, so just one single object would need to be updated.
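For illustration, the current round-trip for a template-only change looks roughly like this, repeated once per tag (model name and tag are placeholders; the commands are a sketch of the usual `pull`/`show --modelfile`/`create -f`/`push` flow, not runnable here without an Ollama install):

```shell
# repeated for each of the N tags of the model
ollama pull myuser/mymodel:7b-q4_K_M
ollama show myuser/mymodel:7b-q4_K_M --modelfile > Modelfile
# ...edit the TEMPLATE / PARAMETER lines in Modelfile...
ollama create myuser/mymodel:7b-q4_K_M -f Modelfile
ollama push myuser/mymodel:7b-q4_K_M
```

A library-side edit of the shared template object would replace all N of these round-trips with one small update.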
Perhaps updating purely text-based, small files/objects like parameters, license, system message and template could be done directly in the Ollama library, like the model readme can (by the author). That would leave the fine-grained control via ollama-push untouched and not complicate anything there.
That's just what came to mind first though, there might be other sensible approaches as well.
Since the library is not open source as far as I'm aware, I can't help with contributing there myself, so here's a feature request to hopefully keep an eye on this topic. Many thanks! | feature request | low | Minor |
2,476,169,948 | flutter | [webview_flutter] Support scroll listener on macOS | Split from https://github.com/flutter/flutter/issues/41725 as the initial implementation as landed/published does not support this feature.
See also https://github.com/flutter/flutter/issues/153774 about scrolling itself.
The scroll position listener doesn't work on macOS. A JS polyfill would be more dangerous than for getting and setting the scroll position, since a listener would require injecting a script into every page on load, and could thus affect page behavior. | platform-mac,p: webview,package,P2,team-macos,triaged-macos | low | Minor |
2,476,194,038 | godot | Physics Interpolation affects Camera2D position smoothing on higher framerates | ### Tested versions
- Reproducible in 4.2.stable, 4.3.stable
### System information
ArchLinux 6.10.2-arch1-1.1-g14 - KDE Plasma 6.1.4 - Godot 4.3.stable.arch_linux
### Issue description
When physics interpolation is activated, the camera smoothing actually moves faster than when it's deactivated, even with the same smoothing speed setting.
### Steps to reproduce
1. Have a monitor with a refresh rate higher than 60Hz
2. Set up a 2D platformer, with a Camera2D that has Position Smoothing turned on
3. Test the camera's physics
4. Go to settings, turn on physics interpolation
5. Test again
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/16679694/test.zip) | bug,confirmed,topic:physics,topic:2d | low | Major |
2,476,194,172 | next.js | MQTT.JS example: the useEffect return cause client closure | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
the useEffect return cause client closure and so the mqtt client is not usable in the other pages.
return () => {
  if (client) {
    topicHandlers.forEach((th) => {
      client.unsubscribe(th.topic);
    });
    client.end();
  }
};
```
### Which example does this report relate to?
mqtt.js
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
The useEffect cleanup (the returned function) causes client closure, so the mqtt client is not usable on the other pages.
### Expected Behavior
The useEffect cleanup (the returned function) causes client closure, so the mqtt client is not usable on the other pages.
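One way to address this, sketched below with hypothetical names (`getClient`, `cleanupTopics`) rather than the example's exact code: share a single client for the app's lifetime and, in the effect cleanup, only unsubscribe the page's topics instead of calling `client.end()`:

```javascript
// Module-level singleton: the connection outlives any one page/effect.
let sharedClient = null;

function getClient(connect) {
  // connect() would be () => mqtt.connect(url) in the real app
  if (!sharedClient) sharedClient = connect();
  return sharedClient;
}

function cleanupTopics(client, topicHandlers) {
  // Per-page cleanup: drop this page's subscriptions, keep the client open.
  topicHandlers.forEach((th) => client.unsubscribe(th.topic));
}
```

`client.end()` would then only be called once, e.g. on app teardown, rather than in every page's effect cleanup.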
### To Reproduce
Use the example | examples | low | Critical |
2,476,216,778 | flutter | When discovering iOS devices: `ProcessException: Resource temporarily unavailable` | As of today (8/20), on 3.24.0: 221 reports from 102 unique clients (top 20ish crasher)
From flutter commands (looking at first 20 reports): `flutter daemon`
```
ProcessException: Resource temporarily unavailable Command: /usr/bin/xcrun simctl list devices booted iOS --json
at _ProcessImpl._start(process_patch.dart:402)
at Process.start(process_patch.dart:38)
at _runNonInteractiveProcess(process_patch.dart:579)
at Process.run(process_patch.dart:49)
at LocalProcessManager.run(local_process_manager.dart:72)
at ErrorHandlingProcessManager.run.<anonymous closure>(error_handling_io.dart:669)
at _run(error_handling_io.dart:564)
at ErrorHandlingProcessManager.run(error_handling_io.dart:668)
at _DefaultProcessUtils.run(process.dart:312)
at SimControl._listBootedDevices(simulators.dart:162)
at SimControl.getConnectedDevices(simulators.dart:187)
at IOSSimulatorUtils.getAttachedDevices(simulators.dart:75)
at IOSSimulators.pollingGetDevices(simulators.dart:49)
at PollingDeviceDiscovery._initTimer.<anonymous closure>(device.dart:498)
at _rootRun(zone.dart:1391)
at _CustomZone.run(zone.dart:1301)
at _CustomZone.runGuarded(zone.dart:1209)
at _CustomZone.bindCallbackGuarded.<anonymous closure>(zone.dart:1249)
at _rootRun(zone.dart:1399)
at _CustomZone.run(zone.dart:1301)
at _CustomZone.bindCallback.<anonymous closure>(zone.dart:1233)
at Timer._createTimer.<anonymous closure>(timer_patch.dart:18)
at _Timer._runTimers(timer_impl.dart:398)
at _Timer._handleMessage(timer_impl.dart:429)
at _RawReceivePort._handleMessage(isolate_patch.dart:184)
``` | c: crash,good first issue,P2,team-tool,triaged-tool | low | Critical |
2,476,231,329 | rust | Overly conservative async capture analysis when values are borrowed | There's probably a bug filed already but [this code](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=41fec3d02ac841085788c4e058765e7b) does not compile, though it really ought to:
```rust
use std::sync::Mutex;
async fn foo(m: &Mutex<u32>) {
    let lock = m.lock().unwrap();
    if condition(&lock) {
        drop(lock);
        return bar().await;
    }
    drop(lock);
    return bar().await;
}
async fn bar() { }
fn is_send<T: Send>(t: T) { }
fn condition(x: &u32) -> bool {
    false
}
fn main() {
    let m = Mutex::new(22);
    is_send(foo(&m));
}
```
I believe the problem is specific to the `&lock`, which causes the capture analysis to get nervous -- even though `lock` is dropped. This is distilled from real-world code within Amazon.
Error you get today:
```
error: future cannot be sent between threads safely
--> src/main.rs:26:13
|
26 | is_send(foo(&m));
| ^^^^^^^ future returned by `foo` is not `Send`
|
= help: within `impl Future<Output = ()>`, the trait `Send` is not implemented for `MutexGuard<'_, u32>`, which is required by `impl Future<Output = ()>: Send`
note: future is not `Send` as this value is used across an await
--> src/main.rs:9:22
|
6 | let lock = m.lock().unwrap();
| ---- has type `MutexGuard<'_, u32>` which is not `Send`
...
9 | return bar().await;
| ^^^^^ await occurs here, with `lock` maybe used later
note: required by a bound in `is_send`
--> src/main.rs:18:15
|
18 | fn is_send<T: Send>(t: T) { }
| ^^^^ required by this bound in `is_send`
```
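For what it's worth, a common user-side workaround (a sketch, not a fix for the capture analysis itself) is to confine the guard to an inner scope so it is provably dead before any await point; the `*x > 10` condition is illustrative:

```rust
use std::sync::Mutex;

async fn foo(m: &Mutex<u32>) {
    // Evaluate the condition in an inner scope so the MutexGuard is dropped
    // here, before the first .await.
    let cond = {
        let lock = m.lock().unwrap();
        condition(&lock)
    };
    if cond {
        return bar().await;
    }
    bar().await
}

async fn bar() {}

fn condition(x: &u32) -> bool {
    *x > 10
}

fn is_send<T: Send>(_t: T) {}
```

With this shape, `is_send(foo(&m))` compiles because the future never holds the guard across an await.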
I've nominated this for async just to get some eyes on it. I'd be interested to discuss fixes; I have a few thoughts, though I'd have to look at the code too. | WG-async,I-async-nominated | low | Critical |
2,476,359,622 | vscode | [Accessibility] Support audio cue for commented range | Type: <b>Feature Request</b>
CC @meganrogge
I (and other blind programmers in a professional setting) do code-review a lot and I encounter the need where I have to navigate the source code and commented ranges.
Sighted people can visually see from where to where the comments are applied in their current editor. Following the successful implementation of other audio cues, such as error, warning, etc., could you please consider supporting commented ranges via audio cue?
## Some Considerations
1. This audio cue needs to be designed in the least disruptive manner because sometimes users may mix multiple audio cues simultaneously. White noise might be a possible option.
1. This audio cue should be disabled by default to avoid any confusion with the existing audio cues
1. Info on this audio cue could be included in the Accessible Help for the comment thread for its discoverability
VS Code version: Code - Insiders 1.93.0-insider (e2b54301a5745870f6b95d81c91fb3e9557d4f08, 2024-08-20T08:04:15.567Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | accessibility,under-discussion,comments | low | Critical |
2,476,383,540 | godot | RootMotionTrack randomly modifying animated model's root bone pose in Editor | ### Tested versions
Godot 4.3, 4.2.2, 4.2.1
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated GeForce GTX 1060 6GB - Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz (12 Threads)
### Issue description
I have a 3D character model that uses Root Motion in its animation. At random, the model will be offset by the Root Motion and end up floating in the air or standing several dozen feet away from its actual origin point.
To fix it, I have to:
-Go to the Animation Player
-Unassign the Root Motion Track
-Play animations where the model is at the correct point of origin- sometimes going to a different animation and then back again
-Then setting the Root Motion Track again
Information:
-The character model is using a saved Animation Library, derived from its own Animations
-The character uses several Animation Trees that are linked to its Animation Player. Only one is Active at a time.
-Each Animation Tree has the same Root Motion Track selected, matching the Animation Player
-Toggling "Show Rest Only" on the Skeleton3D works as it should, and turning it off returns the character to the incorrect position
-Neither playing any given animation nor any amount of scripted movement has an effect on the offset
-Animations played through the Animation Player have no effect on the offset
-The issue never triggers in-game, only in the editor. The issue remains in effect when going in-game
### Steps to reproduce
I can't determine the circumstances under which it keeps recurring, but it can be reproduced manually:
-Import an animated model with Root Motion in the animations
-Leave the Root Motion Track empty
-Play an animation that includes Root Motion
-Assign the Root Motion Track when the model is moved away from its origin point in the animation
But the issue crops up randomly. I haven't been able to catch when it happens, so I can't figure out why.
### Minimal reproduction project (MRP)
N/A | bug,needs testing,topic:animation,topic:3d | low | Critical |
2,476,410,072 | ollama | Microsoft Phi-3.5 models | - [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
- [Phi-3.5-MoE-instruct](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct)
- [Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) | model request | high | Critical |
2,476,455,298 | pytorch | vmap tests aren't erroring on fallback usage | ### 🐛 Describe the bug
When writing tests, I noticed that even if the test triggered the fallback (and I didn't add any annotations), it still wouldn't error.
cc: @zou3519 @guilhermeleobas
### Versions
N/A
cc @zou3519 @samdow @kshitij12345 @janeyx99 | triaged,module: vmap,module: functorch | low | Critical |
2,476,492,698 | vscode | Transparent image grid isn't intuitive | I created this tiny file:

When I zoom in and out it looks like there are dark squares on the image since the grid scales with zoom:

It's much more clear if the transparency grid doesn't scale like this:

| bug,help wanted,image-preview | low | Minor |
2,476,496,069 | go | x/tools/gopls: make docs discoverable | A Google search for "[gopls features](https://github.com/golang/tools/blob/master/gopls/doc/features/README.md)" or "[Gopls: Code transformation features](https://github.com/golang/tools/blob/master/gopls/doc/features/transformation.md)" still after several weeks doesn't turn up gopls' lovingly rewritten documentation. We should fix that.
Related: @hyangah will add a feature to VS Code to make the "Browse gopls feature documentation" command more prominent.
- https://github.com/golang/vscode-go/issues/3498
Also, @hyangah wonders whether the gopls markdown documentation should be canonically hosted not at GitHub but in pkg.go.dev. (I vaguely recall @rsc saying there was some reason we should not rely on pkgsite's markdown feature, but I forget what it was.) @hyangah suggested alternatively hosting them at go.dev/something.
| Documentation,gopls,Tools | low | Minor |
2,476,541,120 | rust | Tracking Issue for `lazy_get` | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(lazy_get)]`
This is a tracking issue for `LazyCell/Lock::get[_mut]()`, allowing you to extract a reference from a `Lazy` only if it is initialized, approved in ACP https://github.com/rust-lang/libs-team/issues/429.
<!--
Include a short description of the feature.
-->
### Public API
```rust
impl<T, F> core::cell::LazyCell<T, F> {
pub fn get(this: &Self) -> Option<&T>;
pub fn get_mut(this: &mut Self) -> Option<&mut T>;
pub fn force_mut(this: &mut Self) -> &mut T;
}
impl<T, F> std::sync::LazyLock<T, F> {
pub fn get(this: &Self) -> Option<&T>;
pub fn get_mut(this: &mut Self) -> Option<&mut T>;
pub fn force_mut(this: &mut Self) -> &mut T;
}
```
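As a rough, hypothetical Python analogue of the proposed semantics (not the Rust API itself): `get` returns the value only if the lazy has already been forced, while forcing initializes on first access.

```python
class Lazy:
    """Minimal lazy cell: get() peeks without initializing, force() initializes."""
    _UNSET = object()

    def __init__(self, factory):
        self._factory = factory
        self._value = Lazy._UNSET

    def get(self):
        # Analogue of LazyCell::get: a value only if already initialized.
        return None if self._value is Lazy._UNSET else self._value

    def force(self):
        # Analogue of forcing the lazy: run the factory on first access.
        if self._value is Lazy._UNSET:
            self._value = self._factory()
        return self._value
```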
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] ACP: https://github.com/rust-lang/libs-team/issues/429
- Implementation:
- [ ] https://github.com/rust-lang/rust/pull/129334
- [ ] https://github.com/rust-lang/rust/pull/130476
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,476,555,817 | pytorch | FSDP activation CPU offload memory not cleaned up | ### 🐛 Describe the bug
When doing cpu offload for activations in FSDP, I expect that memory to be cleaned up after each backwards pass since the activations are no longer used. However, I'm seeing CPU memory remain high after training when offloading activations compared to no activation offload where the CPU memory is about the same before and after training.
On a 1.8 TB RAM node, I'm seeing these numbers:
No CPU offload
- 3% before training batch
- 3.8% after training batch
CPU offload
- 3.2% before training batch
- 6.3% after training batch
```python
from typing import Union
import logging
import psutil
import gc
import torch
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, offload_wrapper
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import ShardingStrategy
from torch.distributed.fsdp import CPUOffload
from composer.utils import dist, get_device
log = logging.getLogger(__name__)
PYTHON_LOG_LEVEL = 'DEBUG'
def _initialize_dist_with_barrier(dist_timeout: Union[int, float]):
"""Initialize distributed and test setup with a barrier.
Args:
dist_timeout (Union[int, float]): Timeout for initializing the process group
"""
log.debug('Initializing dist with device...')
dist.initialize_dist(get_device(None), timeout=dist_timeout)
log.debug('Testing barrier with device...')
dist.barrier()
log.debug('Barrier test passed with device.')
class TinyModel(torch.nn.Module):
def __init__(self):
super(TinyModel, self).__init__()
self.activation = torch.nn.ReLU()
self.linear1 = torch.nn.Linear(10000, 20000)
self.linear2 = torch.nn.Linear(20000, 20000)
self.linear3 = torch.nn.Linear(20000, 20000)
self.linear4 = torch.nn.Linear(20000, 20000)
self.linear5 = torch.nn.Linear(20000, 10)
self.softmax = torch.nn.Softmax()
def forward(self, x):
x = self.linear1(x)
x = self.activation(x)
x = self.linear2(x)
x = self.activation(x)
x = self.linear3(x)
x = self.activation(x)
x = self.linear4(x)
x = self.activation(x)
x = self.linear5(x)
x = self.softmax(x)
return x
logging.basicConfig(
# Example of format string
# 2022-06-29 11:22:26,152: rank0[822018][MainThread]: INFO: Message here
format=
f'%(asctime)s: rank{dist.get_global_rank()}[%(process)d][%(threadName)s]: %(levelname)s: %(name)s: %(message)s',
)
logging.getLogger('llmfoundry').setLevel(
PYTHON_LOG_LEVEL.upper(),
) # Foundry module
logging.getLogger(__name__).setLevel(
PYTHON_LOG_LEVEL.upper(),
) # Train script
_initialize_dist_with_barrier(dist_timeout=3600)
num_samples = 1000
X = torch.randn(num_samples, 10000)
y = torch.randint(0, 10, (num_samples,))
# Create a TensorDataset and DataLoader
dataset = torch.utils.data.TensorDataset(X, y)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
model = TinyModel().cuda()
# apply_activation_checkpointing(model, offload_wrapper)
model = FSDP(model, sharding_strategy=ShardingStrategy.FULL_SHARD, cpu_offload=CPUOffload(offload_params=True))
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
print('Number of parameters: ')
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
print(psutil.virtual_memory())
# Training loop
num_epochs = 1
for epoch in range(num_epochs):
for batch_idx, (inputs, targets) in enumerate(data_loader):
# Move inputs and targets to the device
inputs = inputs.cuda()
targets = targets.cuda()
# Zero the parameter gradients
optimizer.zero_grad()
# Forward pass
outputs = model(inputs)
# Compute loss
loss = criterion(outputs, targets)
# Backward pass and optimize
loss.backward()
optimizer.step()
if batch_idx % 10 == 0:
print(f'Epoch [{epoch+1}/{num_epochs}], Batch [{batch_idx+1}/{len(data_loader)}], Loss: {loss.item():.4f}')
print(psutil.virtual_memory())
torch.cuda.empty_cache()
gc.collect()
print(psutil.virtual_memory())
break
print(psutil.virtual_memory())
del model
del inputs
del targets
del optimizer
del outputs
del loss
torch.cuda.empty_cache()
gc.collect()
print(psutil.virtual_memory())
```
### Versions
Collecting environment information...
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
CPU MHz: 2000.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.18.1
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.3.1+cu121
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.18.1+cu121
[pip3] triton==2.3.1
[conda] Could not collect
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: memory usage,triaged | low | Critical |
2,476,556,068 | flutter | [web] support semantic-only rendering mode | Not all browser environments fully support WebGL. For example, [Clarity](https://clarity.microsoft.com/) (described [here](https://github.com/flutter/flutter/issues/145954#issuecomment-2299704046)) can run web apps but WebGL content is rendered as black pixels. Similar limitations apply to some web/ads crawlers.
It should be relatively easy for Flutter web to have a mode that makes rendering into WebGL a noop. The semantics tree can still be fully rendered, and we already have a mode where we make the semantics tree visible for debugging. This mode could be sufficient for use-cases such as heat maps and indexing, while reducing the requirement that the browser provide a full-featured WebGL implementation.
| engine,platform-web,c: rendering,P2,team-web,triaged-web | low | Critical |
2,476,563,396 | ui | [bug]: AreaChart overflow to negative for curving. | ### Describe the bug
The AreaChart component overflows into negative values to make a 'curve'. I do not think this is intended behavior.
### Affected component/components
AreaChart
### How to reproduce
1. Add default AreaChart component
2. Set the chartData's last 10-15 values to zero (both mobile & desktop)
3. Open the chart
4. Change the range of it in select to "Last 7 days" and back to "Last 3 Months"

### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
-no logs-
```
### System Info
```bash
Firefox 129.0.1
Ubuntu 22.04
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,476,573,494 | godot | Can't use code_completion_prefixes in CodeEdit | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6070) - AMD Ryzen 5 2600X Six-Core Processor (12 Threads)
### Issue description
Unless I'm misunderstanding something, pressing the Add Element button or resizing the array in any way inside the inspector causes this error:
`Code completion prefix cannot be empty.`
You can also not append to the array in code, however you can set it directly.
### Steps to reproduce
Just add a CodeEdit and try adding a prefix, either by resizing the array or by pressing Add Element; it does not work.

Appending to it in code and then printing it doesn't seem to do anything either.
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Critical |
2,476,581,594 | flutter | Update iOS add-to-app docs for SwiftPM | null | platform-ios,a: existing-apps,P2,team-ios,triaged-ios | low | Minor |
2,476,582,274 | TypeScript | Conditional (using extends) parametrised type inference seems to be inconsistent | ### 🔎 Search Terms
`repo:microsoft/inconsistent typing with extends and parametrised types`
`repo:microsoft/TypeScript inconsistent typing with extends`
`repo:microsoft/TypeScript inconsistent conditional typing`
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed all the FAQ for entries.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/MYGwhgzhAEDCIFcB2BrAPAOWgUwB4BdskATGCfAJwEskBzAPmgG8BfAKFEhgGUAHbYFTAh4ydFjyESZSjQY4CRUnESpMjJm2jQI-QcIBiVbCGLQAvNADkAMzEBPG8dNWA3G3Zt89-tD4ChEVUUAFFFaVFUCz89QMj0MCR7RkklGHi0ROToAH5oSgRsaAAuaBthCGxXaAB6GvyKQoA6Lx8i+LCpUn99ILFojKyU8OUeuODMpMY8gqLS8pBK6rqG5rY2PF4Aewp8fLa-LYBbbAyAFQUumCskHaPhK2gAH2tdAIeAGmgJEZlqOkYlguqWk1luFHuICsWlyKjE6hhpWBv1esQeMLyY2EGQw9ER0CQ2AAbtgKO5Wr4sSBOmkMHdhNErG9eo8QcobvSobDZiUyhUqrV6gtKi1vL46RDhDTpFTGeDIayUUy0VyZo05nzFgKVsLsC02CsAKqVaBbGz7XwAC1JRXwW2gACMisBjrwwPgqA6QEUAO5UfCW6C0IikqjAaBuihgI4QFqbHZ7MVFAAKAEY0MirmDOY8Xsr3lDAcwYaAxKVuMdThMzl9bFstlZ6O4WOSk9A0-KGZY02gOZLC8t6m2O5zokxoKXUKUMgAiGz1mdN6DsYep5mBaI9-MspcrVfrrvMCfBcsq2fzraL6org5pqnS0gSyGbtcqy5pdupzsgbnq3m6wdVmwCkU1TJ9hAAES2bAIDpfAHwgWVuy-Uc2RgNMDx-NVCn-flAN1EDP3vX5wJAOkfQMfkX17TDG3fUEt2-OjsI1ADBU1E0gA
### 💻 Code
```ts
class Clunk<N extends string> {}
class SpecialClunk<N extends string> extends Clunk<N> {
specialField = 'funkyfield';
}
type SpecialClunkExtendsClunk = SpecialClunk<any> extends Clunk<any> ? true : false; // true.
type ClunkExtendsSpecialClunk = Clunk<any> extends SpecialClunk<any> ? true : false; // true.
export type SomeClunk<T extends 'normal' | 'special', N extends string> = T extends 'normal'
? Clunk<N>
: T extends 'special'
? SpecialClunk<N>
: never;
type SpecialExtendsNormal = 'special' extends 'normal' ? true : false; // false.
type NormalExtendsSpecial = 'normal' extends 'special' ? true : false; // false.
// Use of type here to be compatible with generic params.
export type P1<T extends 'normal' | 'special'> = {
clunk: SomeClunk<T, 'foo'>;
};
type P1normal = P1<'normal'>; // type P1normal = { clunk: Clunk<"foo">; }
type P1special = P1<'special'>; // type P1special = { clunk: SpecialClunk<"foo">; }
type P1NormalDoesNotExtendsSpecial = P1normal extends P1special ? true : false; // false
type P1SpecialExtendsNormal = P1special extends P1normal ? true : false; // true
// Below is false, but expected to be true, like P1SpecialExtendsNormal *** BUG?
type P1SpecialExtendsNormalNowFalse = P1<'special'> extends P1<'normal'> ? true : false; // false
```
### 🙁 Actual behavior
`P1SpecialExtendsNormal` is `true`
`P1SpecialExtendsNormalNowFalse` is `false`
### 🙂 Expected behavior
`P1SpecialExtendsNormal` is `true`
`P1SpecialExtendsNormalNowFalse` is `true`
### Additional information about the issue
I can't tell if this might be related to https://github.com/microsoft/TypeScript/issues/44945
| Needs Investigation | low | Critical |
2,476,584,203 | go | runtime:mayMoreStackPreempt: TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1 failures | ```
#!watchflakes
default <- pkg == "runtime:mayMoreStackPreempt" && test == "TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8739063751660286673)):
=== RUN TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1
metrics_test.go:1065: lock contention growth in runtime/pprof's view (0.036582s)
metrics_test.go:1066: lock contention growth in runtime/metrics' view (0.037450s)
metrics_test.go:1104: stack [runtime.unlock runtime_test.TestRuntimeLockMetricsAndProfile.func5.1 runtime_test.(*contentionWorker).run] has samples totaling n=199 value=35811485
metrics_test.go:1192: mutex profile reported contention count different from the known true count (199 != 200)
--- FAIL: TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1 (0.06s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,476,603,960 | godot | Editor freeze when right clicking on second display with Compatibility rendering mode | ### Tested versions
- Originally discovered in v4.3.stable.official [77dcf97d8]
- Earliest reproducible version I could find: v4.0.beta9.official [e780dc332]. Although I didn't check every version in between, it was reproducible in every version I did check.
- Although the bug is not fully reproducible in these versions, opening a newer project configured with GL Compatibility in v4.0.beta2.official [f8745f2f7] or newer also can cause the bug.
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6081) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description
If my Godot project's rendering mode is set to Compatibility, the editor freezes if a right click menu is opened on a display other than my primary.
This doesn't occur on Forward+ or mobile, and by my investigation it appears this bug has existed as long as the Compatibility rendering mode has.
### Steps to reproduce
1. Launch Godot
2. Create a new project
3. Set the rendering mode to Compatibility
4. Drag the editor window into a monitor other than your primary
5. Open a right click menu anywhere on that other display
6. Godot should freeze
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:editor,topic:porting,needs testing | low | Critical |
2,476,609,904 | tauri | [bug] Error 71 (Protocol error) dispatching to Wayland display. | ### Describe the bug
Application won't start anymore.
Error:
`Gdk-Message: 23:55:52.007: Error 71 (Protocol error) dispatching to Wayland display.`
### Reproduction
Execute `yarn tauri dev` under Linux and try to start your Application
### Expected behavior
Normal starting without errors
### Full `tauri info` output
```text
> tauri info
[✔] Environment
- OS: Linux Rolling Release X64
✔ webkit2gtk-4.0: 2.44.3
✔ rsvg2: 2.58.3
✔ rustc: 1.80.0 (051478957 2024-07-21)
✔ cargo: 1.80.0 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.6.0
- yarn: 1.22.22
- npm: 10.8.2
[-] Packages
- tauri [RUST]: 1.7.1
- tauri-build [RUST]: 1.5.3
- wry [RUST]: 0.24.10
- tao [RUST]: 0.16.9
- @tauri-apps/api [NPM]: 1.6.0
- @tauri-apps/cli [NPM]: 1.6.0
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
I haven't changed anything in the project since the last time and everything still worked. | type: bug,status: upstream,platform: Linux | medium | Critical |
2,476,613,420 | flutter | Migrate release engine binary to a new bucket | Related to internal b/340303879
Currently, when we create a new release candidate branch, fresh engine builds are scheduled via the `$OS engine_release_builder` targets. These will upload their binaries to the same GCS location as the main branch builds, thus overwriting pre-existing binaries. This causes correctness issues, and also makes whether or not you are downloading securely built binaries dependent on WHEN you downloaded them (or cached them).
We should:
- [ ] Update the release recipes to upload to both the original GCS namespace AND a new secure release namespace
- [ ] Validate the internal roll can consume these
- [ ] Update the release process to check in an engine.realm file to the flutter/flutter branch so that open source users can consume the binaries from the new secure namespace
- [ ] Update the release recipes to stop uploading to the original GCS namespace | team-infra,P1,infra: security,triaged-infra | medium | Minor |
2,476,696,303 | TypeScript | Design Meeting Notes, 8/20/2024 |
# Reconsidering a flag to rewrite `.ts` to `.js` in emit in light of TS support in Node.js
https://github.com/microsoft/TypeScript/issues/59597#issuecomment-2287466184
* Traditionally, we've said we will never rewrite any module specifier.
* What's changed? `--experimental-transform-types` and `--experimental-strip-types` are now in Node.js.
* Here, you must reference your files with an explicit `.ts`-like extension.
* Today, we have `allowImportingTsExtensions` to allow this, but you must have `noEmit` turned on.
* People want a mode where they can emit to `.js` but still reference `.ts` files.
* Why?
* Maybe you were using these transform types for development, but need to emit to JS for production.
* Maybe you need to publish to npm (can't load `.ts` from `node_modules`).
* So we are reconsidering TS never rewriting paths.
* What are the rules for when TypeScript can do this?
* Paths must begin with a `.` or `..`.
* The path cannot be a declaration file path (i.e. one cannot explicitly reference a `.d.ts` file).
* The path must be a `.ts`, `.tsx`, `.cts`, or `.mts` file.
* `.tsx`
* We will rewrite `.tsx` to `.js` if `--jsx` is `react`, `react-jsx`, or `react-jsxdev`.
* We will rewrite `.tsx` to `.jsx` if `--jsx` is `react-native`.
* Error checking.
* We will do a naive transform, but we will need to see if something will improperly be transformed.
* For example, `import foo = require("./foo.ts");` will be transformed to `import foo = require("./foo.js");`, which is incorrect if the original file was `foo.ts.ts`.
* Can we do something about incorrect`import` and `export` subpaths?
* We don't rewrite subpaths, but people can write `exports` and `imports` in `package.json` that resolve to `.ts` files. Won't run on publish.
* Mmmmaybe, we haven't settled on anything on this yet.
* Is the request something like "downleveling"?
* Kind of. It's for older runtimes, it's for `node_modules`.
* Are there other less-obvious features that need to be downleveled?
* Don't want to over-generalize on this problem space.
* Are there other
* What about `--outDir`? Shouldn't we rewrite paths relative to `outDir`?
* If you rewrite `../../other-project/src/foo.ts`, don't you want to rewrite this to `../../other-project/dist/foo.js`?
* Uh.
* `moduleSuffixes`?
* Can ban this one...probably.
* Are we fighting the tide with this stuff? Should we just say the runtime should support `.ts` everywhere and that's what people use?
* Maybe long-term? There are lots of issues with that, but this is nearer-term.
* What do we do with `import type`?
* Presumably, we would rewrite these to `.js` as well, right?
* We can leave them as-is technically.
* We should test that.
* Lots of hypotheticals around using Deno and Bun and then going broader across runtimes.
* Is that realistic? dnt probably does this better than we can.
* Are we going to error if users write `.js` imports instead of `.ts`? It's going to error and users are not going to know why.
* We probably should look at if JSR does the same transforms that we're looking to do.
* This is experimental in Node.js. Should this be experimental as well?
* Supports other tools/use-cases as well.
* Let's talk about dynamic `import(...)` calls.
* Some of these are statically analyzable, but some are not.
* e.g. `import(someVariable);`
* One option is a runtime shim like `__fixExtension`.
* Another option is to just not rewrite these at all - means you can only have static imports, and users have to write their own shimming logic before calling `import()` on totally dynamic contents.
* Leaning a little towards just making this work with a helper.
* tslib?
* Yes, but you might want this fixed (file extensions might not be recognized).
* Well users can lock on a tslib.
* This is in theory simple enough - so simple it could be a standalone tool. Which is an argument both for and against doing it.
* People have written them, but they don't get the error checking.
* Feels like we need answers for
* `moduleSuffixes`
* `outDir`
* If people want to use these modes, they usually don't want to use `tsc` in the tight loop anyway, right? Why do people want to write `tsc`? Some other tool can do this rewriting.
* Declaration files
* How do you avoid the `../../src/foo.ts` vs. `../../dist/foo.js` problem?
* Is it a problem? Doesn't work today if you reference the input file.
* But that's the whole story - you're running off of input files. And the way people avoid thinking about this is `imports`/`exports` subpaths in `package.json`, workspaces, or path mappings.
* Probably needs custom conditions and non-relative paths.
* Though are there custom conditions that **don't** get triggered within `node_modules`? Because you don't want to resolve to the `.ts` files in `node_modules` (they'll fail at runtime!).
* There are a LOT of issues regarding deployment, project structuring, scaling, etc. that need to be solved. Needs to fundamentally be built into how Node.js itself supports TypeScript.
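The specifier-rewrite rules discussed at the top of these notes could be sketched roughly like this (a hypothetical Python illustration only, not TypeScript's implementation; `rewrite_specifier` and `_EXT_MAP` are invented names):

```python
# Order matters: check the longer extensions before ".ts" so "./a.mts"
# does not incorrectly match the ".ts" suffix first.
_EXT_MAP = [(".mts", ".mjs"), (".cts", ".cjs"), (".ts", ".js")]

def rewrite_specifier(spec, jsx_mode="react"):
    # Only relative paths beginning with "." or ".." are candidates.
    if not (spec.startswith("./") or spec.startswith("../")):
        return spec
    for src, dst in _EXT_MAP:
        # Explicit declaration-file paths (".d.ts" etc.) are never rewritten.
        if spec.endswith(src) and not spec.endswith(".d" + src):
            return spec[: -len(src)] + dst
    if spec.endswith(".tsx"):
        # .tsx -> .jsx only under --jsx react-native; .js for react/react-jsx[dev].
        return spec[:-4] + (".jsx" if jsx_mode == "react-native" else ".js")
    return spec
```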
| Design Notes | low | Critical |
2,476,699,137 | terminal | Display more info for WinGet suggestions | > okay a minor thought: In 1.22 I added `Description`'s to `Command`s. If the package catalog gives us a description, we could totally stick that in the teaching tip 👀
>
> Similarly, IMO, we don't need `--source winget` in the `Name` of the Command. It's clearer to just say `winget install Foo` and have the rest of the args in the preview text (but that's just my opinion)
from https://github.com/microsoft/terminal/pull/17614#pullrequestreview-2248962767
### Implementation
Should be pretty straightforward to do, but we need to make a few changes to make this happen:
- [ ] Currently, `ControlCore` stores the suggestions as an `IVector<hstring>`. We'll need to change that to be able to hold the winget metadata (package description and simplified package name)
- [ ] `TerminalPage::_PopulateQuickFixMenu` needs to be updated to display the metadata
- [ ] `TerminalPage::_doHandleSuggestions` needs to be updated to display the metadata | Product-Terminal,Issue-Task,Needs-Tag-Fix,Area-Suggestions | low | Minor |
2,476,716,538 | flutter | In packages repo, swift-format should fall back to Xcode swift-format if not found on `PATH` | Xcode 16 now packages `swift-format`
```
$ xcrun --find swift-format
Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift-format
```
The `format` command should first check if `--swift-format-path` is passed in, then if `swift-format` is on the `PATH`, and _then_ fallback to the results of the `xcrun --find swift-format`. And if that returns nothing, then fail.
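The proposed resolution order could be sketched like this (a hypothetical Python illustration of the logic; the real fix would live in the Dart `format_command.dart`):

```python
import shutil
import subprocess

def find_swift_format(explicit_path=None):
    """Resolve swift-format: flag value, then PATH, then Xcode's copy via xcrun."""
    if explicit_path:                       # 1. --swift-format-path was passed
        return explicit_path
    on_path = shutil.which("swift-format")  # 2. found on PATH
    if on_path:
        return on_path
    try:                                    # 3. fall back to Xcode's bundled copy
        result = subprocess.run(
            ["xcrun", "--find", "swift-format"],
            capture_output=True, text=True, check=True,
        )
        found = result.stdout.strip()
        if found:
            return found
    except (OSError, subprocess.CalledProcessError):
        pass
    raise FileNotFoundError("swift-format not found")  # 4. otherwise fail
```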
https://github.com/flutter/packages/blob/1ab1a712e5df333026c9103fffa82f014e731d9b/script/tool/lib/src/format_command.dart#L230-L239 | package,good first issue,P3,team-ios,triaged-ios | low | Minor |
2,476,724,692 | kubernetes | `applyconfigurations.NewTypeConverter`: Optimize implementation to avoid slow YAML parsing | ### What would you like to be added?
This function takes a compiled-in YAML string, then validates it (unmarshalling from YAML) and creates a Parser (again, unmarshalling from YAML).
Given the schema is built in, validation seems unnecessary. Additionally, moving to JSON may be more efficient, especially given this is all generated anyway.
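For illustration only, the shape of the proposed change (hypothetical names and a toy schema; the real code is Go in `applyconfigurations`): ship the generated schema as JSON and load it without a validation pass.

```python
import json

# Stand-in for the compiled-in, generated schema. Because it is generated
# at build time and trusted, the startup path can skip revalidation and
# use a cheap JSON parse instead of a YAML unmarshal.
_GENERATED_SCHEMA_JSON = '{"definitions": {"io.k8s.api.core.v1.Pod": {}}}'

def new_type_converter():
    # json.loads is typically much cheaper than YAML unmarshalling,
    # which matters when this setup runs once per test package.
    return json.loads(_GENERATED_SCHEMA_JSON)
```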
### Why is this needed?
When under `-race` mode, this can take ~200ms to set up. Not a crazy amount of time, but it adds up a lot if you have unit tests using it throughout the codebase. Note it is under sync.Once, but it still runs once per package for tests. | priority/backlog,sig/api-machinery,kind/feature,help wanted,triage/accepted | low | Major |
2,476,747,385 | go | x/image/vector: repeat Draw() -> nonsense | ### Go version
n/a
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
https://go.dev/play/p/VlCSG5OMtxF
(adapted from package example)
I was trying to draw to two different targets.
### What did you see happen?
n/a
### What did you expect to see?
no change with repeated Draws | NeedsInvestigation | low | Minor |
2,476,811,962 | godot | Fading between audio streams is not working in AudioStreamPlaylist | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22621 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6081) - AMD Ryzen 5 3600X 6-Core Processor (12 Threads)
### Issue description
I am using an AudioStreamPlayer with the `stream` set to an AudioStreamPlaylist, and that AudioStreamPlaylist set to have 2 different songs as streams. I am trying to have the AudioStreamPlaylist fade from the first song to the next, but no matter what I set the `fade_time` to, the first song stays at full volume till the very end. I have also set the `fade_time` in code, but this has not worked either.
### Steps to reproduce
If using the included MRP, simply run the included scene, and listen for a fade between the two tracks.
If not using the included MRP, do the following steps to reproduce:
Create a new scene with an AudioStreamPlayer as the root node. In the inspector, set the `stream` of that AudioStreamPlayer as a new AudioStreamPlaylist. In that AudioStreamPlaylist, add two streams, and set them as 2 different music files. Set the AudioStreamPlayer `autoplay` to true, and run the scene. Listen for the first song to fade into the second song. Optionally adjust the `fade_time` to a higher value to be better able to head a fade if it does occur.
### Minimal reproduction project (MRP)
[AudioStreamPlayer_test.zip](https://github.com/user-attachments/files/16684163/AudioStreamPlayer_test.zip)
| bug,topic:audio | low | Minor |
2,476,812,297 | pytorch | subprocess tensor creation gets stuck | ### 🐛 Describe the bug
When a tensor exceeding a certain size is created in the main process, subsequent tensor creation within a worker process gets stuck. If either of these tensors is relatively small, the issue doesn't occur.
```python
import torch
from torch.multiprocessing import Process

def worker():
    print("start")
    torch.zeros((20000, 3))
    print("done")

if __name__ == "__main__":
    torch.zeros((20000, 3))
    num_gpus = torch.cuda.device_count()
    processes = []
    for gpu_id in range(num_gpus):
        p = Process(target=worker, args=())
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6354 CPU @ 3.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
Stepping: 6
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.7 MiB (36 instances)
L1i cache: 1.1 MiB (36 instances)
L2 cache: 45 MiB (36 instances)
L3 cache: 78 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] torch==2.4.0
[pip3] torch-fidelity==0.3.0
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.2 hfc3e2af_12 conda-forge
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] numpy-base 1.26.2 py310hb5e798b_0
cc @VitalyFedyunin | module: multiprocessing,triaged,module: deadlock | low | Critical |
2,476,833,011 | pytorch | Kron with gradient for sparse tensors | ### 🚀 The feature, motivation and pitch
I am using torch as part of some code that solves quantum systems in a differentiable way. An important part of this process is constructing large sparse matrices, often via the kronecker product of a number of matrices. In order for gradients to propagate from this large sparse matrix back to the parameters used in their construction, there needs to be a kron function that operates on sparse matrices and propagates gradients.
### Alternatives
It is infeasible to first compute the kron densely and then convert to sparse, as the full dense matrices do not fit in memory.
I can implement a Kronecker function by calculating the new coordinates and values and creating a new sparse tensor with them, but creating the new tensor breaks the computation graph:
```python
def sparse_kron(input: torch.Tensor, other: torch.Tensor):
    assert input.ndim == other.ndim
    input_indices = input.indices()
    other_indices = other.indices()
    new_indices = []
    for input_idx in input_indices.T:
        for other_idx in other_indices.T:
            new_indices.append(
                [input_idx[i] * other.shape[i] + other_idx[i] for i in range(input.ndim)])
    new_indices = torch.tensor(new_indices, requires_grad=False).T
    new_values = torch.kron(input.values(), other.values())
    if new_indices.ndim == 1:
        new_indices = new_indices.reshape([input.ndim, 0])
    new_shape = [n * m for n, m in zip(input.shape, other.shape)]
    return torch.sparse_coo_tensor(new_indices, new_values, new_shape, dtype=input.dtype, device=input.device, is_coalesced=True)
```
fails to propagate the gradients.
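For comparison (a quick sanity check, not a fix): dense `torch.kron` does propagate gradients, which suggests the break happens in rebuilding the sparse tensor rather than in `kron` itself:

```python
import torch

a = torch.eye(2, requires_grad=True)
b = torch.ones(2, 2)

# Each a_ij scales one copy of b, so d(sum)/d(a_ij) = sum(b) = 4.
torch.kron(a, b).sum().backward()
```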
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ezyang @albanD @gqchen @soulitzer @Varal7 @xmfan | module: sparse,module: autograd,triaged | low | Critical |
2,476,850,027 | godot | Cannot "force_drag" a button (or other Control node) when using "gui_input" to accept the "MOUSE_BUTTON_LEFT" event | ### Tested versions
v4.2.2.stable.mono.official [15073afe3]
### System information
Godot v4.2.2.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 31.0.15.5176) - 13th Gen Intel(R) Core(TM) i5-13600KF (20 Threads)
### Issue description
A button with this code drags perfectly:
```gdscript
func _on_pressed() -> void:
	force_drag(self, duplicate())
```
but a button with this code cannot drag:
```gdscript
func _gui_input(event: InputEvent) -> void:
	if event is InputEventMouseButton:
		if event.button_index == MOUSE_BUTTON_LEFT and event.pressed:
			force_drag(self, duplicate())
			print("clicked:%s" % name)
			accept_event()
```
**I have a Control node, and I want to use `_gui_input` to start the drag, but it does not work.**
### Steps to reproduce
1. Make a BaseButton node.
2. Attach a GDScript to the button.
3. Listen for the "pressed" signal and call `force_drag`; this works perfectly.
4. Switch to using `_gui_input` instead; this does not work.
### Minimal reproduction project (MRP)
N/A | bug,topic:input,topic:gui | low | Minor |
2,476,891,842 | next.js | next/dynamic from external package causes hydration errors for combination of pages and app routers | ### Link to the code that reproduces this issue
https://github.com/adanperez/nextjs-dynamic-hydration-error
### To Reproduce
- `pnpm run build`
- `pnpm run start`
- Go to http://localhost:3000/ and refresh the page a couple times
You will see the following Hydration errors in the console.
```
framework-5e252d5045bb7a0e.js:9 Uncaught Error: Minified React error #418; visit https://reactjs.org/docs/error-decoder.html?invariant=418 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
at ly (framework-5e252d5045bb7a0e.js:9:46791)
at framework-5e252d5045bb7a0e.js:9:99911
at oD (framework-5e252d5045bb7a0e.js:9:106131)
at oO (framework-5e252d5045bb7a0e.js:9:99079)
at framework-5e252d5045bb7a0e.js:9:98886
at oF (framework-5e252d5045bb7a0e.js:9:98893)
at oS (framework-5e252d5045bb7a0e.js:9:93932)
at x (framework-5e252d5045bb7a0e.js:33:1364)
at MessagePort.T (framework-5e252d5045bb7a0e.js:33:1894)
framework-5e252d5045bb7a0e.js:9 Uncaught Error: Minified React error #423; visit https://reactjs.org/docs/error-decoder.html?invariant=423 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
at i (framework-5e252d5045bb7a0e.js:9:120721)
at oO (framework-5e252d5045bb7a0e.js:9:99019)
at framework-5e252d5045bb7a0e.js:9:98886
at oF (framework-5e252d5045bb7a0e.js:9:98893)
at ox (framework-5e252d5045bb7a0e.js:9:95645)
at oS (framework-5e252d5045bb7a0e.js:9:94200)
at x (framework-5e252d5045bb7a0e.js:33:1364)
at MessagePort.T (framework-5e252d5045bb7a0e.js:33:1894)
```
### Current vs. Expected behavior
### Current
Currently we get hydration errors when using a `next/dynamic` component from an external package.
### Expected
We do not expect to get hydration errors.
If you remove the `app` directory and rebuild and start the app, you will no longer see hydration errors. The combination of `pages` and `app` directories causes the hydration error.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: N/A
pnpm: 9.4.0
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading, Pages Router, Runtime
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
_No response_ | bug,Lazy Loading,Runtime,Pages Router | low | Critical |
2,476,922,693 | vscode | MARK underline not displayed in minimap | Type: <b>Bug</b>
When using `MARK: - flag`, the underline is not displayed in the minimap.
VS Code version: Code 1.92.1 (eaa41d57266683296de7d118f574d0c2652e1fc4, 2024-08-07T20:16:39.455Z)
OS version: Darwin arm64 23.4.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M3 Pro (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|4, 4, 4|
|Memory (System)|36.00GB (0.06GB free)|
|Process Argv|--crash-reporter-id 0a1a909e-15cd-409c-b1c9-c41ab4155147|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (71)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-sievehighlight|adz|1.0.7
togglehs|bbe|0.1.2
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.9.0
doxdocgen|csc|1.4.0
vscode-markdownlint|Dav|0.55.0
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
gitlens|eam|15.3.0
vscode-html-css|ecm|2.0.10
EditorConfig|Edi|0.16.4
LogFileHighlighter|emi|3.3.2
vscode-firefox-debug|fir|2.9.10
codespaces|Git|1.17.2
copilot|Git|1.223.0
copilot-chat|Git|0.18.2
vscode-github-actions|git|0.26.3
vscode-pull-request-github|Git|0.94.0
go|gol|0.42.0
applescript|idl|0.25.1
better-cpp-syntax|jef|1.27.1
intellij-idea-keybindings|k--|1.7.2
vscode-clangd|llv|0.1.29
autoconf|mae|0.2.0
string-manipulation|mar|0.5.7
Kotlin|mat|1.7.1
eps-preview|mkv|0.4.0
vscode-docker|ms-|1.29.2
vscode-language-pack-zh-hans|MS-|1.92.2024081409
vscode-kubernetes-tools|ms-|1.3.16
debugpy|ms-|2024.10.0
isort|ms-|2023.10.1
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.1
remote-containers|ms-|0.380.0
remote-ssh|ms-|0.113.1
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.2
cmake-tools|ms-|1.18.44
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.10.0
makefile-tools|ms-|0.10.26
remote-explorer|ms-|0.4.3
vsliveshare|ms-|1.0.5936
gotools|neo|0.1.5
vetur|oct|0.37.3
proto|pet|0.0.4
vscode-yaml|red|1.15.0
LiveServer|rit|5.7.9
rust-analyzer|rus|0.3.2078
lualint|sat|0.0.5
markdown-preview-enhanced|shd|0.8.13
vscode-standard|sta|2.1.3
code-spell-checker|str|3.0.1
lua|sum|3.10.5
svelte-vscode|sve|108.6.0
language-stylus|sys|1.16.0
ayu|tea|1.0.5
jest-snapshot-language-support|tle|1.1.1
cmake|twx|0.0.17
errorlens|use|3.20.0
vscode-lldb|vad|1.10.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-conventional-commits|viv|1.25.0
vscode-gradle|vsc|3.16.4
vscode-icons|vsc|12.8.0
vscode-todo-highlight|way|1.0.5
markdown-all-in-one|yzh|3.6.2
vscode-proto3|zxh|0.5.5
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
g316j359:31013175
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl2:31104044
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31111718
wkspc-ranged-c:31118571
```
</details>
<!-- generated by issue reporter --> | bug,editor-minimap | low | Critical |
2,476,927,379 | pytorch | [xpu] error occurs when running the command make triton | ### 🐛 Describe the bug
When running the command `make triton`, the following error is generated:
```shell
export USE_XPU=1
make triton
```
```
Building wheels for collected packages: triton
Building wheel for triton (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for triton (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [14 lines of output]
/tmp/pip-build-env-xcy8epq6/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py:268: UserWarning: Unknown distribution option: 'test_suite'
warnings.warn(msg)
running bdist_wheel
running build
running build_py
running build_ext
downloading and extracting https://tritonlang.blob.core.windows.net/llvm-builds/llvm-10dc3a8e-ubuntu-x64.tar.gz ...
error: HTTP Error 409: Public access is not permitted on this storage account.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for triton
Failed to build triton
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (triton)
make: *** [Makefile:55: triton] Error 1
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.39
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-39-generic-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] optree==0.12.1
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2024.2.1 pypi_0 pypi
[conda] mkl-static 2024.2.1 pypi_0 pypi
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,476,928,160 | create-react-app | About deprecated packages | ### About the problem
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
The `babel-preset-react-app` package inside create-react-app references the following four packages as dependencies, but all of them are deprecated. They should be replaced with the successors suggested in their deprecation notices.
- @babel/plugin-proposal-class-properties@7.18.6
- @babel/plugin-proposal-numeric-separator@7.18.6
- @babel/plugin-proposal-optional-chaining@7.21.0
- @babel/plugin-proposal-private-methods@7.18.6
### The solution I'd like
See below for the suggested replacement packages:
- @babel/plugin-transform-class-properties
- @babel/plugin-transform-numeric-separator
- @babel/plugin-transform-optional-chaining
- @babel/plugin-transform-private-methods
I'm looking forward to your reply.
| issue: proposal,needs triage | low | Minor |
2,476,931,134 | yt-dlp | Can't update yt-dlp executable if it's on SMB network share. ERROR: Unable to move current version | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
I've got the yt-dlp Windows executable located on an SMB network share (Samba server). I'm not able to update the executable with the `-U` flag, since the file is locked for renaming while it's running. This might be a peculiarity of the SMB protocol, or of the Samba server specifically. `os.rename()` works when applied while the executable is not running, but the executable cannot rename itself. This is a feature request to add a workaround for this scenario, e.g. using a helper executable/script to rename yt-dlp. This special way of updating could be triggered by an additional update flag.
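The helper-executable idea might look roughly like this (a hedged Python sketch; `replace_after_exit` and the fixed 2-second delay are illustrative assumptions, not yt-dlp API):

```python
import subprocess
import sys

def replace_after_exit(current_exe: str, new_exe: str) -> None:
    """Hand the rename off to a detached helper process, since a running
    executable on an SMB share cannot rename itself while it holds the lock."""
    helper = (
        "import os, time\n"
        "time.sleep(2)  # wait for the parent to exit and release its lock\n"
        f"os.replace({new_exe!r}, {current_exe!r})\n"
    )
    # Popen without wait(): the helper outlives this process.
    subprocess.Popen([sys.executable, "-c", helper])
```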
For the moment I've just moved the yt-dlp executable to my local drive where the update process works flawlessly.
(Since this is quite specific to SMB or Samba I've categorized it as a Feature Request rather than Bug report)
I look forward to hearing your thoughts.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.01 from yt-dlp/yt-dlp [ffd7781d6] (win_exe)
[debug] Python 3.11.2 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1s 1 Nov 2022)
[debug] exe versions: ffmpeg 5.1.2-full_build-www.gyan.dev (setts), ffprobe 5.1.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, sqlite3-3.39.4, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.08.06/SHA2-256SUMS
Current version: stable@2024.08.01 from yt-dlp/yt-dlp
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
Current Build Hash: c11f40c0ca95056e6ec427a89b73e9f585a13ce2b51581bb37226c71f051e135
Updating to stable@2024.08.06 from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp/releases/download/2024.08.06/yt-dlp.exe
ERROR: Unable to move current version
Traceback (most recent call last):
File "yt_dlp\update.py", line 495, in update
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: '\\\\10.0.0.2\\Tools\\yt-dlp\\dist\\yt-dlp.exe' -> '\\\\10.0.0.2\\Tools\\yt-dlp\\dist\\yt-dlp.exe.old'
```
| enhancement,triage | low | Critical |
2,476,933,338 | PowerToys | Renaming a file in Explorer pops up an error | ### Microsoft PowerToys version
0.80.1
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
Using File Explorer, enter a folder and rename a file; an error pops up.
### ✔️ Expected Behavior
The file should be renamed without any error appearing.
### ❌ Actual Behavior
Using File Explorer, entering a folder and renaming a file pops up an error.
### Other Software
Windows file manager / File Explorer? | Issue-Bug,Needs-Triage | low | Critical |
2,476,940,462 | pytorch | operator benchmark diag test cuda problem | ### 🐛 Describe the bug
When running the diag test in the operator benchmark, I got a device problem with CUDA; the error info is below:
```
(py310_pt20) hzeng@dell:~/prj/pytorch/benchmarks/operator_benchmark$ python -m benchmark_all_test --operators diag --omp-num-threads 1 --mkl-num-threads 1 --device cuda --tag_filter all
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : all
# Benchmarking PyTorch: diag
Traceback (most recent call last):
File "/home/hzeng/miniconda3/envs/py310_pt20/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hzeng/miniconda3/envs/py310_pt20/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/benchmark_all_test.py", line 9, in <module>
op_bench.benchmark_runner.main()
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/benchmark_runner.py", line 168, in main
benchmark_core.BenchmarkRunner(args).run()
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/benchmark_core.py", line 430, in run
launch_func(
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/benchmark_core.py", line 276, in _launch_forward
forward_time = timeit.timeit(
File "/home/hzeng/miniconda3/envs/py310_pt20/lib/python3.10/timeit.py", line 234, in timeit
return Timer(stmt, setup, timer, globals).timeit(number)
File "/home/hzeng/miniconda3/envs/py310_pt20/lib/python3.10/timeit.py", line 178, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/benchmark_pytorch.py", line 160, in run_forward
self.output = self.op_bench.forward_impl()
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/benchmark_pytorch.py", line 66, in forward_impl
return self.forward(*self.get_inputs())
File "/home/hzeng/prj/pytorch/benchmarks/operator_benchmark/pt/diag_test.py", line 40, in forward
return torch.diag(input, diagonal=diagonal, out=out_tensor)
RuntimeError: Expected out tensor to have device cuda:0, but got cpu instead
```
This is caused by `out_tensor` being allocated on the CPU. To fix it, make the `out_tensor` device the same as the input's, like this (change `pt/diag_test.py`):
```python
def forward(self, input, diagonal: int, out: bool, out_tensor):
    out_tensor = out_tensor.to(input.device)
    if out:
        return torch.diag(input, diagonal=diagonal, out=out_tensor)
    else:
        return torch.diag(input, diagonal=diagonal)
```
### Versions
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.3) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.28.1
Libc version: glibc-2.31
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.103
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 552.41
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700K
Stepping: 1
CPU MHz: 3417.598
BogoMIPS: 6835.19
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 576 KiB
L1i cache: 384 KiB
L2 cache: 24 MiB
L3 cache: 30 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchdata==0.6.0
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchaudio 2.0.1 pypi_0 pypi
[conda] torchdata 0.6.0 pypi_0 pypi
[conda] torchtext 0.15.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi | triaged,op-bench,module: benchmark | low | Critical |
2,476,997,245 | flutter | Keyevent usb-hid code and AltGr | ### Steps to reproduce
Run the example code on macOS.
### Expected results
1. `PhysicalKeyboardKey`: the USB HID code reported is `0x68`, but it should be `0x46` (see https://www.usb.org/sites/default/files/documents/hut1_12v2.pdf).

2. `LogicalKeyboardKey`: with German (DEU) input, `AltGr` is never generated; `Alt Right` is generated instead.
### Actual results
See above
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Keyboard Event Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const KeyboardEventPage(),
);
}
}
class KeyboardEventPage extends StatefulWidget {
const KeyboardEventPage({super.key});
@override
_KeyboardEventPageState createState() => _KeyboardEventPageState();
}
class _KeyboardEventPageState extends State<KeyboardEventPage> {
final List<String> _lastKeyEvents = [];
final FocusNode _focusNode = FocusNode();
@override
Widget build(BuildContext context) {
final child = Scaffold(
appBar: AppBar(
title: const Text('Keyboard Event Demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'Press any key:',
style: TextStyle(fontSize: 20),
),
const SizedBox(height: 20),
for (final keyEvent in _lastKeyEvents)
Text(
keyEvent,
style: const TextStyle(fontSize: 16),
),
],
),
),
);
return FocusScope(
autofocus: true,
child: Focus(
autofocus: true,
canRequestFocus: true,
focusNode: _focusNode,
onKeyEvent: (node, event) {
setState(() {
_lastKeyEvents.add(event.toString());
if (_lastKeyEvents.length > 10) {
_lastKeyEvents.removeAt(0);
}
});
return KeyEventResult.handled;
},
child: child,
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.19.6, on macOS 12.7.5 21H1222 darwin-x64, locale en-US)
• Flutter version 3.19.6 on channel stable at /Users/admin/workspace/devenv/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 54e66469a9 (4 months ago), 2024-04-17 13:08:03 -0700
• Engine revision c4cd48e186
• Dart version 3.3.4
• DevTools version 2.31.1
• Pub download mirror https://mirrors.tuna.tsinghua.edu.cn/dart-pub
• Flutter download mirror https://mirrors.tuna.tsinghua.edu.cn/flutter
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[!] Xcode - develop for iOS and macOS (Xcode 14.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 14C18
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is used to retrieve the iOS and macOS platform side's plugin code that responds to your plugin usage on the Dart
side.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/platform-plugins
To upgrade see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods for instructions.
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
[✓] VS Code (version 1.73.1)
• VS Code at /Users/admin/Downloads/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 12.7.5 21H1222 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
[!] Network resources
✗ An HTTP error occurred while checking "https://github.com/": Connection closed before full header was received
! Doctor found issues in 4 categories.
```
</details>
| a: text input,framework,a: internationalization,has reproducible steps,P2,team-framework,triaged-framework,fyi-text-input,found in release: 3.24,found in release: 3.25 | low | Critical |
2,477,011,178 | go | cmd/compile: inaccurate compiler error on duplicate wasmexport symbol | ### Go version
tip
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/home/johan/.cache/go-build'
GOENV='/home/johan/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/johan/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/johan/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/johan/src/tip/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/johan/src/tip/go/pkg/tool/linux_arm64'
GOVCS=''
GOVERSION='devel go1.24-24fd1a043d Tue Aug 20 23:11:53 2024 +0000'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/johan/.config/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='0'
GOMOD='/home/johan/src/tip/go/test/wasmexport/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2688758080=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I created a package that exported two identical symbols (in the same package) using the go:wasmexport compiler directive.
```go
package main
//go:wasmexport A
func A() int64 {
return 10
}
//go:wasmexport A
func B() int64 {
return 10
}
func main() {
}
```
### What did you see happen?
I saw a compiler error:
```shell
$ GOARCH=wasm GOOS=wasip1 go build -buildmode=c-shared -o x.wasm main.go
main.go:5:2: <autogenerated>:1: symbol A redeclared
<unknown line number>: other declaration of symbol A
```
### What did you expect to see?
The error message should be able to point to the line numbers of both symbols:
```shell
$ GOARCH=wasm GOOS=wasip1 go build -buildmode=c-shared -o x.wasm main.go
main.go:5:2: <autogenerated>:1: symbol A redeclared
main.go:10:2: other declaration of symbol A
```
It is also notable that `main.go:5` is the location of the return, not the compiler directive. | NeedsFix,compiler/runtime | low | Critical |
2,477,063,953 | flutter | Deobfuscated stack-traces should show package path like normal stack-traces | ### Use case
When using the `--symbolize` option with an obfuscated stack-trace, the source-code paths appear to be absolute.
For example:
```
at #00 abs 00000070889bcc93 virt 000000000018fc93 _kDartIsolateSnapshotInstructions+0xb9153 (unparsed:null)
at #01 abs 00000070889464fb virt 00000000001194fb _kDartIsolateSnapshotInstructions+0x429bb (unparsed:null)
```
becomes:
```
at #0 _MyAppState.build.<anonymous closure> (/path/to/my/my_app/lib/main.dart:89:17)
at #1 _InkResponseState.handleTap (/path/to/my/sdk/flutter/packages/flutter/lib/src/material/ink_well.dart:1170:21)
```
In a normal (non-obfuscated) stack-trace, the paths all refer to the packages:
e.g.
```
at _MyAppState.build.<fn> line 113, column 24 (package:my_app/main.dart:113)
at _InkResponseState.handleTap line 1170, column 21 (package:flutter/src/material/ink_well.dart:1170)
```
### Proposal
Looking at the symbols file with a hex editor, I can see that the paths are all absolute to my machine.
I wonder if it would be possible to store package relative paths instead in the symbol files, as they can be observed in non-obfuscated stack-traces.
Main reasons for this request are:
- Stack-traces are consistent between obfuscated and non-obfuscated apps.
- Stack-traces are consistent between machines/setups (as paths won't matter anymore).
- Shorter package relative paths are easier to read.
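As a stop-gap until the symbol files store package-relative paths, the mapping could be approximated by post-processing the symbolized trace. This is a hypothetical sketch, not an existing tool; the prefixes in `PATH_MAP` are assumptions based on the placeholder paths in the example traces above:

```python
# Hypothetical post-processing sketch: rewrite the absolute paths that
# `--symbolize` emits into the package: form seen in normal stack-traces.
# The prefix -> package mapping below is an assumption for illustration.
PATH_MAP = {
    "/path/to/my/my_app/lib/": "package:my_app/",
    "/path/to/my/sdk/flutter/packages/flutter/lib/": "package:flutter/",
}

def to_package_paths(trace: str) -> str:
    for prefix, package in PATH_MAP.items():
        trace = trace.replace(prefix, package)
    return trace

line = "at #0 _MyAppState.build.<anonymous closure> (/path/to/my/my_app/lib/main.dart:89:17)"
print(to_package_paths(line))
# → at #0 _MyAppState.build.<anonymous closure> (package:my_app/main.dart:89:17)
```

This only works when the build machine's paths are known, which is exactly why storing package-relative paths in the symbol files would be preferable.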
| c: new feature,tool,c: proposal,P3,team-tool,triaged-tool | low | Minor |
2,477,066,842 | PowerToys | AI Paste does not work on an Excel sheet. | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
Simply use the AI-Paste function in the Excel application.
### ✔️ Expected Behavior
The value returned from AI should be pasted into the selected cell/cells.
### ❌ Actual Behavior
The value returned from AI could not be pasted into the selected cell/cells.
This might be because the clipboard value was occupied by the Excel.Range value selected for copying.
### Other Software
Windows 10 Pro, Microsoft 365 Apps for enterprise, Excel (Desktop App). | Issue-Bug,Needs-Triage | low | Minor |
2,477,135,409 | transformers | Trainer.model.push_to_hub() does not allow a private repository flag | ### System Info
As the title describes.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the function Trainer.model.push_to_hub().
### Expected behavior
Trainer.model.push_to_hub() allows a private repository flag. | trainer,Feature request | low | Major |
2,477,189,405 | yt-dlp | [XiaoHongShu] ERROR: No video formats found! | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Hi admins and supporters. I want to ask one question: how do I download from xiaohongshu.com?
yt-dlp -vU --cookies "cookies.txt" https://www.xiaohongshu.com/explore/66a981250000000009017750
[debug] Command-line config: ['-vU', '--cookies', 'cookies.txt', 'https://www.xiaohongshu.com/explore/66a981250000000009017750']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.07.09.232843 from yt-dlp/yt-dlp-nightly-builds [d2189d3d3] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3.2-2021-02-02-full_build-www.gyan.dev, ffprobe 4.2.1, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1834 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.08.19.232821/SHA2-256SUMS
Current version: nightly@2024.07.09.232843 from yt-dlp/yt-dlp-nightly-builds
Latest version: nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
Current Build Hash: dec09e67683c1f1b019ff09c184234468576f9dc9b9c4d5f1ff992f9f56911f5
Updating to nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds ...
[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.08.19.232821/yt-dlp.exe
Updated yt-dlp to nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
[debug] Restarting: C:\bin\yt-dlp.exe -vU --cookies cookies.txt https://www.xiaohongshu.com/explore/66a981250000000009017750
[debug] Command-line config: ['-vU', '--cookies', 'cookies.txt', 'https://www.xiaohongshu.com/explore/66a981250000000009017750']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds [f0bb28504] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3.2-2021-02-02-full_build-www.gyan.dev, ffprobe 4.2.1, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds)
[XiaoHongShu] Extracting URL: https://www.xiaohongshu.com/explore/66a981250000000009017750
[XiaoHongShu] 66a981250000000009017750: Downloading webpage
WARNING: Extractor failed to obtain "title". Creating a generic title instead
ERROR: [XiaoHongShu] 66a981250000000009017750: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1782, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1841, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2847, in process_video_result
File "yt_dlp\YoutubeDL.py", line 1123, in raise_no_formats
yt_dlp.utils.ExtractorError: [XiaoHongShu] 66a981250000000009017750: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU --cookies "cookies.txt" https://www.xiaohongshu.com/explore/66a981250000000009017750
[debug] Command-line config: ['-vU', '--cookies', 'cookies.txt', 'https://www.xiaohongshu.com/explore/66a981250000000009017750']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.07.09.232843 from yt-dlp/yt-dlp-nightly-builds [d2189d3d3] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3.2-2021-02-02-full_build-www.gyan.dev, ffprobe 4.2.1, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1834 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.08.19.232821/SHA2-256SUMS
Current version: nightly@2024.07.09.232843 from yt-dlp/yt-dlp-nightly-builds
Latest version: nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
Current Build Hash: dec09e67683c1f1b019ff09c184234468576f9dc9b9c4d5f1ff992f9f56911f5
Updating to nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds ...
[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.08.19.232821/yt-dlp.exe
Updated yt-dlp to nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
[debug] Restarting: C:\bin\yt-dlp.exe -vU --cookies cookies.txt https://www.xiaohongshu.com/explore/66a981250000000009017750
[debug] Command-line config: ['-vU', '--cookies', 'cookies.txt', 'https://www.xiaohongshu.com/explore/66a981250000000009017750']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds [f0bb28504] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3.2-2021-02-02-full_build-www.gyan.dev, ffprobe 4.2.1, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds)
[XiaoHongShu] Extracting URL: https://www.xiaohongshu.com/explore/66a981250000000009017750
[XiaoHongShu] 66a981250000000009017750: Downloading webpage
WARNING: Extractor failed to obtain "title". Creating a generic title instead
ERROR: [XiaoHongShu] 66a981250000000009017750: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1782, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1841, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2847, in process_video_result
File "yt_dlp\YoutubeDL.py", line 1123, in raise_no_formats
yt_dlp.utils.ExtractorError: [XiaoHongShu] 66a981250000000009017750: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| site-bug,triage | low | Critical |
2,477,190,220 | deno | WebSocket leaves TCP connection open after calling close | Version: Deno 1.45.5
Steps: start the server, start the client, and wait for the client to call `close`. Then check the connections; I use `ss -atn | grep 3333` on Linux (`lsof -i TCP:3333` on macOS). I still see the TCP connection established, and it is still there several minutes later.
This looks unexpected to me. My use case: let's say there is some server issue, so I wasn't able to open a WebSocket connection after a few seconds. I would keep trying to close and open another WebSocket connection (with some delay). But that would keep leaking TCP connections.
```
LISTEN 0 128 127.0.0.1:3333 0.0.0.0:*
ESTAB 0 0 127.0.0.1:45680 127.0.0.1:3333
ESTAB 0 0 127.0.0.1:3333 127.0.0.1:45680
```
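For reference, the close semantics in question can be illustrated with a minimal stdlib sketch (Python here, not Deno; it only shows that the peer observes EOF once `close()` actually happens, while the kernel otherwise keeps the connection in ESTABLISHED, which is what `ss` reports above):

```python
import socket

# Minimal TCP pair over loopback on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = server.accept()

client.sendall(b"hello")
data = b""
while len(data) < 5:
    data += conn.recv(5 - len(data))
assert data == b"hello"

# Without this close(), the connection would stay ESTABLISHED.
client.close()
assert conn.recv(1) == b""  # peer sees EOF only after the close
conn.close()
server.close()
```

The report is that `WebSocket.close()` in Deno does not reach this point, leaving the underlying TCP connection established.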
### Repro scripts
Server - establishes TCP connections and prints received data to stdout, does not implement web socket protocol
```typescript
const listener = Deno.listen({
hostname: '127.0.0.1',
port: 3333,
transport: 'tcp',
});
for await (const conn of listener) {
console.log('connected');
readLoop(conn);
}
async function readLoop(conn: Deno.TcpConn) {
const buf = new Uint8Array(1024 * 1024);
const decoder = new TextDecoder();
while (true) {
const nread = await conn.read(buf);
if (nread === null) {
console.log('eof');
break;
}
const txt = decoder.decode(buf.slice(0, nread));
console.log('read', txt);
}
conn.close();
}
```
Client - opens a websocket connection and closes it after 1 second regardless of the result.
```typescript
const socket = new WebSocket('http://localhost:3333');
setTimeout(() => {
socket.close();
console.log('Called close');
}, 1000);
``` | bug,ext/websocket | low | Major |
2,477,246,401 | react | [DevTools Bug]: inspecting pseudo-elements in Firefox gives error `Permission denied to access property "__reactFiber$sofadm08s2"` | ### Website or app
https://wk82tp.csb.app/
### Repro steps
1. Go to the Sandbox link in Firefox (129.0.2), with React DevTools (5.3.1)
2. Open the inspector.
3. Select the `::after` pseudo-element next to the `<h1>`.
### More Info
I've added a screenshot of the issue.
<img width="1305" alt="Screenshot 2024-08-21 at 09 06 24" src="https://github.com/user-attachments/assets/d3e869af-80f1-44d5-a9ec-a47ed8472974">
This has been happening for a while. Of course, it is not a big issue, because if I close the error on the main screen I can keep working. And until now I wasn't using pseudo-elements much.
But now I'm working on a new feature in our app where I rely heavily on `::after` and `::before` pseudo-elements, and it is quite annoying.
This has been happening since DevTools 4.28.4. Up to v4.27.8 I didn't have problems. On that version (4.27.8) the error was shown only when opening the "(React) Components" tab in the inspector... which kind of makes sense, right?
What doesn't make sense (to me) is that I get this error when inspecting the pseudo-element in the basic Firefox inspector (trying to apply CSS properties to it).
### How often does this bug happen?
Every time
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | low | Critical |
2,477,306,180 | pytorch | Augmented ops (like +=) fails on export | ### 🐛 Describe the bug
Issue redirected from [Augmented assignment op (+=) fails export](https://github.com/pytorch/executorch/issues/4630)
The `+=` op seems to fail for tensors. The snippet below is a minimal case reconstructing the problem:
```py
import torch
from torch import nn
from torch.export import export
from executorch.exir import EdgeCompileConfig, to_edge
class TestModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, a: torch.Tensor, b: torch.Tensor, c: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
invalid_a = torch.eq(a, -1)
valid_a = torch.eq(a, 1)
# First (works)
# b = b + c * invalid_a
# b = b + d * valid_a
# Second (works)
# b = torch.where(invalid_a, b, b + c)
# b = torch.where(valid_a, b, b + d)
# Third (fails)
b[invalid_a] += c
b[valid_a] += d
return b
if __name__ == "__main__":
model = TestModel()
example_arguments = (torch.Tensor([1, -1, 1, -1]), torch.Tensor([0, 0, 0, 0]), torch.Tensor([1, 2, 3, 4]), torch.Tensor([4, 3, 2, 1]),)
prog = export(model, example_arguments)
edge = to_edge(prog, compile_config=EdgeCompileConfig(_check_ir_validity=False, _skip_dim_order=True),)
exec_prog = edge.to_executorch()
```
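As an aside, the element-wise semantics of the commented "First (works)" alternative can be sketched in plain Python, with no torch dependency (bools act as 0/1, so `b + c * mask` adds `c` only where the mask holds):

```python
# Same example values as the repro's example_arguments.
a = [1, -1, 1, -1]
b = [0, 0, 0, 0]
c = [1, 2, 3, 4]
d = [4, 3, 2, 1]

invalid_a = [x == -1 for x in a]
valid_a = [x == 1 for x in a]

# b = b + c * invalid_a ; b = b + d * valid_a
b = [bi + ci * mi for bi, ci, mi in zip(b, c, invalid_a)]
b = [bi + di * mi for bi, di, mi in zip(b, d, valid_a)]
print(b)  # → [4, 2, 2, 4]
```

The in-place `b[mask] += c` form instead routes through `aten.add_.Tensor` with a data-dependent shape (`u0` in the traceback below), which is where export fails.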
The export causes the following error:
```
W0809 14:35:01.298000 8394476544 torch/fx/experimental/symbolic_shapes.py:5140] [0/0] failed during evaluate_expr(Ne(u0, 4), hint=None, expect_rational=True, size_oblivious=False, forcing_spec=False
E0809 14:35:01.298000 8394476544 torch/fx/experimental/recording.py:298] [0/0] failed while running evaluate_expr(*(Ne(u0, 4), None), **{'fx_node': None})
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] failed while attempting to run meta for aten.add_.Tensor
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] Traceback (most recent call last):
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1878, in _dispatch_impl
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] r = func(*args, **kwargs)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/_ops.py", line 727, in __call__
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] return self_._op(*args, **kwargs)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 3582, in meta_binop_inplace_alpha
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] check_inplace_broadcast(self.shape, other.shape)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 86, in check_inplace_broadcast
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] broadcasted_shape = tuple(_broadcast_shapes(self_shape, *args_shape))
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] if common_shape[idx] != shape[idx]:
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/__init__.py", line 672, in __bool__
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] return self.node.bool_()
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 496, in bool_
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] return self.guard_bool("", 0)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 434, in guard_bool
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] return retlog(fn(*args, **kwargs))
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5138, in evaluate_expr
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5256, in _evaluate_expr
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] raise self._make_data_dependent_error(
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)). (Size-like symbols: u0)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] Potential framework code culprit (scroll up for full backtrace):
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] if common_shape[idx] != shape[idx]:
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For more information, run with TORCH_LOGS="dynamic"
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] User Stack (most recent call last):
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] (snipped, see stack below for prefix)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] File "~/test_executorch.py", line 23, in forward
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] b[invalid_a] += c
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
Traceback (most recent call last):
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1943, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1143, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1559, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1240, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1878, in _dispatch_impl
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_ops.py", line 727, in __call__
return self_._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 3582, in meta_binop_inplace_alpha
check_inplace_broadcast(self.shape, other.shape)
File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 86, in check_inplace_broadcast
broadcasted_shape = tuple(_broadcast_shapes(self_shape, *args_shape))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
if common_shape[idx] != shape[idx]:
File "~/lib/python3.11/site-packages/torch/__init__.py", line 672, in __bool__
return self.node.bool_()
^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 496, in bool_
return self.guard_bool("", 0)
^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 434, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5138, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5256, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)). (Size-like symbols: u0)
Potential framework code culprit (scroll up for full backtrace):
File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
if common_shape[idx] != shape[idx]:
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "~/test_executorch.py", line 23, in forward
b[invalid_a] += c
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1825, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1317, in wrap_fake_exception
return fn()
^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1826, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1961, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1943, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1143, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1559, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1240, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1878, in _dispatch_impl
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_ops.py", line 727, in __call__
return self_._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 3582, in meta_binop_inplace_alpha
check_inplace_broadcast(self.shape, other.shape)
File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 86, in check_inplace_broadcast
broadcasted_shape = tuple(_broadcast_shapes(self_shape, *args_shape))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
if common_shape[idx] != shape[idx]:
File "~/lib/python3.11/site-packages/torch/__init__.py", line 672, in __bool__
return self.node.bool_()
^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 496, in bool_
return self.guard_bool("", 0)
^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 434, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5138, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5256, in _evaluate_expr
raise self._make_data_dependent_error(
RuntimeError: Failed running call_function <built-in function iadd>(*(FakeTensor(..., size=(u0,)), FakeTensor(..., size=(4,))), **{}):
Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)). (Size-like symbols: u0)
Potential framework code culprit (scroll up for full backtrace):
File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
if common_shape[idx] != shape[idx]:
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "~/test_executorch.py", line 23, in forward
b[invalid_a] += c
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/test_executorch.py", line 33, in <module>
prog = export(model, example_arguments)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/__init__.py", line 173, in export
return _export(
^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 991, in wrapper
raise e
File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 974, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/exported_program.py", line 100, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 1863, in _export
export_artifact = export_func(
^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 1107, in _strict_export
return _strict_export_lower_to_aten_ir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 1137, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 544, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1386, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1716, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1727, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 435, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1716, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1727, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1121, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
return _compile(
^^^^^^^^^
File "~/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 240, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_utils_internal.py", line 85, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1280, in transform_code_object
transformations(instructions, code_options)
File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 581, in transform
tracer.run()
File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2498, in run
super().run()
File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 908, in run
while self.step():
^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 820, in step
self.dispatch_table[inst.opcode](self, inst)
File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2064, in BINARY_OP
return _binary_op_lookup[inst.arg](self, inst)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 234, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builtin.py", line 963, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builtin.py", line 942, in _handle_insert_op_in_graph
return wrap_fx_proxy(tx, proxy)
^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 1849, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 1936, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1880, in get_fake_value
raise UserError( # noqa: B904
torch._dynamo.exc.UserError: Tried to use data-dependent value in the subsequent computation. This can happen when we encounter unbounded dynamic value that is unknown during tracing time. You will need to explicitly give hint to the compiler. Please take a look at torch._check OR torch._check_is_size APIs. Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)). (Size-like symbols: u0)
Potential framework code culprit (scroll up for full backtrace):
File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
if common_shape[idx] != shape[idx]:
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "~/test_executorch.py", line 23, in forward
b[invalid_a] += c
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "~/test_executorch.py", line 23, in forward
b[invalid_a] += c
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240716
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.11.0 (main, Mar 1 2023, 12:33:14) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] executorch==0.4.0a0+1114539
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.23.2
[pip3] torch==2.5.0.dev20240716
[pip3] torchaudio==2.4.0.dev20240716
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20240716
[conda] executorch 0.4.0a0+11b2fcb pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 1.23.2 pypi_0 pypi
[conda] torch 2.5.0.dev20240716 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240716 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240716 pypi_0 pypi
cc @ezyang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,module: dynamic shapes,oncall: export | low | Critical |
2,477,346,788 | ui | [feat]: Add tag input component | ### Feature description
A component which lets us add tag input with and without autocomplete.
### Without auto complete
https://github.com/user-attachments/assets/0d2af048-e644-4b21-bd31-f5b3a8dad4b2
### With auto complete
https://github.com/user-attachments/assets/b5a0c56d-4b50-4ee4-8b92-ac256f2331fe
But with create new tag too.
Please let me know if I can add it.
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,477,351,586 | ui | [feat]: Datepicker component disabled prop. | ### Feature description
Add a disabled prop to the date picker component.
### Affected component/components
Date Picker
### Additional Context

### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,477,364,018 | langchain | Langchain document loader giving "Resource punkt_tab not found" error | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
loader = AzureBlobStorageFileLoader(
conn_str=conn_str,
container=container,
blob_name=blob,
)
document = loader.load()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
- I am trying to use LangChain to load documents using `AzureBlobStorageFileLoader`.
- When loading the document I get a "Resource punkt_tab not found" error from nltk that seems to originate upstream of LangChain.
- I could fix the problem temporarily by downgrading nltk: `nltk == 3.8.1`

### System Info
langchain==0.2.12
langchain-community==0.2.11
langchain-core==0.2.29
langchain-experimental==0.0.36
langchain-text-splitters==0.2.2
Platform: Ubuntu WSL2 on Windows 10
Containerisation: Docker version 27.0.2, build 912c1dd
Python: Python 3.10.12 | Ɑ: doc loader,🤖:bug | low | Critical |
2,477,393,071 | rust | #[inline(never)] does not work for async functions |
I would expect [the following code](https://godbolt.org/z/35KMqqWjP) to produce a `Future::poll()` impl with the `noinline` attribute.
```rust
#![feature(noop_waker)]
use std::task::{Context, Waker, Poll};
use std::future::Future;
#[inline(never)]
pub async fn foo() -> u32 {
std::hint::black_box(1)
}
pub fn bar() -> Poll<u32> {
let f = std::pin::pin!(foo());
f.poll(&mut Context::from_waker(Waker::noop()))
}
```
Instead, the poll method has `inlinehint` applied to it and gets inlined accordingly:
```llvm
; This is the function creating the async fn obj:
; Function Attrs: noinline nonlazybind uwtable
define i8 @_RNvCs8Nh0J9OTdDr_15async_fn_inline3foo() unnamed_addr #2 {
%1 = alloca [1 x i8], align 1
store i8 0, ptr %1, align 1
%2 = load i8, ptr %1, align 1
ret i8 %2
}
; This the poll method:
; Function Attrs: inlinehint nonlazybind uwtable <== this should arguably be noinline too
define internal { i32, i32 } @_RNCNvCs8Nh0J9OTdDr_15async_fn_inline3foo0B3_(...)
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (636d7ff91 2024-08-19)
binary: rustc
commit-hash: 636d7ff91b9847d6d43c7bbe023568828f6e3246
commit-date: 2024-08-19
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
| A-LLVM,A-codegen,T-lang,C-bug,disposition-merge,finished-final-comment-period,A-async-await,AsyncAwait-Triaged,S-has-mcve | medium | Critical |
2,477,396,173 | vscode | Flicker'ish test tree | * vscode selfhost
* filter text tree by `inlineChat`
* select run all
* 🐛 the progress items flicker a lot
https://github.com/user-attachments/assets/f7687f11-d796-4865-97d6-8550f49451f7
| polish,testing | low | Minor |
2,477,397,858 | pytorch | [inductor][cpu]performance regression in 2024-08-18 nightly release | ### 🐛 Describe the bug
<p>fp32 static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>sebotnet33ts_256</td>
<td>single</td>
<td>1</td>
<td>0.941899</td>
<td>0.100319376</td>
<td>0.094490719935024</td>
<td>32.881608</td>
<td>1</td>
<td>1.094253</td>
<td>0.087828729</td>
<td>0.09610685019443699</td>
<td>33.36887</td>
<td>0.86</td>
<td>1.02</td>
<td>0.88</td>
<td>1.01</td>
</tr>
</tbody>
</table>
<p>fp32 dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>sebotnet33ts_256</td>
<td>single</td>
<td>1</td>
<td>0.946758</td>
<td>0.100703223</td>
<td>0.095341582001034</td>
<td>32.874255</td>
<td>1</td>
<td>1.084058</td>
<td>0.088318301</td>
<td>0.095742160745458</td>
<td>33.333709</td>
<td>0.87</td>
<td>1.0</td>
<td>0.88</td>
<td>1.01</td>
</tr>
</tbody>
</table>
<p>fp32 static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>sebotnet33ts_256</td>
<td>single</td>
<td>1</td>
<td>0.968331</td>
<td>0.09805517999999999</td>
<td>0.09494987050458</td>
<td>32.854349</td>
<td>1</td>
<td>1.110006</td>
<td>0.085936147</td>
<td>0.095389638786882</td>
<td>34.213632</td>
<td>0.87</td>
<td>1.0</td>
<td>0.88</td>
<td>1.04</td>
</tr>
</tbody>
</table>
<p>fp32 dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>sebotnet33ts_256</td>
<td>single</td>
<td>1</td>
<td>0.961279</td>
<td>0.099319203</td>
<td>0.095473464140637</td>
<td>33.499412</td>
<td>1</td>
<td>1.093006</td>
<td>0.08757168900000001</td>
<td>0.095716381507134</td>
<td>34.693043</td>
<td>0.88</td>
<td>1.0</td>
<td>0.88</td>
<td>1.04</td>
</tr>
</tbody>
</table>
<p>amp static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>Super_SloMo</td>
<td>multiple</td>
<td>6</td>
<td>1.095528</td>
<td>0.288202603</td>
<td>0.31573402125938405</td>
<td>46.452687</td>
<td>6</td>
<td>1.838708</td>
<td>0.171210137</td>
<td>0.31480544858299603</td>
<td>45.100434</td>
<td>0.6</td>
<td>1.0</td>
<td>0.59</td>
<td>0.97</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.390787</td>
<td>0.028069921999999997</td>
<td>0.010969360608613999</td>
<td>24.996125</td>
<td>64</td>
<td>0.924846</td>
<td>0.013657453</td>
<td>0.012631040777238</td>
<td>25.521037</td>
<td>0.42</td>
<td>1.15</td>
<td>0.49</td>
<td>1.02</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.443432</td>
<td>0.028011531000000003</td>
<td>0.012421209214392001</td>
<td>8.736755</td>
<td>64</td>
<td>1.038116</td>
<td>0.01368588</td>
<td>0.01420753100208</td>
<td>8.789735</td>
<td>0.43</td>
<td>1.14</td>
<td>0.49</td>
<td>1.01</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>single</td>
<td>1</td>
<td>4.277774</td>
<td>0.0021903350000000003</td>
<td>0.009369758114290002</td>
<td>24.45413</td>
<td>1</td>
<td>4.88962</td>
<td>0.00191102</td>
<td>0.0093441616124</td>
<td>24.958059</td>
<td>0.87</td>
<td>1.0</td>
<td>0.87</td>
<td>1.02</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>single</td>
<td>1</td>
<td>4.183307</td>
<td>0.0022201729999999998</td>
<td>0.009287665252111</td>
<td>8.271827</td>
<td>1</td>
<td>5.063267</td>
<td>0.001907772</td>
<td>0.009659559011124</td>
<td>8.373175</td>
<td>0.83</td>
<td>1.04</td>
<td>0.86</td>
<td>1.01</td>
</tr>
</tbody>
</table>
<p>amp dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>Super_SloMo</td>
<td>multiple</td>
<td>6</td>
<td>0.932181</td>
<td>0.443164155</td>
<td>0.413109205172055</td>
<td>106.060916</td>
<td>6</td>
<td>1.774123</td>
<td>0.23200224600000002</td>
<td>0.41160052068025804</td>
<td>100.355055</td>
<td>0.53</td>
<td>1.0</td>
<td>0.52</td>
<td>0.95</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.404284</td>
<td>0.036586737</td>
<td>0.014791432381308</td>
<td>51.458506</td>
<td>64</td>
<td>0.698939</td>
<td>0.021733404</td>
<td>0.015190323658356</td>
<td>52.778067</td>
<td>0.58</td>
<td>1.03</td>
<td>0.59</td>
<td>1.03</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.445277</td>
<td>0.036695756</td>
<td>0.016339776144412</td>
<td>20.359369</td>
<td>64</td>
<td>0.772134</td>
<td>0.021906202</td>
<td>0.016914523375067998</td>
<td>20.647732</td>
<td>0.58</td>
<td>1.04</td>
<td>0.6</td>
<td>1.01</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>single</td>
<td>1</td>
<td>4.546662</td>
<td>0.003640919</td>
<td>0.016554028062378</td>
<td>45.433577</td>
<td>1</td>
<td>5.576555</td>
<td>0.003036971</td>
<td>0.016935835814904997</td>
<td>45.576202</td>
<td>0.82</td>
<td>1.02</td>
<td>0.83</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>single</td>
<td>1</td>
<td>4.579087</td>
<td>0.0036851370000000002</td>
<td>0.016874562929919002</td>
<td>15.097965</td>
<td>1</td>
<td>5.653265</td>
<td>0.0031027990000000003</td>
<td>0.017540944988735003</td>
<td>15.329049</td>
<td>0.81</td>
<td>1.04</td>
<td>0.84</td>
<td>1.02</td>
</tr>
</tbody>
</table>
<p>amp static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>Super_SloMo</td>
<td>multiple</td>
<td>6</td>
<td>1.080169</td>
<td>0.223970148</td>
<td>0.24192561079501199</td>
<td>39.103067</td>
<td>6</td>
<td>1.70284</td>
<td>0.14367540099999998</td>
<td>0.24465621983883995</td>
<td>37.379275</td>
<td>0.63</td>
<td>1.01</td>
<td>0.64</td>
<td>0.96</td>
</tr>
</tbody>
</table>
<p>amp dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>Super_SloMo</td>
<td>multiple</td>
<td>6.0</td>
<td>1.080956</td>
<td>0.227053398</td>
<td>0.245434732888488</td>
<td>47.118701</td>
<td>6</td>
<td>1.616331</td>
<td>0.15090221099999998</td>
<td>0.24390792160784097</td>
<td>45.247088</td>
<td>0.67</td>
<td>0.99</td>
<td>0.66</td>
<td>0.96</td>
</tr>
</tbody>
</table>
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>23512dbe</td>
<td>main</td>
<td>23512dbe</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>ae000635700e78161e0ed1a18f62b5db4030e343</td>
<td>main</td>
<td>a7912bf9dc39b934baf5e04b436cc2134776c10d</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.4.0a0+b3f6f51</td>
<td>main</td>
<td>2.4.0a0+b3f6f51</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh **thread** inference performance **suite** **model** float32/amp first static/dynamic default/cpp
Suspected guilty commit: 99b3b58f39507bb8ad5b4bb1b9bedf7f47b64fa3
[torchbench-Super_SloMo-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/16688400/torchbench-Super_SloMo-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log)
cc @ezyang @chauhang @penguinwu @WeizhuoZhang-intel @chuanqi129 | triaged,oncall: pt2,oncall: cpu inductor | low | Critical |
2,477,406,283 | PowerToys | ScreenNotes, Persistent transparent sketch layer(s) | ### Description of the new feature / enhancement
Always-on-top transparent canvas, disabled from interaction.
Hotkey1, Create or toggle Enabled overlay, screen extents, Show palette of tools.
- Dock screen, Full
Hotkey2, Create or toggle Enabled overlay, window extents, Show palette of tools.
- Dock window, Full
Palette of tools primarily colour highlighter, but could easily be expanded with a multitude of ideas.
### Scenario when this would be used?
When working in programs where you want to "complete" different tasks, but the program itself cannot structure the operations, it would be a nice solution to colour-highlight the parts that are done, or to leave a reminder that something in this area is still left to do.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,477,469,535 | ui | [feat]: Skeleton component with auto-sizing | ### Feature description
Radix Themes got this super nice skeleton component which has auto-sizing based on the children and a controllable state:
https://www.radix-ui.com/themes/docs/components/skeleton
Feels overkill installing Radix Themes just for this one component though
Usage:
```tsx
<Skeleton>
{Number(500_000).toLocaleString()}
</Skeleton>
```
No need to manually specify the size, just put some placeholder data and the skeleton adjusts accordingly
With controllable state:
```tsx
<Skeleton loading={false}>
{Number(500_000).toLocaleString()}
</Skeleton>
```
would display the actual number
### Affected component/components
Skeleton
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,477,477,797 | vscode | Git - wrong handling of .gitignore that contains the folder name | Type: <b>Bug</b>
If I create a folder named `test` containing a `test/.gitignore` file with the entry `test`, then VS Code ignores the entire folder, while it should only ignore a file named `test/test`.
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Windows_NT x64 10.0.22631
Modes:
Remote OS version: Linux x64 4.4.0-210-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-1235U (12 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.73GB (6.01GB free)|
|Process Argv|--crash-reporter-id 39b50b30-0b69-4d2a-85a9-0dd4641e19df|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: xangrila-hub-linux|
|OS|Linux x64 4.4.0-210-generic|
|CPUs|Common KVM processor (16 x 0)|
|Memory (System)|62.92GB (28.72GB free)|
|VM|0%|
</details><details><summary>Extensions (13)</summary>
Extension|Author (truncated)|Version
---|---|---
Bookmarks|ale|13.5.0
remote-ssh|ms-|0.113.1
remote-ssh-edit|ms-|0.86.0
remote-explorer|ms-|0.4.3
gitlens|eam|15.3.0
vscode-docker|ms-|1.29.2
debugpy|ms-|2024.10.0
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.1
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
ocaml-platform|oca|1.20.0
markdown-all-in-one|yzh|3.6.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
01bff139:31013167
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31104044
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31119336
wkspc-onlycs-c:31111717
wkspc-ranged-t:31118572
fje88620:31121564
```
</details>
<!-- generated by issue reporter --> | bug,git | low | Critical |
2,477,480,260 | TypeScript | Unions of assertion functions do not act as assertion functions | ### 🔎 Search Terms
narrowing, assertion function
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about everything
### ⏯ Playground Link
https://tsplay.dev/Wv3DYw
### 💻 Code
```ts
class box<T> {
constructor(public value: T){}
check(): this is box<string> {
return typeof this.value == 'string';
}
assert(): asserts this is box<string> {
if (typeof this.value != 'string') throw new Error();
}
private test() {
this.assert();
// type correctly narrowed
this.value.substring(0);
}
}
function make() : box<string> | box<number> {
return new box('a');
}
function assert(b: box<string> | box<number>): asserts b is box<string> {
if (typeof b.value != 'string') throw new Error();
}
const b = make();
if (b.check()) {
// type correctly narrowed
b.value.substring(0);
}
b.assert();
// type not narrowed (substring does not exist on type 'string | number')
b.value.substring(0);
assert(b);
// type correctly narrowed
b.value.substring(0);
```
### 🙁 Actual behavior
Type is not narrowed after method call
### 🙂 Expected behavior
I expected the type to be narrowed after method call since it is narrowed within other methods or when using a function external to the type.
### Additional information about the issue
https://stackoverflow.com/questions/78879014/typescript-type-assertion-does-not-narrow-class-instance-from-union | Bug,Help Wanted | low | Critical |
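A sketch of the workaround the report itself demonstrates: because a free function with an assertion signature narrows a union-typed argument while the equivalent method call does not, the method can be wrapped once in a standalone function and reused. Names here are illustrative, not part of the original report:

```typescript
// Minimal reproduction of the workaround: wrap the assertion method
// in a free function whose signature asserts on its parameter.
class Box<T> {
  constructor(public value: T) {}
  assertString(): asserts this is Box<string> {
    if (typeof this.value !== "string") throw new Error("not a string box");
  }
}

// TypeScript applies this assertion even when the argument's type is a union.
function assertStringBox(b: Box<string> | Box<number>): asserts b is Box<string> {
  b.assertString();
}

const b: Box<string> | Box<number> = new Box("a");
assertStringBox(b);
// After the call, `b` is narrowed to Box<string>:
const first: string = b.value.substring(0, 1);
```

This keeps the assertion logic in the class while restoring narrowing at union-typed call sites.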
2,477,503,979 | pytorch | libtorch inference of the CRNN model is abnormally slow for the first few runs | ### 🐛 Describe the bug
```cpp
std::string torch_model_path = model_path + ".pt";
torch::DeviceType device_type = torch::kCUDA;
torch::Device device(device_type, gpuid);

// Load the TorchScript module directly onto the GPU
torch::jit::script::Module net = torch::jit::load(torch_model_path, device);

tensor_image = tensor_image.to(torch::kCUDA);
torch::NoGradGuard no_grad;
at::Tensor output = net.forward({ tensor_image }).toTensor();
output = output.to(torch::kCPU);
```
Without `torch::NoGradGuard no_grad`, the first inference takes more than 300 ms, the second 47 ms, and the third more than 2000 ms, after which inference times are normal. With `torch::NoGradGuard no_grad`, the first inference takes more than 300 ms and the second more than 2000 ms, after which inference is normal. How can this problem be solved?
### Versions
libtorch2.0.1 + cuda11.8 + windows + vs2017
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jbschlosser | oncall: jit,module: cpp,triaged | low | Critical |