| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,652,466,790 | vscode | GetBoundingClientRect zoom adjustments | With https://chromium-review.googlesource.com/c/chromium/src/+/5085708, which was enabled by default in Chromium 128 via https://chromium-review.googlesource.com/c/chromium/src/+/5693338, the rect values from `getBoundingClientRect` calls are adjusted to the page zoom rather than CSS zoom values.
I have listed the call sites that need confirmation that they are prepared for this feature.
- [x] [extensions/markdown-language-features/preview-src/scroll-sync.ts](https://github.com/microsoft/vscode/blob/main/extensions/markdown-language-features/preview-src/scroll-sync.ts)
- [x] [vs/base/browser/dom.ts](https://github.com/microsoft/vscode/blob/main/src/vs/base/browser/dom.ts)
- [x] [vs/base/browser/iframe.ts](https://github.com/microsoft/vscode/blob/main/src/vs/base/browser/iframe.ts)
- [x] [vs/base/browser/ui/menu/menu.ts](https://github.com/microsoft/vscode/blob/main/src/vs/base/browser/ui/menu/menu.ts)
- [x] [vs/base/browser/ui/menu/menubar.ts](https://github.com/microsoft/vscode/blob/main/src/vs/base/browser/ui/menu/menubar.ts)
- [x] [vs/editor/browser/editorDom.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/editorDom.ts)
- [x] [vs/editor/browser/controller/mouseTarget.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/controller/mouseTarget.ts)
- [x] [vs/editor/browser/controller/editContext/native/nativeEditContext.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/controller/editContext/native/nativeEditContext.ts)
- [x] [vs/editor/browser/services/hoverService/hoverWidget.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/services/hoverService/hoverWidget.ts)
- [x] [vs/editor/browser/viewParts/contentWidgets/contentWidgets.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/viewParts/contentWidgets/contentWidgets.ts)
- [x] [vs/editor/browser/viewParts/minimap/minimap.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/viewParts/minimap/minimap.ts)
- [x] [vs/editor/browser/viewParts/viewLines/domReadingContext.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/browser/viewParts/viewLines/domReadingContext.ts)
- [x] [vs/editor/contrib/suggest/browser/suggestWidgetDetails.ts](https://github.com/microsoft/vscode/blob/main/src/vs/editor/contrib/suggest/browser/suggestWidgetDetails.ts)
- [x] [vs/platform/actionWidget/browser/actionList.ts](https://github.com/microsoft/vscode/blob/main/src/vs/platform/actionWidget/browser/actionList.ts)
- [x] [vs/workbench/contrib/welcomeWalkthrough/browser/walkThroughPart.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/welcomeWalkthrough/browser/walkThroughPart.ts)
- [x] [vs/workbench/contrib/webview/browser/webviewElement.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/webview/browser/webviewElement.ts)
- [x] [vs/workbench/contrib/webview/browser/overlayWebview.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/webview/browser/overlayWebview.ts)
- [x] [vs/workbench/contrib/update/browser/releaseNotesEditor.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/update/browser/releaseNotesEditor.ts)
- [x] [vs/workbench/contrib/terminalContrib/suggest/browser/terminalSuggestAddon.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminalContrib/suggest/browser/terminalSuggestAddon.ts)
- [x] [vs/workbench/contrib/terminalContrib/stickyScroll/browser/terminalStickyScrollOverlay.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminalContrib/stickyScroll/browser/terminalStickyScrollOverlay.ts)
- [x] [vs/workbench/contrib/terminalContrib/quickFix/browser/quickFixAddon.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminalContrib/quickFix/browser/quickFixAddon.ts)
- [x] [vs/workbench/browser/parts/compositeBar.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/parts/compositeBar.ts)
- [x] [vs/workbench/browser/parts/compositeBarActions.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/parts/compositeBarActions.ts)
- [x] [vs/workbench/browser/parts/editor/editorPart.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/parts/editor/editorPart.ts)
- [x] [vs/workbench/browser/parts/editor/multiEditorTabsControl.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/parts/editor/multiEditorTabsControl.ts)
- [x] [vs/workbench/browser/parts/views/viewPaneContainer.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/parts/views/viewPaneContainer.ts)
- [x] [vs/workbench/contrib/debug/browser/debugEditorContribution.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/debug/browser/debugEditorContribution.ts)
- [x] [vs/workbench/contrib/notebook/browser/notebookEditorWidget.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/notebook/browser/notebookEditorWidget.ts)
- [x] [vs/workbench/contrib/notebook/browser/view/cellParts/cellDnd.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/notebook/browser/view/cellParts/cellDnd.ts)
- [x] [vs/workbench/contrib/notebook/browser/view/renderers/backLayerWebView.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/notebook/browser/view/renderers/backLayerWebView.ts)
- [x] [vs/workbench/contrib/notebook/browser/view/renderers/webviewPreloads.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/notebook/browser/view/renderers/webviewPreloads.ts)
- [x] [vs/workbench/contrib/terminal/browser/terminalConfigurationService.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminal/browser/terminalConfigurationService.ts)
- [x] [vs/workbench/contrib/terminal/browser/terminalInstance.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminal/browser/terminalInstance.ts)
- [x] [vs/workbench/contrib/terminalContrib/commandGuide/browser/terminal.commandGuide.contribution.ts](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminalContrib/commandGuide/browser/terminal.commandGuide.contribution.ts) | debt,engineering | low | Critical |
2,652,477,785 | PowerToys | Accessibility request | ### Description of the new feature / enhancement
Simple option to play two sounds: One when your webcam turns on, and the other when it turns off.
### Scenario when this would be used?
Just an audio alert for blind users notifying them when their camera is in use/off.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,652,512,455 | godot | [Regression] ResourceLoader.LoadThreadedRequest(scenePath) stuck at 0.5, infinite loop. | ### Tested versions
Not reproducible in:
v4.3 stable for Windows
Reproducible in:
v4.3 stable for macOS
v4.4.dev4.mono.official [36e6207bb] (Windows and macOS)
### System information
Godot v4.4.dev4.mono - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 3700X 8-Core Processor (16 threads)
### Issue description
When using
```csharp
ResourceLoader.LoadThreadedRequest(scenePath);
```
This happened before and it looks like it's still happening:
https://github.com/godotengine/godot/issues/92844#issuecomment-2167849384
NOTE:
I'm calling the rendering server when loading TileSets, but I'm doing a `RenderingServer.ForceSync()`.
I'm getting stuck at 0.5 progress and the progress never changes. Maybe it's a recursive load, or a race condition; I don't know. My debugger points here:

### Steps to reproduce
I'm loading the instance like this:
On macOS it doesn't work at all; it is always stuck (that's why we did the load on the same thread). On Windows there is a race condition, so sometimes it works and sometimes it doesn't.
```C#
public partial class Main2 : Node2D
{
    public override void _Ready()
    {
        // Construct the path to the scene file
        var scenePath = $"res://main.tscn";
        var error = ResourceLoader.LoadThreadedRequest(scenePath);
        GD.PrintErr($"Loading scene: {scenePath} {error}");
        var percentageReport = new Godot.Collections.Array();
        double percentage;
        ResourceLoader.ThreadLoadStatus loadingStatus;
        do
        {
            loadingStatus = ResourceLoader.LoadThreadedGetStatus(scenePath, percentageReport);
            if (loadingStatus == ResourceLoader.ThreadLoadStatus.Failed)
            {
                GD.PrintErr($"Failed to load scene at path: {scenePath}");
                return;
            }
            percentage = percentageReport[0].As<double>();
            GD.PrintErr($"Loading scene: loading: {loadingStatus} scene: {scenePath} percentage: {percentage} {percentageReport}");
            RenderingServer.ForceSync();
        } while (loadingStatus == ResourceLoader.ThreadLoadStatus.InProgress);
        GD.PrintErr($"Finished loading scene: {scenePath} {loadingStatus} {percentage}");
    }
}
```
### Minimal reproduction project (MRP)
[bug-thread.zip](https://github.com/user-attachments/files/17846554/bug-thread.zip)
| bug,topic:core,topic:dotnet,regression | low | Critical |
2,652,518,902 | vscode | The editor could not be opened due to an unexpected error: Unable to resolve text model content for resource vscode-notebook-cell-output |
Type: <b>Bug</b>
I cannot figure out what the problem is that keeps stopping the run of a specific cell. The cell needs to run for 100 epochs, but after many hours of running it stops showing any progress at epoch 0 and finally fails with an error I cannot see. I updated the JSON settings file, but that changed nothing.
{
  "jupyter.jupyterCommandLineArguments": [],
  "interactiveWindow.executeWithShiftEnter": true,
  "jupyter.widgetScriptSources": [
    "jsdelivr.com",
    "unpkg.com"
  ],
  "files.autoSave": "afterDelay",
  "json.schemas": [],
  "notebook.output.scrolling": true,
  "notebook.stickyScroll.enabled": true,
  "notebook.output.textLineLimit": 20000,
  "auto_scroll_threshold": 9999
}
VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 x 3793)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|35.69GB (20.56GB free)|
|Process Argv|--crash-reporter-id 1013f2d0-c5bf-4e17-ae09-cff9e0b7b7b5|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (17)</summary>
Extension|Author (truncated)|Version
---|---|---
my-nbpreviewer|col|1.2.2
remotehub|Git|0.64.0
debugpy|ms-|2024.12.0
python|ms-|2024.18.1
vscode-pylance|ms-|2024.11.1
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
azure-repos|ms-|0.40.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
remote-repositories|ms-|0.42.0
jupyter-notebook-vscode|sam|0.0.2
cmake|twx|0.0.17
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
```
</details>
<!-- generated by issue reporter --> | bug,notebook | low | Critical |
2,652,564,399 | PowerToys | Can't Uninstall Older Versions of PowerToys on Windows 11 | ### Microsoft PowerToys version
0.74.0 and 0.79.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Installer
### Steps to reproduce
I have recently been uninstalling apps I don't use to free up space.
I found 3 PowerToys apps, which I thought was strange, so I uninstalled the newest version and then tried to uninstall the other two older versions (0.74.0 and 0.79.0).
### ✔️ Expected Behavior
I was expecting the apps to uninstall.
### ❌ Actual Behavior
The newest version had a visible icon, while the others did not. The newest version uninstalled, but the older versions (0.74.0 and 0.79.0) did not.
The Windows message after trying to uninstall 0.74.0 is this:
Windows cannot find 'C:\ProgramData\PackageCache\(96cbcdad...'

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,652,637,117 | next.js | VSCode debugging error when using Yarn PnP with NextJS 15 | ### Link to the code that reproduces this issue
https://github.com/IsaacAndela/nextjs-yarn-vscode-debug-error
### To Reproduce
1. Create a new NextJS app with Yarn (`yarn dlx create-next-app@canary`) and accept the default answer to all questions.
2. This should result in `.pnp.cjs`, `.pnp.loader.mjs` and `.yarn` in the root of the new project. If not, run `yarn install`.
3. Open the new project in Visual Studio Code.
4. Create a `.vscode/launch.json` file with the following content:
```jsonc
{
  "version": "0.2.0",
  "configurations": [
    {
      // Basically the same server-side debugging configuration as the official documentation:
      // https://nextjs.org/docs/app/building-your-application/configuring/debugging#debugging-with-vs-code
      "name": "Type: Node Terminal",
      "type": "node-terminal",
      "request": "launch",
      "command": "yarn run dev"
    },
    {
      // An alternative way to configure the debugging.
      // This doesn't work either with NextJS 15.
      "name": "Type: Node",
      "request": "launch",
      "runtimeExecutable": "yarn",
      "runtimeArgs": ["run", "dev"],
      "type": "node",
      // This is only here to make debugging easier.
      // The error can also be reproduced without the
      // integrated terminal.
      "console": "integratedTerminal"
    }
  ]
}
```
5. Run `yarn run dev` in the terminal. This should work fine.
6. In VSCode run `Debug: Select and Start Debugging` from the Command Palette (`cmd+shift+p` or `ctrl-shift-p`) and choose either launch configuration.
7. This should result in the following error:
```
node:internal/modules/cjs/loader:1252
  throw err;

Error: Cannot find module '/Users/me/nextjs-yarn-vscode-debug-error/.pnp.cjs /Users/me/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/ms-vscode.js-debug/src/bootloader.js'
Require stack:
- internal/preload
    at Function._resolveFilename (node:internal/modules/cjs/loader:1249:15)
    at Function._load (node:internal/modules/cjs/loader:1075:27)
    at TracingChannel.traceSync (node:diagnostics_channel:315:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)
    at Module.require (node:internal/modules/cjs/loader:1340:12)
    at node:internal/modules/cjs/loader:1824:12
    at loadPreloadModules (node:internal/process/pre_execution:729:5)
    at setupUserModules (node:internal/process/pre_execution:207:5)
    at prepareExecution (node:internal/process/pre_execution:160:5)
    at prepareMainThreadExecution (node:internal/process/pre_execution:55:10) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [ 'internal/preload' ]
}
```
### Current vs. Expected behavior
I expect to be able to debug my NextJS 15 application in VSCode using Yarn PnP.
However this results in the following error on startup:
```
node:internal/modules/cjs/loader:1252
  throw err;

Error: Cannot find module '/Users/me/nextjs-yarn-vscode-debug-error/.pnp.cjs /Users/me/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/ms-vscode.js-debug/src/bootloader.js'
Require stack:
- internal/preload
    at Function._resolveFilename (node:internal/modules/cjs/loader:1249:15)
    at Function._load (node:internal/modules/cjs/loader:1075:27)
    at TracingChannel.traceSync (node:diagnostics_channel:315:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)
    at Module.require (node:internal/modules/cjs/loader:1340:12)
    at node:internal/modules/cjs/loader:1824:12
    at loadPreloadModules (node:internal/process/pre_execution:729:5)
    at setupUserModules (node:internal/process/pre_execution:207:5)
    at prepareExecution (node:internal/process/pre_execution:160:5)
    at prepareMainThreadExecution (node:internal/process/pre_execution:55:10) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [ 'internal/preload' ]
}
```
### Provide environment information
```bash
Operating System:
  Platform: darwin
  Arch: x64
  Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64
  Available memory (MB): 32768
  Available CPU cores: 12
Binaries:
  Node: 22.11.0
  npm: 10.9.0
  Yarn: 4.5.1
  pnpm: N/A
Relevant Packages:
  next: 15.0.4-canary.6
  eslint-config-next: N/A
  react: 19.0.0-rc-66855b96-20241106
  react-dom: 19.0.0-rc-66855b96-20241106
  typescript: N/A
Next.js Config:
  output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
It worked fine in NextJS 14 | bug | low | Critical |
2,652,650,949 | PowerToys | Automatic Do-Not-Disturb During Zoom / Teams Meetings | ### Description of the new feature / enhancement
Currently, there is no option in Windows to automatically turn on Do Not Disturb during Zoom and Microsoft Teams conference calls. Emails, chats, and notifications during conference calls are very disruptive. With many conference calls throughout the day, it's a pain to turn DND on and off manually.
### Scenario when this would be used?
During online meeting / conference calls
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,652,656,977 | rust | Tracking Issue for thread spawn hooks | Feature gate: `#![feature(thread_spawn_hook)]`
This is a tracking issue for thread spawn hooks as proposed in https://github.com/rust-lang/rfcs/pull/3642
### Public API
```rust
// std::thread:

pub fn add_spawn_hook<F, G>(hook: F)
where
    F: 'static + Send + Sync + Fn(&Thread) -> G,
    G: 'static + Send + FnOnce();

impl Builder {
    pub fn no_hooks(mut self) -> Builder;
}
```
### Steps / History
- [x] RFC: https://github.com/rust-lang/rfcs/pull/3642
- [x] Implementation: https://github.com/rust-lang/rust/pull/125405
- [ ] Final comment period (FCP)
- [ ] Stabilization PR
### Unresolved Questions
- Should the return value of the hook be an Option, for when the hook does not require any code to be run in the child?
- Should the hook be able to access/configure more information about the child thread? E.g. set its stack size. | T-libs-api,A-thread-locals,C-tracking-issue,A-thread | low | Minor |
2,652,664,851 | vscode | Unexpected indentation | There is unexpected indentation on the first comment line:

from https://bsky.app/profile/hailey.at/post/3lahndapjdk2j | bug,editor-autoindent | low | Minor |
2,652,686,562 | langchain | DOC: Add Detailed Dependency Table to v0.2 Documentation for Better Compatibility | ### URL
https://python.langchain.com/docs/versions/v0_2/overview/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current v0.2 LangChain documentation doesn’t include a structured dependency table, like the one in the v0.3 documentation. This makes it challenging for developers who want to keep their applications on v0.2 to understand which dependencies could cause compatibility issues or potential code breaks.
### Idea or request for content:
It would be helpful to include a dependency table in the v0.2 documentation, detailing recommended version constraints for each package. This addition would allow developers to manage dependencies more effectively while maintaining compatibility with v0.2. | 🤖:docs | low | Minor |
2,652,710,827 | vscode | "rg" is consuming MAX CPU cycles |
Type: <b>Bug</b>
With this latest update, I am experiencing an issue where, after launching VSC and attaching to WSL, a program called "rg" gets started and consumes > 900% CPU (I have a 14C/20T Intel i7 part); this basically "kills" the setup. I manually kill it until a new window is opened, then it fires off again.
I suspect that this is tied to one of the parsers installed as an extension. I have modified the setup to ignore certain files in our setup (build artifacts), but it continues to hog the system.
Thank you for your assistance.
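For context, "rg" is the ripgrep binary that backs VS Code's text search, and build artifacts can typically be scoped out of both search and the file watcher with workspace settings along these lines (a sketch; the `**/build/**` glob is a placeholder for the actual artifact directories):

```jsonc
{
  // Placeholder glob: replace "build" with the real artifact directories.
  "search.exclude": { "**/build/**": true },
  "files.watcherExclude": { "**/build/**": true },
  "search.followSymlinks": false
}
```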
VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Windows_NT x64 10.0.22631
Modes:
Remote OS version: Linux x64 5.15.153.1-microsoft-standard-WSL2
Remote OS version: Linux x64 5.15.153.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-13800H (20 x 2918)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.64GB (35.99GB free)|
|Process Argv|--crash-reporter-id 90370f8e-56c9-402a-9c1d-f300e5e4e3ea|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu|
|OS|Linux x64 5.15.153.1-microsoft-standard-WSL2|
|CPUs|13th Gen Intel(R) Core(TM) i7-13800H (20 x 0)|
|Memory (System)|47.05GB (42.10GB free)|
|VM|0%|
|Item|Value|
|---|---|
|Remote|Container ghcr.io/etn-ccis/brtlr-edge/linux-builder-blel-dev:ubuntu-20.04 (pxred-kvm-gateway-full)|
|OS|Linux x64 5.15.153.1-microsoft-standard-WSL2|
|CPUs|13th Gen Intel(R) Core(TM) i7-13800H (20 x 0)|
|Memory (System)|47.05GB (42.10GB free)|
|VM|0%|
</details><details><summary>Extensions (13)</summary>
Extension|Author (truncated)|Version
---|---|---
better-cpp-syntax|jef|1.27.1
autoconf|mae|0.2.0
jupyter-keymap|ms-|1.1.2
remote-containers|ms-|0.388.0
remote-ssh|ms-|0.115.0
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
brackets-keybindings|ms-|0.1.1
notepadplusplus-keybindings|ms-|1.0.7
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
flexlm-license-file|sek|0.3.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
01bff139:31013167
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
```
</details>
<!-- generated by issue reporter --> | info-needed,performance | low | Critical |
2,652,759,982 | pytorch | [DDP + Dynamo] Failed to get compiled graphs | ### 🐛 Describe the bug
There is a popular method of getting compiled graphs using a custom backend.
E.g.
```python
def my_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    print("graph:")
    gm.graph.print_tabular()
    return gm.forward

model = nn.Linear(10, 10)
model = torch.compile(model, backend=my_backend)
```
However, when I use the DDP module, this backend is not called (without the DDP everything works fine).
Moreover, when running with the `TORCH_COMPILE_DEBUG=1` variable, most of the debug information is not displayed when DDP is used.
My code:
```python
def my_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    print('graph:')
    gm.graph.print_tabular()
    return gm.forward

def linear(rank, world_size, model):
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    inputs = torch.randn(20, 100)
    labels = torch.randn(20, 100)
    model = model.to(rank)
    model = DDP(model, device_ids=[rank])
    model = torch.compile(model, backend=my_backend)
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)
    for i in range(2):
        outputs = model(inputs.to(rank))
        labels = labels.to(rank)
        loss_fn(outputs, labels).backward()
        optimizer.step()

def main():
    model = nn.Sequential(nn.Linear(100, 100), nn.Linear(100, 100))
    world_size = 2
    mp.spawn(linear,
             args=(world_size, model),
             nprocs=world_size,
             join=True)

if __name__ == "__main__":
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    main()
```
Output:
```
> python3 ddp_compile.py
[rank0]:W1112 19:38:21.277000 139076842178368 torch/_logging/_internal.py:1034] [0/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
[rank1]:W1112 19:38:21.289000 133097161074496 torch/_logging/_internal.py:1034] [0/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
```
I guess this is incorrect behavior.
Anyway, is there any other way to get compiled graphs?
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) w5-2465X
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 8
CPU max MHz: 4700,0000
CPU min MHz: 800,0000
BogoMIPS: 6192.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (16 instances)
L3 cache: 33,8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] torchviz==0.0.2
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.5.39 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.6.82 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.8 py312h5eee18b_0
[conda] mkl_random 1.2.4 py312hdb19cb5_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] pytorch 2.4.0 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 py312_cu121 pytorch
[conda] torchtriton 3.0.0 py312 pytorch
[conda] torchvision 0.19.0 py312_cu121 pytorch
[conda] torchviz 0.0.2 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @ezyang | oncall: distributed,triaged,oncall: pt2,module: dynamo,pt2d-triage-nov2024 | low | Critical |
2,652,766,412 | rust | private-intra-doc-links incorrectly shows error for public type alias to private wrapper type | ```rust
pub struct WholeNumber(u32);
pub use bar::Integer;
mod foo {
// This is private
pub struct Signed<T>(pub T);
}
mod bar {
/// If you want to do things, try [`Integer::do_thing()`]
pub type Integer = crate::foo::Signed<crate::WholeNumber>;
impl Integer {
pub fn do_thing() {}
}
}
```
Shows
```
warning: public documentation for `Integer` links to private item `Integer::do_thing`
--> src/lib.rs:10:41
|
10 | /// If you want to do things, try [`Integer::do_thing()`]
| ^^^^^^^^^^^^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
= note: `#[warn(rustdoc::private_intra_doc_links)]` on by default
```
under `cargo doc`.
This is (a) incorrect: `Integer::do_thing()` is public and accessible even if `Signed` isn't, and (b) misleading: the problem is not the publicness of `do_thing()` but rather the fact that `Signed` is inaccessible. The diagnostic is confusing and doesn't help fix the problem.
I think this should probably be considered a false positive, but either way, the diagnostic ought to be clearer about what needs to be fixed here. | T-rustdoc,C-bug,A-intra-doc-links | low | Critical |
2,652,780,004 | pytorch | Code fails with "Expected curr_block->next == nullptr to be true, but got false" | ### 🐛 Describe the bug
We use torch compile with `reduce-overhead`, and after upgrading to torch 2.5.1 from 2.4.1 it started to fail with:
```
File "impl.py", line 406, in forward
position_ids=self.runtime.compile(_calc_position_ids)(
File "runtime.py", line 45, in _fn
result = _compiled(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "impl.py", line 589, in _calc_position_ids
def _calc_position_ids(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 321, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn
outs = compiled_fn(args)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper
return compiled_fn(runtime_args)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/codecache.py", line 1478, in __call__
return self.current_callable(inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1008, in run
return compiled_fn(new_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/cudagraph_trees.py", line 398, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/cudagraph_trees.py", line 428, in cudagraphify
return manager.add_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/cudagraph_trees.py", line 2213, in add_function
return fn, fn(inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/cudagraph_trees.py", line 1919, in run
out = self._run(new_inputs, function_id)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/cudagraph_trees.py", line 2021, in _run
self.apply_checkpoint_execution_state_in_allocator()
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/cudagraph_trees.py", line 2415, in apply_checkpoint_execution_state_in_allocator
torch._C._cuda_setCheckpointPoolState(
RuntimeError: Expected curr_block->next == nullptr to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
I tried to stop compiling this particular method, and then it started to fail in another compiled method with the same error.
It didn't reproduce when I changed mode from `reduce-overhead` to `max-autotune`.
We also supply `fullgraph=True` and `dynamic=False`.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H200
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 44
On-line CPU(s) list: 0-43
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 44
Stepping: 1
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.8 MiB (44 instances)
L1i cache: 2.8 MiB (44 instances)
L2 cache: 22 MiB (44 instances)
L3 cache: 704 MiB (44 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-43
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] torch==2.5.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @mcarilli @ezyang @eellison @penguinwu @chauhang | needs reproduction,triaged,module: cuda graphs,oncall: pt2 | low | Critical |
2,652,795,632 | langchain | TokenTextSplitter not loading up HF tokenizer from `.from_huggingface_tokenizer()`; using `gpt2` instead | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import TokenTextSplitter
tts = TokenTextSplitter(chunk_size=256, chunk_overlap=0).from_huggingface_tokenizer(tokenizer)
print(tts._tokenizer)
# output: <Encoding 'gpt2'>
tts = TokenTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=256, chunk_overlap=0)
print(tts._tokenizer)
# output: <Encoding 'gpt2'>
tts = tts.from_huggingface_tokenizer(tokenizer)
tts._tokenizer
# output: <Encoding 'gpt2'>
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* It should show the Hugging Face tokenizer
* It should use the Hugging Face tokenizer instead of the GPT-2 tokenizer
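For illustration, the observed behavior is consistent with the following simplified stand-in for the splitter classes (class and method names mirror LangChain's, but this is a hypothetical minimal reproduction of the suspected pattern, not the real implementation):

```python
# Hypothetical minimal stand-in for the suspected pattern; names mirror
# LangChain's, but this is NOT the real implementation.
class TextSplitter:
    def __init__(self, length_function=len, **kwargs):
        self._length_function = length_function

    @classmethod
    def from_huggingface_tokenizer(cls, tokenizer, **kwargs):
        # The base classmethod only wires the tokenizer into length_function...
        return cls(length_function=lambda text: len(tokenizer(text)), **kwargs)


class TokenTextSplitter(TextSplitter):
    def __init__(self, encoding_name="gpt2", **kwargs):
        super().__init__(**kwargs)
        # ...while the subclass builds its own encoder for the actual
        # splitting and never sees the passed-in tokenizer.
        self._tokenizer = encoding_name


hf_tokenizer = str.split  # stand-in for a Hugging Face tokenizer
tts = TokenTextSplitter.from_huggingface_tokenizer(hf_tokenizer)
print(tts._tokenizer)  # gpt2
```

If this reading is right, the Hugging Face tokenizer only affects chunk-length measurement, while `_tokenizer` (used for the actual token splitting) silently stays on the default `gpt2` encoding.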
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.16
> langchain: 0.3.7
> langchain_community: 0.3.6
> langsmith: 0.1.139
> langchain_experimental: 0.3.3
> langchain_huggingface: 0.1.2
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> huggingface-hub: 0.24.7
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.11
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.2.1
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tokenizers: 0.19.1
> transformers: 4.44.2
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,652,813,963 | ui | [bug]: Skeleton Animation not working (animate-pulse) | ### Describe the bug
Skeleton is not animated; this is also mentioned in another [issue](https://github.com/shadcn-ui/ui/issues/758). It appears that the `pulse` keyframes are not defined. I fixed it by adding the `pulse` keyframes in `tailwind.config.ts`:
```ts
keyframes: {
pulse: {
"50%": {
opacity: "0.5",
},
},
},
```
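For completeness, if the `animate-pulse` utility itself is also missing, a matching `animation` entry would be needed as well. This is a sketch (Tailwind normally ships both by default, so this only applies if the defaults were overridden); the values below are Tailwind's stock `pulse` definition:

```ts
// tailwind.config.ts (sketch — only needed if the defaults were overridden)
export default {
  theme: {
    extend: {
      keyframes: {
        pulse: {
          "50%": { opacity: "0.5" },
        },
      },
      animation: {
        pulse: "pulse 2s cubic-bezier(0.4, 0, 0.6, 1) infinite",
      },
    },
  },
};
```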
### Affected component/components
Skeleton
### How to reproduce
Install and use Skeleton component
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
"next": "15.0.3"
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,652,885,534 | TypeScript | slow paste with spinner | I have default paste settings on in insiders, but see a spinner every once in a while, for particular snippets of code
https://github.com/user-attachments/assets/59d36cd4-4cd8-4a00-9d60-aff32c7d86ac
| Needs Investigation | low | Major |
2,652,901,289 | ui | [bug]: cli tries to use yarn when I use pnpm | ### Describe the bug
```
pnpm dlx shadcn@latest add drawer
✔ Checking registry.
⠋ Installing dependencies.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: yarn add vaul @radix-ui/react-dialog
'yarn' is not recognized as an internal or external command,
operable program or batch file.
```
The same happened with the `init` command too.
### Affected component/components
*
### How to reproduce
1. `npx shadcn@latest init -d`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
i don't have yarn installed.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,652,921,659 | PowerToys | Data folder location | ### Description of the new feature / enhancement
PowerToys is a welcome addition. However, it adds two folders to the \Documents folder whether we want them there or not. Can you please make it possible for the user to specify where they are located?
### Scenario when this would be used?
It's important to me because of the way I copy files to my backups. Too many other programs also dump folders into \Documents.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,652,946,063 | flutter | Cupertino dialog horizontal divider is not showing correctly in dark mode | Regression introduced by #150410
The horizontal divider that separates the body of the Cupertino dialog from the actions is not picking up the right color in dark mode, so it appears darker than it should. The dividers between the actions are showing up correctly.
As shown here:

The left image is a native dialog, the right is a CupertinoAlertDialog on master.
More details [here](https://github.com/flutter/flutter/pull/157218#issuecomment-2466747739). | c: regression,framework,a: fidelity,f: cupertino,P2,team-design,triaged-design,f: theming | low | Minor |
2,652,952,513 | rust | add hint for E0401 when defining a const inside a func | ### Code
```Rust
struct Foo<const N: usize>;
impl<const N: usize> Foo<N> {
fn get_n_plus_one() -> usize {
const M: usize = N + 1;
M
}
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0401]: can't use generic parameters from outer item
--> src/lib.rs:5:26
|
3 | impl<const N: usize> Foo<N> {
| - const parameter from outer item
4 | fn get_n_plus_one() -> usize {
5 | const M: usize = N + 1;
| ^ use of generic parameter from outer item
|
= note: a `const` is a separate item from the item that contains it
```
### Desired output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0401]: can't use generic parameters from outer item
--> src/lib.rs:5:26
|
3 | impl<const N: usize> Foo<N> {
| - const parameter from outer item
4 | fn get_n_plus_one() -> usize {
5 | const M: usize = N + 1;
| ^ use of generic parameter from outer item
|
= note: a `const` is a separate item from the item that contains it
help: associated constants and `let` statements can use const parameters; consider using one of those.
```
### Rationale and extra context
_No response_
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.84.0-nightly (759e07f06 2024-10-30)
binary: rustc
commit-hash: 759e07f063fb8e6306ff1bdaeb70af56a878b415
commit-date: 2024-10-30
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,652,954,570 | material-ui | [TextField] Cursor jumps to beginning when clicking outside the default line-height area | ### Steps to reproduce
Link to live example: (required)
Go to the [TextField demo page on MUI’s website](https://mui.com/material-ui/react-text-field/).
Steps:
1. Type some text into any TextField component.
2. Click above the input area, especially outside the text’s line-height area (e.g., towards the top of the field).
https://github.com/user-attachments/assets/b8534f17-d429-49e1-9c82-951db7292223
3. Observe the cursor behavior—it should ideally position itself where you clicked, but instead it jumps to the beginning of the text unless you click within the small native input line-height.
### Current behavior
When using the TextField component, clicking within the input area but outside the default line-height (e.g., in the top half of the input area) causes the cursor to jump to the beginning of the text instead of positioning where the click occurred. This behavior causes the input field to respond in an unexpected and inconsistent way for users.
### Expected behavior
The cursor should appear at the exact location where the user clicks within the input area, aligning with standard input field behavior and enhancing user experience.
### Context
This issue affects the user experience by causing unexpected cursor jumps within the input area. It occurs even with the default TextField styling, making it a general issue in applications that use MUI’s TextField component.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.6.1
Binaries:
Node: 20.16.0 - /opt/homebrew/opt/node@20/bin/node
npm: 10.8.1 - /opt/homebrew/opt/node@20/bin/npm
pnpm: Not Found
Browsers:
Chrome: 130.0.6723.117
Edge: Not Found
Safari: 17.6
npmPackages:
@emotion/react: ^11.11.3 => 11.11.3
@emotion/styled: ^11.11.0 => 11.11.0
@mui/base: 5.0.0-beta.61
@mui/core-downloads-tracker: 6.1.6
@mui/icons-material: ^6.1.6 => 6.1.6
@mui/lab: ^6.0.0-beta.14 => 6.0.0-beta.14
@mui/material: ^6.1.6 => 6.1.6
@mui/private-theming: 6.1.6
@mui/styled-engine: 6.1.6
@mui/system: ^6.1.6 => 6.1.6
@mui/types: 7.2.19
@mui/utils: 5.15.9
@mui/x-charts: ^6.18.7 => 6.18.7
@mui/x-data-grid: ^6.18.4 => 6.18.6
@types/react: 18.3.12
react: ^18.2.0 => 18.2.0
react-dom: ^18.2.0 => 18.2.0
typescript: 4.9.5
```
</details>
**Search keywords**: TextField cursor placement , TextField line-height issue, TextField cursor misalignment, MUI TextField cursor jump | external dependency,component: text field,browser: Safari | low | Minor |
2,652,979,268 | react | Align on HTML attribute/property casing (ie not camelCase) | ## Summary
I am cross-posting an issue I made with Preact and Voby
https://github.com/preactjs/preact/issues/4555
https://github.com/vobyjs/voby/issues/45 | Type: Discussion | low | Critical |
2,652,992,898 | rust | ICE: `Normalization of 'ty::ConstKind::Expr' is unimplemented` | <!--
[31mICE[0m: Rustc ./a.rs '-Zcrate-attr=feature(unsized_const_params) --crate-type=lib -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: Unevaluated `ty::Const` in MIR body', 'error: internal compiler error: Unevaluated `ty::Const` in MIR body'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: --crate-type=lib
#![feature(unsized_const_params)]
#![feature(adt_const_params, const_ptr_read, generic_const_exprs)]
use std::mem::ManuallyDrop;
const fn concat_strs<const A: &'static str, const B: &'static str>() -> &'static str
where
[(); A.len()]:,
[(); B.len()]:,
[(); A.len() + B.len()]:,
{
#[repr(C)]
struct ConcatJoin<const N: usize, const M: usize> {
left: [u8; N],
right: [u8; M],
}
#[repr(C)]
union ConcatJoiner<const N: usize, const M: usize>
where
[(); N + M]:,
{
whole: ManuallyDrop<[u8; N + M]>,
split: ManuallyDrop<ConcatJoin<N, M>>,
}
const fn concat_arr<const M: usize, const N: usize>(a: [u8; M], b: [u8; N]) -> [u8; M + N] {
unsafe {
let joiner = ConcatJoiner {
split: ManuallyDrop::new(ConcatJoin { left: a, right: b }),
};
let join = joiner.whole;
ManuallyDrop::into_inner(join)
}
}
struct Inner<const A: &'static str, const B: &'static str>;
impl<const A: &'static str, const B: &'static str> Inner<A, B>
where
[(); A.len()]:,
[(); B.len()]:,
[(); A.len() + B.len()]:,
{
const ABSTR: &'static str = unsafe {
std::str::from_utf8_unchecked(&concat_arr(
A.as_ptr().cast().read(),
B.as_ptr().cast().read(),
))
};
}
Inner::<A, B>::ABSTR
}
const FOO: &str = "foo";
const BAR: &str = "bar";
const FOOBAR: &str = concat_strs::<FOO, BAR>();
````
<details><summary><strong>original code</strong></summary>
<p>
original:
````rust
#![allow(incomplete_features)]
#![feature(adt_const_params, const_ptr_read, generic_const_exprs)]
use std::mem::ManuallyDrop;
const fn concat_strs<const A: &'static str, const B: &'static str>() -> &'static str
where
[(); A.len()]:,
[(); B.len()]:,
[(); A.len() + B.len()]:,
{
#[repr(C)]
struct ConcatJoin<const N: usize, const M: usize> {
left: [u8; N],
right: [u8; M],
}
#[repr(C)]
union ConcatJoiner<const N: usize, const M: usize>
where
[(); N + M]:,
{
whole: ManuallyDrop<[u8; N + M]>,
split: ManuallyDrop<ConcatJoin<N, M>>,
}
const fn concat_arr<const M: usize, const N: usize>(a: [u8; M], b: [u8; N]) -> [u8; M + N]
where
[(); M + N]:,
{
unsafe {
let joiner = ConcatJoiner {
split: ManuallyDrop::new(ConcatJoin { left: a, right: b }),
};
let join = joiner.whole;
ManuallyDrop::into_inner(join)
}
}
struct Inner<const A: &'static str, const B: &'static str>;
impl<const A: &'static str, const B: &'static str> Inner<A, B>
where
[(); A.len()]:,
[(); B.len()]:,
[(); A.len() + B.len()]:,
{
const ABSTR: &'static str = unsafe {
std::str::from_utf8_unchecked(&concat_arr(
A.as_ptr().cast::<[u8; A.len()]>().read(),
B.as_ptr().cast::<[u8; B.len()]>().read(),
))
};
}
Inner::<A, B>::ABSTR
}
const FOO: &str = "foo";
const BAR: &str = "bar";
const FOOBAR: &str = concat_strs::<FOO, BAR>();
````
</p>
</details>
Version information
````
rustc 1.84.0-nightly (6503543d1 2024-11-12)
binary: rustc
commit-hash: 6503543d11583d1686d4989847b2afbec8d9fdba
commit-date: 2024-11-12
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zcrate-attr=feature(unsized_const_params) --crate-type=lib`
<details><summary><strong>Program output</strong></summary>
<p>
```
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:1:46
|
1 | #![feature(adt_const_params, const_ptr_read, generic_const_exprs)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
= note: `#[warn(incomplete_features)]` on by default
warning: the feature `unsized_const_params` is incomplete and may not be safe to use and/or cause compiler crashes
--> <crate attribute>:1:9
|
1 | feature(unsized_const_params)
| ^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #95174 <https://github.com/rust-lang/rust/issues/95174> for more information
warning: the feature `const_ptr_read` has been stable since 1.71.0 and no longer requires an attribute to enable
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:1:30
|
1 | #![feature(adt_const_params, const_ptr_read, generic_const_exprs)]
| ^^^^^^^^^^^^^^
|
= note: `#[warn(stable_features)]` on by default
warning: type annotations needed
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:45:35
|
45 | A.as_ptr().cast().read(),
| ^^^^
|
= warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2018!
= note: for more information, see issue #46906 <https://github.com/rust-lang/rust/issues/46906>
= note: `#[warn(tyvar_behind_raw_pointer)]` on by default
warning: type annotations needed
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:46:35
|
46 | B.as_ptr().cast().read(),
| ^^^^
|
= warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2018!
= note: for more information, see issue #46906 <https://github.com/rust-lang/rust/issues/46906>
warning: function `concat_strs` is never used
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:5:10
|
5 | const fn concat_strs<const A: &'static str, const B: &'static str>() -> &'static str
| ^^^^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
warning: associated constant `ABSTR` is never used
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:43:15
|
37 | / impl<const A: &'static str, const B: &'static str> Inner<A, B>
38 | | where
39 | | [(); A.len()]:,
40 | | [(); B.len()]:,
41 | | [(); A.len() + B.len()]:,
| |_________________________________- associated constant in this implementation
42 | {
43 | const ABSTR: &'static str = unsafe {
| ^^^^^
warning: constant `FOO` is never used
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:54:7
|
54 | const FOO: &str = "foo";
| ^^^
warning: constant `BAR` is never used
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:55:7
|
55 | const BAR: &str = "bar";
| ^^^
warning: constant `FOOBAR` is never used
--> /tmp/icemaker_global_tempdir.M7QGlwHPHRjj/rustc_testrunner_tmpdir_reporting.Pgzdb9IPCHkV/mvce.rs:56:7
|
56 | const FOOBAR: &str = concat_strs::<FOO, BAR>();
| ^^^^^^
warning: 10 warnings emitted
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: Unevaluated `ty::Const` in MIR body
|
= note: delayed at /rustc/6503543d11583d1686d4989847b2afbec8d9fdba/compiler/rustc_middle/src/mir/consts.rs:328:40 - disabled backtrace
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (6503543d1 2024-11-12) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z crate-attr=feature(unsized_const_params) --crate-type lib -Z dump-mir-dir=dir
query stack during panic:
end of query stack
```
</p>
</details>
@rustbot label +F-adt_const_params +F-const_ptr_read +F-generic_const_exprs +F-unsized_const_params | I-ICE,T-compiler,C-bug,A-const-generics,F-generic_const_exprs,S-bug-has-test | low | Critical |
2,652,993,816 | deno | Deno fmt over big files | eslint.config.mjs package-lock.json
~/papernet main ?:1 userland@localhost 18:31:47··❯ deno fmt
Error formatting: /home/userland/papernet/views/index.html
Syntax error (expected close tag) at file:///home/userland/papernet/views/index.html:32:0
/home/userland/papernet/.github/workflows/go.yml
/home/userland/papernet/views/admin.html
/home/userland/papernet/README.md
/home/userland/papernet/public/css/library.css
/home/userland/papernet/public/css/download.css
/home/userland/papernet/public/css/style.css
/home/userland/papernet/eslint.config.mjs
/home/userland/papernet/views/components/header.html
/home/userland/papernet/public/js/blob.js
/home/userland/papernet/views/components/searchResults.html
/home/userland/papernet/views/components/download.html
/home/userland/papernet/views/components/books.html
/home/userland/papernet/views/layouts/mainLayout.html
/home/userland/papernet/public/js/main.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/CSSRulePlugin.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/ScrollToPlugin.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/TextPlugin.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/EaselPlugin.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/PixiPlugin.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/Observer.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/CustomEase.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/Flip.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/MotionPathPlugin.min.js
/home/userland/papernet/public/cdn_modules/htmx@2.0.1/htmx.min.js
/home/userland/papernet/public/lib/htmx.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/Draggable.min.js
/home/userland/papernet/public/cdn_modules/gsap@3.12.5/ScrollTrigger.min.js
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux aarch64
Version: 2.0.4
Args: ["deno", "fmt"]
thread 'tokio-runtime-worker' panicked at cli/tools/fmt.rs:792:11:
Formatting not stable. Bailed after 5 tries. This indicates a bug in the formatter where it formats the file (/home/userland/papernet/public/cdn_modules/gsap@3.12.5/gsap.min.js) differently each time. As a temporary workaround you can ignore this file.
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
~/papernet ✘ 1 4s userland@localhost 18:32:05······❯ | deno fmt,needs investigation | low | Critical |
2,653,037,844 | terminal | Pane focus is lost after pane swap | ### Windows Terminal version
1.21.2911.0
### Windows build number
10.0.22631.0
### Other Software
_No response_
### Steps to reproduce
Create a split pane, use keybinds to swap between them.
### Expected Behavior
Pane shouldn't lose focus.
### Actual Behavior
Pane loses focus, Terminal doesn't respect any inputs unless the window itself is refocused, here is visual example. I was able to reproduce it with empty $PROFILE (video recording wasn't though):
https://github.com/user-attachments/assets/08cb98dc-72cc-4c23-962e-f909668c184c
Video description:
- I swap panes twice, lose focus each time, click back to regain focus.
- I change my active pane to show it works correctly without losing focus.
- I alt-tab, swap pane again, same behavior.
- I alt-tab immediately after losing focus; now swapping panes suddenly works without losing focus.
- I change my active pane.
- I swap, and now I lose the focus.
- I do showcase that last one with alt-tab multiple times.
So my uneducated guess is there are two different focuses in play there, usual Windows window focus and Terminal's pane focus. So, clicking on terminal or changing panes via keybind affects pane focus state, which sets the state in a way to cause unexpected behavior. But alt-tabbing it after focus is lost sets that state in a different way (or doesn't set it at all), which allows behavior to work as expected until it is re-set back via clicking or pane change. | Issue-Bug,Area-UserInterface,Product-Terminal | low | Minor |
2,653,064,681 | go | proposal: testing/quick: deprecate package | ### Proposal Details
testing/quick has been frozen since #15557, [CL 31910](https://go.dev/cl/31910). It has, accordingly, not received much new development (primarily repo-wide updates).
Compared to other frozen packages, there aren't that many users; pkg.go.dev only lists 335 importers: https://pkg.go.dev/testing/quick
There's also a clear replacement we can point to: `testing`'s [built-in fuzz support](https://pkg.go.dev/testing#hdr-Fuzzing), introduced in Go 1.18.
I propose we mark the package as deprecated, and point users towards fuzzing. | Proposal | low | Major |
2,653,082,429 | pytorch | Let torch.compiler.allow_in_graph work in more situations | ### 🐛 Describe the bug
@Chillee and I were discussing what it would take to get `allow_in_graph` to work in more situations. Here is an accounting of some situations which we thought of:
* NN module input https://github.com/pytorch/pytorch/issues/138786
* Local functions that close over local variables
* Methods that take in some object, potentially temporary, as a self argument
In general, there's a hierarchy, where some things are easier to deal with and some are harder to deal with:
* Accesses to global variables are "easy" to deal with, because we can just feed the real global variable directly into the function when we do tracing. The challenge here is entirely around ensuring tensor accesses get appropriately mapped to FX inputs (because parameters get lifted into arguments into the FX graph)
* Things are a bit trickier if there is a mutation of the global variable. Then if we give the direct global variables they will have the un-mutated version of the variable. We can either just YOLO this situation or try to do something more clever (see below).
* If you have an input that corresponds to some temporary object, life is harder, because we have to somehow reconstruct the object to pass into the function. This is manageable when the temporary object is disposable (you can just create it on the fly to pass in, as we do for pytree). If the object is not reconstructible, you need to somehow construct a proxy object which emulates the original object as much as possible
* If you have an input which is a *closed over* variable for a temporary closure, you need to somehow create a closure on the fly with the inputs you wanted to stub in. This has to be done recursively for any closures that themselves may be closed over (you can stop this process when you get to globals)
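To make the closed-over-variable case concrete, here is a pure-Python sketch (stdlib only, not actual Dynamo code) of rebuilding a closure on the fly with a stubbed-in cell:

```python
import types

def make_closure():
    x = 10
    def inner(y):
        return x + y
    return inner

orig = make_closure()

# Recreate the closure "on the fly", stubbing in a new value for the
# closed-over variable `x` — the kind of surgery that would be needed
# for closed-over inputs of temporary closures:
rebuilt = types.FunctionType(
    orig.__code__,
    orig.__globals__,
    orig.__name__,
    orig.__defaults__,
    (types.CellType(42),),  # one cell per entry in co_freevars
)

print(orig(1), rebuilt(1))  # 11 43
```

As the issue notes, this has to be applied recursively when the rebuilt closure itself closes over other closures, bottoming out at globals.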
### Versions
master
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,653,138,502 | transformers | Gemma2: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) | ### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-1052-oracle-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.0+cu124 (True)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model_id = 'google/gemma-2-2b'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = "Any Context"
input_ids = tokenizer.encode(messages, return_tensors="pt").to("cuda")
gen_tokens = model(input_ids)
```
### Expected behavior
```
Loading checkpoint shards: 100%|██████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.09s/it]
Traceback (most recent call last):
File "/host/ckpts/transformers/script.py", line 34, in <module>
gen_tokens = model(input_ids)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/host/ckpts/transformers/src/transformers/models/gemma2/modeling_gemma2.py", line 1052, in forward
outputs = self.model(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/host/ckpts/transformers/src/transformers/models/gemma2/modeling_gemma2.py", line 785, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 190, in forward
return F.embedding(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2551, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
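A common user-side workaround for `device_map="auto"` splits (an assumption about intent here, not an official fix) is to move the input IDs to whatever device actually holds the embedding weights, instead of hard-coding `"cuda"`. A minimal CPU-only sketch of the pattern, using a toy module in place of the real model:

```python
import torch

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed_tokens = torch.nn.Embedding(10, 4)

    def forward(self, ids):
        # Move indices to wherever the embedding weights actually live,
        # rather than assuming a fixed device string.
        ids = ids.to(self.embed_tokens.weight.device)
        return self.embed_tokens(ids)

model = Toy()
out = model(torch.tensor([1, 2, 3]))
print(out.shape)  # torch.Size([3, 4])
```

With the model from the report, the equivalent would be something like `input_ids.to(model.get_input_embeddings().weight.device)` rather than `.to("cuda")`, since `device_map="auto"` may place the embedding on a device other than `cuda:0`.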
If only one GPU is set visible, there is no error. | bug,Big Model Inference,Accelerate | low | Critical |
2,653,139,079 | godot | The Image stored in the emission_point_texture of a ParticleProcessMaterial changes id on save when Creating Emission Points From Node | ### Tested versions
- Reproducible: v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6109) - Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz (4 Threads)
### Issue description
When using a **GPUParticle3D** node and having its emission points created from another node, the _emission_point_texture_ property inside its **ParticleProcessMaterial** will have a new **ImageTexture** assigned. From that point on, every time the scene is opened and saved, the _image_ property inside that **ImageTexture** will be regenerated with a new id. While this does not lead to any behavior issues, it means the scene will now be dirty and will be marked as an unstaged change on version control systems such as git

### Steps to reproduce
**With MRP**
- On a terminal, run `git status` and verify that there are no unstaged changes
- Open the Minimal Reproduction Project
- Open the scene.tscn
- Hit Ctrl+S
- On a terminal, run `git status`
- Verify that scene.tscn has unstaged changes
**Without MRP**
- Create a new scene
- Create a GPUParticle3D node with an empty **ParticleProcessMaterial** and any mesh for its _draw_pass_1_
- Create a MeshInstance3D node with any mesh
- Select the GPUParticle3D and, from the menu over the scene preview, select GPUParticle3D > Create Emission Points From Node
- Pick the MeshInstance3D created above
- Save the scene to any new file
- Backup the saved scene or stage the file under a version control system such as git
- Close the scene in Godot
- Reopen the scene
- Hit CTRL+S
- Compare the newly saved scene with the backup from earlier
- Verify that the saved scene has unstaged changes
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/17721657/MRP.zip)
| bug,topic:editor | low | Minor |
2,653,149,847 | vscode | Program files /app/out/ folder being deleted nightly. |
Type: <b>Bug</b>
The C:\Program Files\Microsoft VS Code\resources\app\out directory has been deleted nightly for about 2 weeks now. After this happens, open VS Code windows are partially functional, but new windows will not open, triggering this error instead:
A JavaScript error occurred in the main process
Uncaught Exception:
Error [ERR_MODULE_NOT_FOUND]: Cannot find module 'C:\Program Files\Microsoft VS
Code\resources\app\out\main.js' imported from C:\WINDOWS\system32\
at finalizeResolution (node:internal/modules/esm/resolve:265:11)
at moduleResolve (node:internal/modules/esm/resolve:940:10)
continued...
To fix it, I end all VS Code processes in Windows Task Manager, then re-run the installer. I've noticed when closing the processes that there are a few VS Code install/uninstall processes running when stuck in this state, so I'm not sure if this is due to some self-update going wrong.
Not sure if this is being caused by a VS code bug or my company's antivirus software. Any help would be appreciated.
VS Code version: Code 1.95.0 (912bb683695358a54ae0c670461738984cbb5b95, 2024-10-28T20:16:24.561Z)
OS version: Windows_NT x64 10.0.26100
Modes:
Remote OS version: Linux x64 6.8.0-48-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz (8 x 2995)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.74GB (1.81GB free)|
|Process Argv|--crash-reporter-id 81949ced-f38a-4b7b-a6fd-0494dce958d4|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: netbox-dev.jsinoc.com|
|OS|Linux x64 6.8.0-48-generic|
|CPUs|Intel(R) Xeon(R) Gold 6208U CPU @ 2.90GHz (4 x 0)|
|Memory (System)|7.67GB (2.34GB free)|
|VM|0%|
</details><details><summary>Extensions (14)</summary>
Extension|Author (truncated)|Version
---|---|---
codespaces|Git|1.17.3
cisco|jam|1.9.1
remote-containers|ms-|0.388.0
remote-ssh|ms-|0.115.0
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
remote-explorer|ms-|0.4.3
iosxr|phi|1.0.4
pdf|tom|1.2.2
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.12.0
python|ms-|2024.18.1
vscode-pylance|ms-|2024.11.1
save-as-root|yy0|1.8.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
j44ff735:31179530
```
</details>
<!-- generated by issue reporter --> | bug,install-update,mitigated | low | Critical |
2,653,152,202 | tauri | [bug] Logs in the devtools are blocked when application is maximized or set to fullscreen. | ### Describe the bug
I am building a `fullscreen` and `transparent` application with tauri2 and vue3.
While I was debugging, logs in the devtools were blocked like this:
https://github.com/user-attachments/assets/2ea7678a-4bea-4221-b753-76da9d90213d
Sometimes, warnings show up, which may be relevant.
```
[2024-11-12T19:04:36Z WARN tao::platform_impl::platform::event_loop::runner] NewEvents emitted without explicit RedrawEventsCleared
[2024-11-12T19:04:36Z WARN tao::platform_impl::platform::event_loop::runner] RedrawEventsCleared emitted without explicit MainEventsCleared
```
If I move the devtools to a secondary screen, everything is OK.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 130.0.2849.80
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.15.0
- pnpm: 9.12.3
- npm: 10.8.1
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,653,157,605 | deno | `deno install` with incorrect peer dependencies installs incompatible React versions | Version: Deno 2.0.6
So... I am migrating a legacy React Vite project from Node+Yarn to Deno.
When I run `deno install` followed by `deno run -A npm:vite serve`, the page crashes in the browser with a runtime error from React:
```
Error: Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons:
1. You might have mismatching versions of React and the renderer (such as React DOM)
2. You might be breaking the Rules of Hooks
3. You might have more than one copy of React in the same app
See https://reactjs.org/link/invalid-hook-call for tips about how to debug and fix this problem.
```
I have narrowed this problem down to **multiple incompatible React versions** being installed by `deno install`. It installs 3 different versions of React:

The problem is actually that I have incorrect peer dependencies in my project. When I run `yarn` normally, it's full of peer dependency warnings. I understand this is a problem with my project, but it's not easy to fix. It's caused by some of my libraries having a peer-dependency on React 16 or 17, while my project has been upgraded to React 18 (which yarn allowed me to do).

If I use `yarn` to set up node_modules, and then use `deno run -A npm:vite serve`, it all works perfectly. So the problem is related to the behavior of `deno install` and not the Deno runtime itself.
As for the solution, I think Deno should at least warn about peer dependencies. I am not sure if there's a standard expected behavior among package managers about what to do in this scenario, or if there are any workarounds besides replacing these unmaintained packages with forks. (EDIT: I tried specifying [overrides](https://docs.npmjs.com/cli/v10/configuring-npm/package-json?v=true#overrides) to react 18 in package.json and it had no effect.)
For reference, this is the project: https://gitlab.com/soapbox-pub/soapbox
(I feel bad that Deno is burdened by this legacy Node crap, but I thought you would rather know than not know. :smiling_face_with_tear:) | bug,install | low | Critical |
2,653,160,896 | PowerToys | Fancy Zones | ### Microsoft PowerToys version
v0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones Editor
### Steps to reproduce
Edited shortcuts using the Win, Shift, Alt, and Ctrl keys. No matter the combination, the Save key remained grayed out. Was unable to save changes.
### ✔️ Expected Behavior
The Save key should not be grayed out after editing the shortcut.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,653,195,544 | rust | rust_2024_incompatible_pat bad suggestion with proc-macro when brace comes from input | When a proc-macro generates code, and the opening brace of a pattern from that code comes from the input, then the compiler thinks that the pattern is a 2024 pattern, and thus enforces the new pattern rules using the local edition, not the edition from the proc-macro.
There are two consequences:
1. This is incompatible with supporting macros that are on different editions.
2. It generates an invalid suggestion.
Example of the bad suggestion might be:
```rust
#[derive(my_macro)]
struct S {
f1: i32
}
```
will give a suggestion to modify the struct to add an `&` which is invalid syntax like this:
```rust
struct &S {
f1: i32
}
```
This was seen with swc_macros_common [here](https://github.com/swc-project/swc/blob/ce20b4db8863a3c35beb113581cdf47c732c403c/crates/swc_macros_common/src/binder.rs#L205) where it uses the brace token from the input.
One option is to change this in the macro to generate a new token with the correct span information.
I'm also wondering if there are options for changing the ways tokens are processed from proc-macros to avoid this altogether.
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (81eef2d36 2024-11-11)
binary: rustc
commit-hash: 81eef2d362a6f03db6f8928f82d94298d31eb81b
commit-date: 2024-11-11
host: aarch64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
| T-compiler,C-bug,D-invalid-suggestion,D-edition,A-proc-macros,A-edition-2024,L-rust_2024_incompatible_pat,I-edition-triaged | low | Minor |
2,653,205,285 | deno | Allow `LD_*`, `DYLD_*` env vars to be set when launching a subprocess without full `--allow-run` if they equal the value on startup | When not using full `--allow-run` permissions, we should consider capturing `LD_*`, `DYLD_*` env vars on startup and only require full `--allow-run` permissions when these values don't equal what they were on startup.
Ref https://github.com/denoland/deno/issues/26839 | suggestion | low | Minor |
2,653,205,411 | godot | Godot can open the same file multiple times if capitalization changes resulting in confusing behavior | ### Tested versions
- Reproducible in: 4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6603) - AMD Ryzen 5 5600X 6-Core Processor (12 Threads)
### Issue description
Godot can open the same file multiple times if its capitalization changes. For example "res://Folder/node.gd" and "res://folder/node.gd", or "res://folder/node.tscn" and "res://folder/Node.tscn".
Having both of them open at the same time causes confusing behavior if the user doesn't notice it. Problems arise especially if one of them is edited when both of them are open.
From a quick test, changing the capitalization of the .tscn file is worse than changing the capitalization of the folder.
Changes made to one file are not always applied to the other file.
Godot will throw multiple of this warning:
`drivers/windows/file_access_windows.cpp:181 - Case mismatch opening requested file 'res://Folder/node.tscn', stored as 'res://Folder/Node.tscn' in the filesystem. This file will not open when exported to other case-sensitive platforms.`
Godot will also complain that files on disk have been modified when the two opened files are different and one of them gets saved.
It is not obvious from these that the solution is to close your currently open files and reopen them.
Image: Same scene is opened three times, and same script is opened twice at the same time.

Changing the name inside the editor doesn't cause the problem. But if you use git or similar, other people in the project may run into problems when the names get changed.
### Steps to reproduce
1. Put a scene with script (.gd file) into a folder.
2. Open the script in Godot.
3. Close Godot (optional).
4. Change capitalization of the folder. For example "res://Folder/node.gd" into "res://folder/node.gd" or "res://folder/node.tscn" into "res://folder/Node.tscn". Changing the scene name gives different results.
5. Open Godot. Godot remembers that you had the file open and opens the file with the old capitalization.
6. Open the file normally. Now you have the file open with new capitalization.
7. You now have the same file open twice.
To cause real problems make edits on one of the files while both of them are open and run the scene.
### Minimal reproduction project (MRP)
Project with same file opened multiple times: [test_project.zip](https://github.com/user-attachments/files/17722237/test_project.zip)
| platform:windows,topic:editor,needs testing | low | Minor |
2,653,226,675 | material-ui | [TextField] Safari scrolls to top on password field focus with "filled" and "standard" variants | ### Steps to reproduce
Link to live example: (required)
https://github.com/user-attachments/assets/90dcfa48-2617-45bd-80f2-4c0dca1a46f3
Steps to Reproduce:
1. Go to the [TextField demo page on MUI’s website](https://mui.com/material-ui/react-text-field/).
2. Scroll to any TextField with type="password" and set to the “filled” or “standard” variant.
3. Click on the password field in Safari (on iOS 17 or macOS).
4. Observe that the page briefly scrolls to the top of the viewport as Safari displays the password suggestion prompt.
### Current behavior
In Safari, when focusing on a TextField with type="password" in either the “filled” or “standard” variant, the page flashes to the top as Safari’s password suggestion prompt appears. This behavior does not occur with the “outlined” variant.
### Expected behavior
The page should remain stable and not scroll when the password field is focused, regardless of the variant in use.
### Context
This issue affects the usability of forms with password fields in Safari by causing unexpected scrolling behavior. The issue appears to be related to Safari’s password suggestion feature, affecting both mobile and desktop Safari users.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.6.1 / iOS 17
Binaries:
Node: 20.16.0
npm: 10.8.1
Browsers:
Safari: 17.6 (macOS), iOS Safari (iOS 17)
npmPackages:
@mui/material: ^6.1.6
react: ^18.2.0
```
</details>
**Search keywords**: TextField password scroll Safari, TextField password field Safari bug, Safari scroll to top password field, MUI TextField password variant issue | bug 🐛,component: text field,browser: Safari | low | Critical |
2,653,230,620 | flutter | With EnsureSemantics on, Buttons Maintain Focus State on Press | ### Steps to reproduce
On flutter beta version 3.27.0-0.1.pre
1. Turn on semantics auto rendering for application
2. Have an app with a button that doesn't result in navigation
3. Launch on web
4. Press button
### Expected results
The focus state and visual should only show during keyboard navigation (i.e., Tab)
### Actual results
If you press a button/interactive component (that doesn't navigate), after release the button will both retain focus and show the focus visual. If auto-rendering of semantics is disabled, this issue doesn't happen.
### Code sample
With favorites sample
<details open><summary>Code sample</summary>
```dart
Widget build(BuildContext context) {
SemanticsBinding.instance.ensureSemantics();
return ChangeNotifierProvider<Favorites>(
create: (context) => Favorites(),
child: MaterialApp.router(
title: 'Testing Sample',
theme: ThemeData(
colorSchemeSeed: Colors.blue,
visualDensity: VisualDensity.adaptivePlatformDensity,
),
routerConfig: router(),
),
);
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/7c0cecc3-9f8a-45c7-8dde-36464811e8f5
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Flutter 3.27.0-0.1.pre • channel beta •
https://emma-roudabush-disney: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@github.com
/flutter/flutter.git
Framework • revision 2e2c358c9b (3 weeks ago) • 2024-10-22 11:02:13 -0400
Engine • revision af0f0d559c
Tools • Dart 3.6.0 (build 3.6.0-334.3.beta) • DevTools 2.40.1
emmaroudabush@V17CNW73XQ sandbox % flutter doctor -v
[!] Flutter (Channel beta, 3.27.0-0.1.pre, on macOS 14.5 23F79 darwin-arm64,
locale en-US)
• Flutter version 3.27.0-0.1.pre on channel beta at
/Users/emmaroudabush/Source/flutter
! Upstream repository
https://emma-roudabush-disney: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@gith
ub.com/flutter/flutter.git is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to
https://emma-roudabush-disney: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@gith
ub.com/flutter/flutter.git to dismiss this error.
• Framework revision 2e2c358c9b (3 weeks ago), 2024-10-22 11:02:13 -0400
• Engine revision af0f0d559c
• Dart version 3.6.0 (build 3.6.0-334.3.beta)
• DevTools version 2.40.1
• If those were intentional, you can disregard the above warnings; however it
is recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version
32.1.0-rc1)
• Android SDK at /Users/emmaroudabush/Library/Android/sdk
• Platform android-34, build-tools 32.1.0-rc1
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Users/emmaroudabush/Downloads/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.95.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
```
</details>
| c: regression,framework,platform-web,has reproducible steps,P2,customer: castaway,team-web,triaged-web,found in release: 3.27 | low | Critical |
2,653,290,096 | TypeScript | `types-registry` keeps installing every time VS Code starts even if explicitly disabled | ### 🔎 Search Terms
VS Code types-registry autoinstall
types-registry jsconfig
types-registry devcontainers
### 🕗 Version & Regression Information
This changed after I updated the Dev Container layout.
The new Layout is similar to:
```mermaid
classDiagram
class Docker {
<<Container>>
Main Container
Attached to VS Code via Dev Containers Extension
home(/home/dev)
bin(/usr/local/bin)
code(/var/www/html)
}
class Node {
<<Container>>
NodeJS Container
Vanilla Docker Hub image
home(/home/node)
bin(/usr/local/bin)
code(/var/www/html)
}
class PHP {
<<Container>>
PHP Container
Vanilla Docker Hub image
home(/home/www-data)
bin(/usr/local/bin)
code(/var/www/html)
}
class Home{
<<Volume>>
.vscode/
.npm/
.yarn/
.cache/
.zshrc
...
}
class Bin{
<<Volume>>
php
node
npm
yarn
}
class Code{
<<Volume>>
index.php
index.js
}
Docker <-- Home
Docker <-- Bin
Docker <-- Code
Node <-- Home
Node <-- Bin
Node <-- Code
PHP <-- Home
PHP <-- Bin
PHP <-- Code
```
The Dev Container uses Docker Compose.
The helper scripts in the bin volume use `docker` (via an exposed port on the host) to call the respective binary in each container (within the same Compose project). The permissions of the 3 bind mounts are the same, and the users are the same (ID, group, group IDs, etc.).
I know the Dev Container layout is not a problem, as no extension shows any concern; even the PHP extension correctly recognizes the PHP version.
I have:
- **VS Code**
Version: 1.95.2 (Universal)
Commit: e8653663e8840adaf45af01eab5c627a5af81807
Date: 2024-11-07T11:07:22.054Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin x64 22.6.0
- **Docker**
Version: 27.3.1
Build: ce12230
- **Node**
ImageTag: 20-alpine
ImageSha: sha256:45a59611ca84e7fa7e39413f7a657afd43e2b71b8d6cb3cb722d97ffc842decb
### ⏯ Playground Link
_No response_
### 💻 Code
I have this `jsconfig.json` file in the `/var/www/html` folder which is the root of the project.
```json
{
"compilerOptions": {
"typeRoots": [
"**/*.d.ts"
],
"baseUrl": ".",
"paths": {
"@": [
"resources/js"
]
}
},
"allowJS": true,
"typeAcquisition": {
"enable": false
}
}
```
Also, I configured the `npm.packageManager` in both, the User and Workspace, to use `yarn` (as it is my preferred method to install packages).
### 🙁 Actual behavior
VS Code installs the `types-registry` package without taking into account the settings explicitly set in the `jsconfig.json` file or the `npm.packageManager` setting, and installs the package into the current project.
### 🙂 Expected behavior
I would expect VS Code to not install the `types-registry` package if it is forbidden, let alone install it into the project folder as a package for the project.
Or if it does download it, it would obey the `npm.packageManager` configuration and install it using `yarn`.
### Additional information about the issue
_No response_ | Needs Investigation | low | Minor |
2,653,359,916 | langchain | HuggingfaceDatasetLoader escapes strings instead of returning them raw | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code converts every value with `json.dumps`:
```
class HuggingFaceDatasetLoader(BaseLoader):
...
def parse_obj(self, page_content: Union[str, object]) -> str:
if isinstance(page_content, object):
return json.dumps(page_content)
return page_content
```
This leads to double escape characters in the strings such as: "\n" converted to "\\\\n".
A short fix (not the best one, but working) is to implement a check for strings first:
```
class HuggingFaceDatasetLoader(BaseLoader):
...
def parse_obj(self, page_content: Union[str, object]) -> str:
if isinstance(page_content, str):
return page_content
return json.dumps(page_content)
```
### Error Message and Stack Trace (if applicable)
No error message
### Description
I am trying to load HuggingFace datasets with markdown-formatted strings. This leads to double-escaped characters, such as "\n" becoming "\\\\n", due to json.dumps() in the original code. This may be caused by every object of type "str" also being an instance of "object".
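The always-true branch and the resulting escaping are easy to demonstrate with the stdlib alone:

```python
import json

s = "line one\nline two"

# Every str is also an object, so the isinstance(page_content, object)
# check in the loader always fires, even for plain strings:
assert isinstance(s, object)

# json.dumps then escapes the newline into a literal backslash + 'n':
dumped = json.dumps(s)
print(dumped)  # prints: "line one\nline two"  (backslash-n as two characters, not a newline)
```

This is why the proposed fix checks `isinstance(page_content, str)` first and returns the string untouched.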
### System Info
System Information
------------------
> OS: Linux
> OS Version: #115~20.04.1-Ubuntu SMP Mon Apr 15 17:33:04 UTC 2024
> Python Version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.17
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.142
> langchain_huggingface: 0.1.2
> langchain_ollama: 0.2.0
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.0
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.2
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> orjson: 3.10.11
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tokenizers: 0.20.3
> transformers: 4.46.2
> typing-extensions: 4.11.0 | 🤖:bug | low | Critical |
2,653,390,540 | rust | Tracking Issue for `NonZero<u*>::div_ceil` | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(unsigned_nonzero_div_ceil)]`
This is a tracking issue for implementing `div_ceil` for `NonZero<T>` where `T` is an unsigned integer.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
```rust
// core::num
impl NonZero<u8> { // similarly for u16, u32, u64, u128 & usize
pub const fn div_ceil(self, other: Self) -> Self;
}
```
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] ACP: https://github.com/rust-lang/libs-team/issues/471
- [x] Implementation: #132665
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,653,400,280 | PowerToys | have the ability to do a character count with win + q | ### Description of the new feature / enhancement
Have the ability to do a character count with Win + Q, perhaps via a command such as `--` or something similar. It's an important feature that I'd use.
### Scenario when this would be used?
If someone's writing something in a platform where a character count isn't shown, I know it's a low chance, but it's a very useful feature to have just in case.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,653,429,108 | pytorch | Inductor inappropriately tries to fuse scalar views of a CPU tensor into GPU kernels. | ### 🐛 Describe the bug
```
import torch

@torch.compile
def f(a, b):
    return a + b[0]

f(torch.randn(20, device='cuda'), torch.randn(4, device='cpu'))
```
```
self.launch(*args, **kwargs)
ValueError: Pointer argument (at 1) cannot be accessed from Triton (cpu tensor?)
```
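Until this is fixed, one hedged workaround sketch (an assumption about user intent, not part of the report) is to extract the CPU scalar before it reaches the compiled region, so the kernel receives a plain Python number rather than a view of a CPU tensor:

```python
import torch

def f(a, b0):
    return a + b0

a = torch.randn(20)   # stands in for the CUDA tensor in the repro
b = torch.randn(4)    # the CPU tensor
# b[0].item() yields a Python float, so no CPU-resident storage is
# captured by the (hypothetically torch.compile-wrapped) kernel.
out = f(a, b[0].item())
assert out.shape == (20,)
```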
cc: @desert
### Versions
N/A
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,bug,oncall: pt2,module: inductor | low | Critical |
2,653,429,762 | pytorch | Making nn.Module generic | ### 🐛 Describe the bug
I was chatting with the type checking team at Meta and I think we have line of sight to making nn.Module generic.
To recap the constraints:
* We cannot make nn.Module inherit from Generic because this changes the metaclass and is breaking
* We prefer to NOT have a pyi file for module.py
* We need to support downstream use of nn.Module in type signatures without future annotations
Here is the recipe:
* We use an `if TYPE_CHECKING` block to swap between the Generic and non-Generic version of nn.Module. The Generic version can inherit from Generic, but it will have no runtime effect because it is TYPE_CHECKING only. It is somewhat irritating that we have to reproduce the types for all methods on nn.Module, but this seems unavoidable. We also have to pay an indentation tax for this
* We add a `__class_getitem__` on Module which does nothing so that direct use of `nn.Module[Blah]` works
* We use ParamSpec https://peps.python.org/pep-0612/ to allow for accurately typing arguments/return type
I will happily shepherd a PR if someone wants to make an attempt at this.
### Versions
main
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet @xuzhao9 @gramster | module: nn,module: typing,triaged | low | Critical |
2,653,437,404 | vscode | Issue Reporter should warn against including private or sensitive info | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Since issues created by the Issue Reporter can end up in public repos, the issue reporter should warn users not to include private or sensitive data in the title or details of the issue.
While there is a notice to "[review the guidance we provide](https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions)", it doesn't mention anything about private or sensitive information.

| bug,issue-reporter | low | Critical |
2,653,460,952 | TypeScript | NuGet packages not available for pre-release versions of 5.7 | ### Acknowledgement
- [x] I acknowledge that issues using this template may be closed without further explanation at the maintainer's discretion.
### Comment
I was looking to test the updated Microsoft.TypeScript.MSBuild NuGet package v5.7 (any prerelease) but couldn't find them on nuget.org. Are pre-release versions no longer being published? Can we still expect the stable release to be published as a NuGet package? | Needs Investigation | low | Minor |
2,653,472,150 | PowerToys | Mouse Without Borders is not working on my PC; the "New Key" and "Connect" buttons do nothing. | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
I installed PowerToys through the MS Store on both my work laptop and my PC.
Both have version 0.86.0.
On the laptop, I can generate keys and click the connect button; however, on the PC, the "New Key" and "Connect" buttons do nothing.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Major |
2,653,483,988 | pytorch | Ban all operations on sympy expressions in Inductor | ### 🚀 The feature, motivation and pitch
It's very easy to write code that works with static shapes but not dynamic ones. It's also easy to write code that mostly works with dynamic shapes (e.g. testing `a > 0` directly on a symbolic expression) but will fail sometimes.
I think we should just ban all operations directly on sympy expressions in Inductor and force usage to be routed through some smaller set of functions.
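A small illustration of the failure mode, using plain sympy (the symbol stands in for one of Inductor's dynamic shape variables):

```python
import sympy

# "s" stands in for a dynamic shape symbol.
s = sympy.Symbol("s", integer=True)
cond = s - 1 > 0  # a symbolic Relational, not a bool

# Any code path that implicitly coerces it to bool blows up at trace
# time, because the truth value is undecidable for a free symbol:
try:
    decided = bool(cond)
except TypeError:
    decided = None
assert decided is None
```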
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: dynamic shapes,module: inductor | low | Minor |
2,653,596,377 | godot | "Cannot access a disposed object" when using an exported node and scene inheritance | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
- Reproducible in v4.4.dev4.mono.official [36e6207bb]
### System information
M2 macOS - Sonoma 14.6.1
### Issue description
In the attached project, try running `Derived.tscn`. It will output this:
```
E 0:00:00:0674 GodotObject.base.cs:78 @ nint Godot.GodotObject.GetPtr(Godot.GodotObject): System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'foo.Base'.
<C# Error> System.ObjectDisposedException
<C# Source> /root/godot/modules/mono/glue/GodotSharp/GodotSharp/Core/GodotObject.base.cs:78 @ nint Godot.GodotObject.GetPtr(Godot.GodotObject)
<Stack Trace> GodotObject.base.cs:78 @ nint Godot.GodotObject.GetPtr(Godot.GodotObject)
Node.cs:752 @ Godot.StringName Godot.Node.GetName()
Node.cs:374 @ Godot.StringName Godot.Node.get_Name()
NodeReferencer.cs:13 @ void foo.NodeReferencer._Ready()
Node.cs:2401 @ bool Godot.Node.InvokeGodotClassMethod(Godot.NativeInterop.godot_string_name&, Godot.NativeInterop.NativeVariantPtrArgs, Godot.NativeInterop.godot_variant&)
foo.NodeReferencer_ScriptMethods.generated.cs:40 @ bool foo.NodeReferencer.InvokeGodotClassMethod(Godot.NativeInterop.godot_string_name&, Godot.NativeInterop.NativeVariantPtrArgs, Godot.NativeInterop.godot_variant&)
CSharpInstanceBridge.cs:24 @ Godot.NativeInterop.godot_bool Godot.Bridge.CSharpInstanceBridge.Call(nint, Godot.NativeInterop.godot_string_name*, Godot.NativeInterop.godot_variant**, int, Godot.NativeInterop.godot_variant_call_error*, Godot.NativeInterop.godot_variant*)
```
For whatever reason, `Base.Dispose` is indeed called. I don't know why that's happening, but as a result, `NodeReferencer` is unable to access the now-disposed node being referenced.
### Steps to reproduce
It seems that you have to do this:
- Make a base scene
- Add a node that references another node via an `Export` attribute
- Derive a scene from that base scene
- Try running the derived scene
This does not seem to reproduce when using GDScript, only C#. In GDScript, I don't see `NOTIFICATION_PREDELETE` being sent.
### Minimal reproduction project (MRP)
[DisposedObjectRepro.zip](https://github.com/user-attachments/files/17723842/DisposedObjectRepro.zip)
| bug,topic:dotnet | low | Critical |
2,653,597,540 | go | proposal: cmd/cover: support branch coverage | ### Proposal Details
Currently, the `go test` tool provides line-based coverage reporting. However, branch coverage support is not available, which limits the granularity of test coverage reporting. Adding branch coverage would be beneficial, as it would allow developers to better understand which branches (conditional paths) in their code are being executed during tests, rather than just which lines.
Another point to consider (though not decisive on its own) is that many other languages and testing frameworks (e.g., Python's [coverage.py](https://coverage.readthedocs.io/en/latest/branch.html), Java's [JaCoCo](https://www.eclemma.org/jacoco/trunk/doc/counters.html)) offer branch coverage.
Third-party packages like https://github.com/rillig/gobco implement tools that add instrumentation beyond what cmd/cover generates, but having this supported by the official cover tool would be a nice addition to the Go testing tooling.
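To make the distinction concrete, here is a minimal example (written in Python, where coverage.py can report the gap; the situation in Go code is the same): a single test executes every line of the function, so line coverage reads 100%, yet the implicit false branch of the `if` is never exercised.

```python
def sign(x):
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# This one call runs every line of sign() -> 100% line coverage,
# but the path where the `if` condition is false is never taken.
# Only branch coverage (e.g. `coverage run --branch`) would flag that.
assert sign(-1) == "negative"
```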
| Proposal | low | Major |
2,653,607,032 | TypeScript | Add `NonEmptyArray` to `lib` (but **not** `length > 0` narrowing) | ### ⚙ Compilation target
ES2023
### ⚙ Library
ES2023
### Missing / Incorrect Definition
```ts
type NonEmptyArray<T> = [T, ...T[]]
type ReadonlyNonEmptyArray<T> = readonly [T, ...readonly T[]]
```
### Sample Code
```TypeScript
const is_non_empty = <T>(a: ReadonlyArray<T>): a is ReadonlyNonEmptyArray<T> =>
  a.length > 0

function sum(a: ReadonlyNonEmptyArray<number>): number
function sum(a: ReadonlyNonEmptyArray<bigint>): bigint
function sum(
  a: ReadonlyNonEmptyArray<number> | ReadonlyNonEmptyArray<bigint>
) {
  //@ts-expect-error
  return a.reduce((acc, x) => acc + x)
}
```
### Documentation Link
There's no docs that I'm aware of. However, there are [multiple issues using this boilerplate](https://github.com/microsoft/TypeScript/issues?q=Non-Empty%20Array).
Here's a WIP [example implementations of `sum`](https://github.com/Rudxain/ideas/blob/c86e3ed2ee72b9166674516ebbcf179a99d0b2ab/software/sum_impls.ts). The sample code is a simplified version of that. More info [here](https://github.com/microsoft/TypeScript/issues/449#issuecomment-2472031504) | Suggestion,Awaiting More Feedback | low | Critical |
2,653,628,194 | pytorch | [TorchScript] Failure if you script a wrapper module and then an interface-implementing submodule. | ### 🐛 Describe the bug
Repro is below:
* We have a wrapper module that calls an implementation submodule, and the implementation submodule is marked as an interface
* First we torchscript the wrapper module
* Then we torchscript the submodule.
Since the first torchscript-ing of the wrapper module saw the submodule as an interface type, it ignores the methods that are not part of the interface. Then we cache the type. Finally, when we torchscript the submodule on its own, we see the other methods and fail because the jit_type associated with this class doesn't have those methods.
```python
import torch


@torch.jit.interface
class MyInterface(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pass


class MyImplementation(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * x

    @torch.jit.export
    def add_two(self, x: torch.Tensor) -> torch.Tensor:
        return x + 2


class MyWrapper(torch.nn.Module):
    impl: MyInterface

    def __init__(self):
        super().__init__()
        self.impl = MyImplementation()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.impl(x)


mod = MyWrapper()
mod_s = torch.jit.script(mod)
mod.impl = torch.jit.script(mod.impl)
```
error
```
File "/data/users/dberard/scripts/interface_extra.py", line 31, in <module>
mod.impl = torch.jit.script(mod.impl)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch/torch/jit/_script.py", line 1429, in script
ret = _script_impl(
^^^^^^^^^^^^^
File "/data/users/dberard/pytorch/torch/jit/_script.py", line 1147, in _script_impl
return torch.jit._recursive.create_script_module(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch/torch/jit/_recursive.py", line 557, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch/torch/jit/_recursive.py", line 679, in create_script_module_impl
script_method = cpp_module._get_method(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Method 'add_two' is not defined.
```
### Versions
main branch, CPU build
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,653,686,366 | rust | Tracking Issue for `PeekableIterator` | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(peekable_iterator)]`
This is a tracking issue for the `PeekableIterator` trait, which extends `Iterator` with `peek` and related methods that inspect the next element without consuming it.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
```rust
// core::iter
pub trait PeekableIterator: Iterator {
    type PeekedItem<'a>: Borrow<Self::Item> + 'a
    where
        Self: 'a;

    fn peek(&self) -> Option<Self::PeekedItem<'_>>;
    fn next_if(&mut self, func: impl FnOnce(&Self::Item) -> bool) -> Option<Self::Item>;
    fn next_if_eq<T>(&mut self, expected: &T) -> Option<Self::Item>
    where
        Self::Item: PartialEq<T>,
        T: ?Sized;
}
```
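For intuition about the `peek`/`next_if` contract above (consume an item only when the caller accepts the peeked value), here is a small Python analogue; the class and method names are just illustrative:

```python
class Peekable:
    """One-item lookahead over any iterable, mirroring peek/next_if."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._buf = []  # holds at most one buffered item

    def peek(self):
        if not self._buf:
            try:
                self._buf.append(next(self._it))
            except StopIteration:
                return None
        return self._buf[0]

    def next_if(self, pred):
        item = self.peek()
        if item is not None and pred(item):
            return self._buf.pop()
        return None  # rejected item stays buffered

p = Peekable([1, 2, 3])
assert p.peek() == 1                       # inspect without consuming
assert p.next_if(lambda x: x < 2) == 1     # accepted, so consumed
assert p.next_if(lambda x: x < 2) is None  # 2 rejected, not consumed
assert p.peek() == 2
```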
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] ACP: rust-lang/libs-team#176
- [ ] Implementation: #132976
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- Should `peek` take `&mut self` or `&self`? `&self` makes sense for iterators such as `core::slice::iter` but precludes implementing the trait on `Peekable`.
- What about the return type of `peek`? We could always make it return `Self::Item` like itertools’s [`PeekingNext`](https://docs.rs/itertools/latest/itertools/trait.PeekingNext.html), but that would prevent this trait from being implemented for consuming iterators such as `vec::IntoIter`, as well as `Peekable` itself.
- If we use an associated type, then what should be the bound for it: `Borrow`, `AsRef`, or something else?
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,653,709,341 | ant-design | Tag suggestion renderer in Select component with mode="tags" | ### What problem does this feature solve?
It can be difficult for the user to understand that the suggested value in the dropdown (the one that turns into a tag when selected) is not a pre-populated value but is based on what they typed. While developers can already customize Options in general, the ability to customize that one Option specifically would help distinguish it.
### What does the proposed API look like?
I would like to propose a new prop on the Select component `tagOptionRender` which would essentially be the same as `optionRender` (same interface) but specifically for that new tag option.
For example, this would allow the developer to convert something like "new tag" into "Create new tag..." using:
```tagOptionRender={(value: string)=>`Create ${value}...`}```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive | low | Major |
2,653,745,796 | ui | [bug]: Installing Sidebar component overwrites and causes breaking changes in tailwind.config.js | ### Describe the bug
I wanted to try out the new **Sidebar** component. So, I ran:
```bash
npx shadcn@latest add sidebar
```
But while `npm` is installing it, the CLI updates `tailwind.config.js`. Most of the configs I defined myself were retained, but the custom `theme.extend.spacing` had its properties and values removed.
Below is the `tailwind.config.js` before and after installing the **Sidebar** component:
- Before installing:
```js
// tailwind.config.js
/**
* @format
* @type {import('tailwindcss').Config}
*/
// convert px to rem under the hood
const BASE = 16; // your base size
const rem = (px, key = px) => ({ [key]: `${px / BASE}rem` });
export default {
darkMode: ["class"],
content: ["./index.html", "./src/**/*.{js,ts,jsx,tsx}"],
theme: {
extend: {
fontFamily: {
inter: ["Inter", "sans-serif"],
Montserrat: ["Montserrat", "sans-serif"],
},
borderRadius: {
lg: "var(--radius)",
md: "calc(var(--radius) - 2px)",
sm: "calc(var(--radius) - 4px)",
},
colors: {
lavenderBlue: "#EAECF0",
babyMint: "#EBFFF2",
slateGray: "#475467",
white: {
DEFAULT: "#FFFFFF",
100: "#F6FCF7",
200: "#F8F8F8",
},
black: {
100: "#060606",
200: "#484848",
},
yellow: {
100: "#DFA510",
},
smoke: "#F5F5F5",
darkGreen: {
100: "#0A3E19",
},
lightGreen: {
100: "#1A932E",
200: "#92E8AB",
},
green: {
100: "#8DAC94",
},
background: "hsl(var(--background))",
foreground: "hsl(var(--foreground))",
card: {
DEFAULT: "hsl(var(--card))",
foreground: "hsl(var(--card-foreground))",
},
popover: {
DEFAULT: "hsl(var(--popover))",
foreground: "hsl(var(--popover-foreground))",
},
primary: {
DEFAULT: "hsl(var(--primary))",
foreground: "hsl(var(--primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--secondary))",
foreground: "hsl(var(--secondary-foreground))",
},
muted: {
DEFAULT: "hsl(var(--muted))",
foreground: "hsl(var(--muted-foreground))",
},
accent: {
DEFAULT: "hsl(var(--accent))",
foreground: "hsl(var(--accent-foreground))",
},
destructive: {
DEFAULT: "hsl(var(--destructive))",
foreground: "hsl(var(--destructive-foreground))",
},
border: "hsl(var(--border))",
input: "hsl(var(--input))",
ring: "hsl(var(--ring))",
chart: {
1: "hsl(var(--chart-1))",
2: "hsl(var(--chart-2))",
3: "hsl(var(--chart-3))",
4: "hsl(var(--chart-4))",
5: "hsl(var(--chart-5))",
},
},
lineHeight: {
0: "0",
},
spacing: {
...rem(2),
...rem(3),
...rem(4),
...rem(6),
...rem(8),
...rem(10),
...rem(12),
...rem(14),
...rem(16),
...rem(18),
...rem(20),
...rem(22),
...rem(24),
...rem(26),
...rem(28),
...rem(30),
...rem(32),
...rem(38),
...
},
flexGrow: {
2: "2",
},
zIndex: {
1: 1,
},
screens: {
tablet: "640px",
// => @media (min-width: 640px) { ... }
laptop: "1024px",
// => @media (min-width: 1024px) { ... }
desktop: "1280px",
// => @media (min-width: 1280px) { ... }
"desktop-lg": "1380px",
wideScreen: "1536px",
// => @media (min-width: 1536px) { ... }
// for viewport heights
"screen-tall": { raw: "(min-height: 800px)" },
"screen-desktop": { raw: "(min-height: 1024px)" },
},
},
},
plugins: [
require("tailwindcss-animate"),
require("tailwind-scrollbar")({
preferredStrategy: "pseudoelements",
nocompatible: true,
}),
],
};
```
- After installing (I haven't touched anything):
```js
// tailwind.config.js
/**
* @format
* @type {import('tailwindcss').Config}
*/
// convert px to rem under the hood
const BASE = 16; // your base size
const rem = (px, key = px) => ({ [key]: `${px / BASE}rem` });
export default {
darkMode: ["class"],
content: ["./index.html", "./src/**/*.{js,ts,jsx,tsx}"],
theme: {
extend: {
fontFamily: {
inter: ["Inter", "sans-serif"],
Montserrat: ["Montserrat", "sans-serif"]
},
borderRadius: {
lg: 'var(--radius)',
md: 'calc(var(--radius) - 2px)',
sm: 'calc(var(--radius) - 4px)'
},
colors: {
lavenderBlue: '#EAECF0',
babyMint: '#EBFFF2',
slateGray: '#475467',
white: {
'100': '#F6FCF7',
'200': '#F8F8F8',
DEFAULT: '#FFFFFF'
},
black: {
'100': '#060606',
'200': '#484848'
},
yellow: {
'100': '#DFA510'
},
smoke: '#F5F5F5',
darkGreen: {
'100': '#0A3E19'
},
lightGreen: {
'100': '#1A932E',
'200': '#92E8AB'
},
green: {
'100': '#8DAC94'
},
background: 'hsl(var(--background))',
foreground: 'hsl(var(--foreground))',
card: {
DEFAULT: 'hsl(var(--card))',
foreground: 'hsl(var(--card-foreground))'
},
popover: {
DEFAULT: 'hsl(var(--popover))',
foreground: 'hsl(var(--popover-foreground))'
},
primary: {
DEFAULT: 'hsl(var(--primary))',
foreground: 'hsl(var(--primary-foreground))'
},
secondary: {
DEFAULT: 'hsl(var(--secondary))',
foreground: 'hsl(var(--secondary-foreground))'
},
muted: {
DEFAULT: 'hsl(var(--muted))',
foreground: 'hsl(var(--muted-foreground))'
},
accent: {
DEFAULT: 'hsl(var(--accent))',
foreground: 'hsl(var(--accent-foreground))'
},
destructive: {
DEFAULT: 'hsl(var(--destructive))',
foreground: 'hsl(var(--destructive-foreground))'
},
border: 'hsl(var(--border))',
input: 'hsl(var(--input))',
ring: 'hsl(var(--ring))',
chart: {
'1': 'hsl(var(--chart-1))',
'2': 'hsl(var(--chart-2))',
'3': 'hsl(var(--chart-3))',
'4': 'hsl(var(--chart-4))',
'5': 'hsl(var(--chart-5))'
},
sidebar: {
DEFAULT: 'hsl(var(--sidebar-background))',
foreground: 'hsl(var(--sidebar-foreground))',
primary: 'hsl(var(--sidebar-primary))',
'primary-foreground': 'hsl(var(--sidebar-primary-foreground))',
accent: 'hsl(var(--sidebar-accent))',
'accent-foreground': 'hsl(var(--sidebar-accent-foreground))',
border: 'hsl(var(--sidebar-border))',
ring: 'hsl(var(--sidebar-ring))'
}
},
lineHeight: {
'0': '0'
},
spacing: {},
flexGrow: {
'2': '2'
},
zIndex: {
'1': '1'
},
screens: {
tablet: '640px',
laptop: '1024px',
desktop: '1280px',
'desktop-lg': '1380px',
wideScreen: '1536px',
'screen-tall': {
raw: '(min-height: 800px)'
},
'screen-desktop': {
raw: '(min-height: 1024px)'
}
}
}
},
plugins: [
require("tailwindcss-animate"),
require("tailwind-scrollbar")({
preferredStrategy: "pseudoelements",
nocompatible: true,
}),
],
};
```
### Affected component/components
Sidebar
### How to reproduce
1. Setup your `tailwind.config.js` and note the configs.
2. Run `npx shadcn@latest add sidebar` and you'd noticed `tailwind.config.js` has changed.
3. Compare result from **step 1** and **step 2** to see what has been removed and what has been added.
4. The problem is that it shouldn't do anything other than add the theme entries it needs. It shouldn't remove the configs I wrote under `theme.extend.spacing`.

### Codesandbox/StackBlitz link
The above covers all these.
### Logs
```bash
npx shadcn@latest add sidebar
✔ Checking registry.
✔ Updating tailwind.config.js
✔ Updating src/index.css
✔ Installing dependencies.
✔ The file button.tsx already exists. Would you like to overwrite? … yes
✔ The file input.tsx already exists. Would you like to overwrite? … yes
✔ Created 6 files:
- src/components/ui/sidebar.tsx
- src/components/ui/separator.tsx
- src/components/ui/sheet.tsx
- src/components/ui/tooltip.tsx
- src/hooks/use-mobile.tsx
- src/components/ui/skeleton.tsx
ℹ Updated 2 files:
- src/components/ui/button.tsx
- src/components/ui/input.tsx
```
### System Info
```bash
npm v10.9.0
Ubuntu OS, 24.04 LTS.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,653,764,362 | deno | LSP - Better mapping between specifier and a package.json dep | See https://github.com/denoland/deno/pull/26439#discussion_r1838882482_
Instead of doing what it does, it would be better to actually inspect the package.json to figure out all the possible exports of a package. | bug,lsp | low | Minor |
2,653,765,258 | go | syscall: TestSyscallAllocations/Syscall failures | ```
#!watchflakes
default <- pkg == "syscall" && test == "TestSyscallAllocations/Syscall"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731441288535464225)):
=== RUN TestSyscallAllocations/Syscall
syscall_windows_test.go:256: allocs = 5, want 0
--- FAIL: TestSyscallAllocations/Syscall (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,653,766,596 | deno | Refactor lsp to separate service structs from data/state | See https://github.com/denoland/deno/pull/26439#discussion_r1838840927_
| refactor,lsp | low | Minor |
2,653,782,655 | pytorch | DISABLED test_comprehensive_diagonal_copy_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_diagonal_copy_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32890545407).
Over the past 3 hours, it has been determined flaky in 13 workflow(s) with 13 failures and 13 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_diagonal_copy_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1395, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
^^^^^^^^
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 13: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float64], args=(), kwargs={'offset': '2', 'dim1': '0', 'dim2': '1'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=13 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_diagonal_copy_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,653,783,572 | pytorch | DISABLED test_comprehensive_fft_fftshift_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_fft_fftshift_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32886793920).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_fft_fftshift_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1395, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
^^^^^^^^
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(9, 10), device="cuda:0", dtype=torch.float64], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_fft_fftshift_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,653,955,108 | go | build: build failure on gotip-linux-arm64_c4ah72-perf_vs_release | ```
#!watchflakes
default <- builder == "gotip-linux-arm64_c4ah72-perf_vs_release" && repo == "go" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731450899444144881)):
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
2024/11/12 18:37:23 Load average: 11.99 3.39 1.16 1/1005 42251
2024/11/12 18:37:23 Waiting for load average to drop below 0.20...
2024/11/12 18:37:53 Load average: 7.26 3.06 1.12 1/1006 42251
2024/11/12 18:37:53 Waiting for load average to drop below 0.20...
2024/11/12 18:38:23 Load average: 4.40 2.77 1.09 1/1006 42251
2024/11/12 18:38:23 Waiting for load average to drop below 0.20...
2024/11/12 18:38:53 Load average: 2.67 2.50 1.05 2/1006 42251
2024/11/12 18:38:53 Waiting for load average to drop below 0.20...
...
[sweet] Running benchmark tile38 for experiment: run 8
[sweet] Running benchmark tile38 for baseline: run 8
[sweet] Running benchmark tile38 for experiment: run 9
[sweet] Running benchmark tile38 for baseline: run 9
[sweet] Running benchmark tile38 for experiment: run 10
[sweet] Running benchmark tile38 for baseline: run 10
[sweet] error: failed to execute benchmarks: cockroachdb
2024/11/12 20:56:32 Error running sweet: error running sweet run: exit status 1
2024/11/12 20:56:32 FAIL
exit status 1
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | medium | Critical |
2,653,955,225 | go | internal/trace: TestTraceStressStartStop/AllocFree failures | ```
#!watchflakes
default <- pkg == "internal/trace" && test == "TestTraceStressStartStop/AllocFree"
```
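The `#!watchflakes` rule above is a field-match predicate over failure records. As a rough stdlib-only sketch (the field names and the matching semantics here are illustrative assumptions, not the real watchflakes implementation), such a rule could be evaluated like this:

```python
# Hypothetical sketch of matching a watchflakes-style rule against a
# failure record. Field names and logic are illustrative assumptions only.
def matches(rule: dict, record: dict) -> bool:
    """True when every field constrained by the rule equals the record's value."""
    return all(record.get(field) == expected for field, expected in rule.items())

rule = {"pkg": "internal/trace", "test": "TestTraceStressStartStop/AllocFree"}
record = {
    "pkg": "internal/trace",
    "test": "TestTraceStressStartStop/AllocFree",
    "builder": "linux-amd64",
}

print(matches(rule, record))  # True: both constrained fields match
```

Fields not mentioned by the rule (such as `builder` here) are ignored, which is why one rule can collect failures across many builders.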
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731426847419479825)):
=== RUN TestTraceStressStartStop/AllocFree
exec.go:213: test timed out while running command: /home/swarming/.swarming/w/ir/x/w/goroot/bin/go run -race testdata/testprog/stress-start-stop.go
trace_test.go:610: signal: killed
--- FAIL: TestTraceStressStartStop/AllocFree (1511.99s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,654,041,560 | pytorch | C++ standard library functions not found when compiling c10d debug handlers | ### 🐛 Describe the bug
The following error was obtained when building [this](https://github.com/AnacondaRecipes/pytorch-feedstock/tree/PKG-5908-update-2.5.0) conda recipe:
```
[2009/5841] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/distributed/c10d/control_plane/Handlers.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/distributed/c10d/control_plane/Handlers.cpp.o
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_build_env/bin/arm64-apple-darwin20.0.0-clang++ -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DPROTOBUF_USE_DLLS -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/aten/src -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/aten/src -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/onnx -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/third_party/onnx -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/nlohmann -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/api -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/api/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/caffe2/aten/src/TH -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/caffe2/aten/src/TH -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/caffe2/aten/src -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/caffe2/../aten/src -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc 
-I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/miniz-2.1.0 -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/kineto/libkineto/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/cpp-httplib -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/aten/src/ATen/.. -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/FXdiv/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/c10/.. -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/pthreadpool/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/cpuinfo/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/NNPACK/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/FP16/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/tensorpipe -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/third_party/tensorpipe -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/tensorpipe/third_party/libnop/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/fmt/include -I/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/flatbuffers/include -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/build/third_party/gloo 
-isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/cmake/../third_party/gloo -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/XNNPACK/include -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p/include/eigen3 -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/INTERFACE -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/third_party/nlohmann/include -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/caffe2 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fmessage-length=0 -isystem /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p/include -fdebug-prefix-map=/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work=/usr/local/src/conda/libtorch-2.5.1 -fdebug-prefix-map=/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p=/usr/local/src/conda-prefix -Wno-deprecated-declarations -Wno-unknown-warning-option -Wno-error=unused-command-line-argument -Wno-error=vla-cxx-extension -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_PYTORCH_QNNPACK -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DUSE_XNNPACK 
-DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=braced-scalar-init -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wvla-extension -Wsuggest-override -Wnewline-eof -Winconsistent-missing-override -Winconsistent-missing-destructor-override -Wno-pass-failed -Wno-error=old-style-cast -Wconstant-conversion -Wno-aligned-allocation-unavailable -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -DUSE_MPS -Wno-unused-private-field -Wno-missing-braces -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk -mmacosx-version-min=11.1 -fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-private-field -fvisibility=hidden -O2 -Wmissing-prototypes -Werror=missing-prototypes -fopenmp=libomp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/distributed/c10d/control_plane/Handlers.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/distributed/c10d/control_plane/Handlers.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/distributed/c10d/control_plane/Handlers.cpp.o -c /Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/distributed/c10d/control_plane/Handlers.cpp
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/distributed/c10d/control_plane/Handlers.cpp:50:8: error: no template named 'unordered_map' in namespace 'std'
std::unordered_map<std::string, HandlerFunc> handlers_{};
~~~~~^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/distributed/c10d/control_plane/Handlers.cpp:37:28: error: implicit instantiation of undefined template 'std::vector<std::string>'
std::vector<std::string> getHandlerNames() {
^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_build_env/bin/../include/c++/v1/iosfwd:260:28: note: template is declared here
class _LIBCPP_TEMPLATE_VIS vector;
^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/distributed/c10d/control_plane/Handlers.cpp:40:30: error: implicit instantiation of undefined template 'std::vector<std::string>'
std::vector<std::string> names;
^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_build_env/bin/../include/c++/v1/iosfwd:260:28: note: template is declared here
class _LIBCPP_TEMPLATE_VIS vector;
^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/distributed/c10d/control_plane/Handlers.cpp:73:26: error: implicit instantiation of undefined template 'std::vector<std::string>'
std::vector<std::string> getHandlerNames() {
^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/_build_env/bin/../include/c++/v1/iosfwd:260:28: note: template is declared here
class _LIBCPP_TEMPLATE_VIS vector;
^
/Users/dpetry/miniconda3/envs/bld/conda-bld/libtorch_1731431574707/work/torch/csrc/distributed/c10d/control_plane/Handlers.cpp:50:48: warning: private field 'handlers_' is not used [-Wunused-private-field]
std::unordered_map<std::string, HandlerFunc> handlers_{};
^
1 warning and 4 errors generated.
```
I can't see an `#include` for `unordered_map` or `vector` in control_plane/Handlers.hpp, but I'd be surprised if this is an error in the code. Are you able to tell me what I might be doing wrong?
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.6
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 16:25:56) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit-Mach-O
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,654,046,032 | pytorch | DISABLED test_comprehensive_rsub_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_rsub_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32895973155).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_rsub_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 4: SampleInput(input=Tensor[size=(5, 10, 5), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(10, 5), device="cuda:0", dtype=torch.float32]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=4 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_rsub_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,654,054,043 | pytorch | Using @torch.compile, there are slight differences in output when run on different GPUs on the same machine. | Here's my code,
And my torch version is, torch 2.2.1+cu121
The same code, when used with torch.compile, produces different outputs on 8 GPUs of the same machine, with the maximum error being **4.7684e-07**. However, when the @torch.compile optimization is not used, the outputs from all eight cards are identical.
```python
@torch.compile
def rmsnorm(hidden_states, weight, eps, dtype):
    #import pdb;pdb.set_trace()
    variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
    # variance = hidden_states.pow(2).mean(-1, keepdim=True)
    hidden_states = hidden_states * torch.rsqrt(variance + eps)
    hidden_states = hidden_states.to(dtype)
    hidden_states = hidden_states * weight
    return hidden_states
```
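A plausible explanation for a ~5e-7 spread (an assumption, not confirmed from the report itself) is floating-point non-associativity: a compiled reduction kernel (e.g. for `.mean()`) may sum in a different order than the eager path, and different orders legitimately round differently in float32. This stdlib-only sketch illustrates the underlying effect:

```python
# Floating-point addition is not associative, so two summation orders
# for the same values can produce slightly different results. This is a
# stdlib-only illustration of the effect, not PyTorch-specific code.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order

print(left == right)      # False: the two orders disagree
print(abs(left - right))  # a tiny rounding difference (~1e-16 in float64)
```

In float32 with long reductions the same mechanism produces differences around 1e-7, which matches the magnitude reported above.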
cc @mruberry @kurtamohler @ezyang @chauhang @penguinwu | triaged,module: determinism,oncall: pt2 | low | Critical |
2,654,084,726 | ui | [bug]: Popover's scrollbar does not work in the Dialog component | ### Describe the bug
Popover's scrollbar does not work in the Dialog component

### Affected component/components
Popover and Dialog
### How to reproduce
```tsx
'use client';

import { zodResolver } from '@hookform/resolvers/zod';
import { Check, ChevronsUpDown } from 'lucide-react';
import { useForm } from 'react-hook-form';
import { z } from 'zod';

import { Button } from '@/components/ui/button';
import {
  Dialog,
  DialogContent,
  DialogDescription,
  DialogFooter,
  DialogHeader,
  DialogTitle,
  DialogTrigger,
} from '@/components/ui/dialog';
import { cn } from '@/lib/utils';
import {
  Command,
  CommandEmpty,
  CommandGroup,
  CommandInput,
  CommandItem,
  CommandList,
} from '@/components/ui/command';
import {
  Form,
  FormControl,
  FormDescription,
  FormField,
  FormItem,
  FormLabel,
  FormMessage,
} from '@/components/ui/form';
import {
  Popover,
  PopoverContent,
  PopoverTrigger,
} from '@/components/ui/popover';

const languages = [
  { label: 'English', value: 'en' },
  { label: 'French', value: 'fr' },
  { label: 'German', value: 'de' },
  { label: 'Spanish', value: 'es' },
  { label: 'Portuguese', value: 'pt' },
  { label: 'Russian', value: 'ru' },
  { label: 'Japanese', value: 'ja' },
  { label: 'Korean', value: 'ko' },
  { label: 'Chinese', value: 'zh' },
] as const;

const FormSchema = z.object({
  language: z.string({
    required_error: 'Please select a language.',
  }),
});

export function DialogText() {
  const form = useForm<z.infer<typeof FormSchema>>({
    resolver: zodResolver(FormSchema),
  });

  function onSubmit(data: z.infer<typeof FormSchema>) {}

  return (
    <Dialog>
      <DialogTrigger asChild>
        <Button variant="outline">Edit Profile</Button>
      </DialogTrigger>
      <DialogContent className="sm:max-w-[425px]">
        <DialogHeader>
          <DialogTitle>Edit profile</DialogTitle>
          <DialogDescription>
            Make changes to your profile here. Click save when you&apos;re done.
          </DialogDescription>
        </DialogHeader>
        <Form {...form}>
          <form onSubmit={form.handleSubmit(onSubmit)} className="space-y-6">
            <FormField
              control={form.control}
              name="language"
              render={({ field }) => (
                <FormItem className="flex flex-col">
                  <FormLabel>Language</FormLabel>
                  <Popover>
                    <PopoverTrigger asChild>
                      <FormControl>
                        <Button
                          variant="outline"
                          role="combobox"
                          className={cn(
                            'w-[200px] justify-between',
                            !field.value && 'text-muted-foreground',
                          )}
                        >
                          {field.value
                            ? languages.find(
                                (language) => language.value === field.value,
                              )?.label
                            : 'Select language'}
                          <ChevronsUpDown className="ml-2 h-4 w-4 shrink-0 opacity-50" />
                        </Button>
                      </FormControl>
                    </PopoverTrigger>
                    <PopoverContent className="w-[200px] p-0">
                      <Command>
                        <CommandInput placeholder="Search language..." />
                        <CommandList>
                          <CommandEmpty>No language found.</CommandEmpty>
                          <CommandGroup>
                            {languages.map((language) => (
                              <CommandItem
                                value={language.label}
                                key={language.value}
                                onSelect={() => {
                                  form.setValue('language', language.value);
                                }}
                              >
                                {language.label}
                                <Check
                                  className={cn(
                                    'ml-auto',
                                    language.value === field.value
                                      ? 'opacity-100'
                                      : 'opacity-0',
                                  )}
                                />
                              </CommandItem>
                            ))}
                          </CommandGroup>
                        </CommandList>
                      </Command>
                    </PopoverContent>
                  </Popover>
                  <FormDescription>
                    This is the language that will be used in the dashboard.
                  </FormDescription>
                  <FormMessage />
                </FormItem>
              )}
            />
            <Button type="submit">Submit</Button>
          </form>
        </Form>
        <DialogFooter>
          <Button type="submit">Save changes</Button>
        </DialogFooter>
      </DialogContent>
    </Dialog>
  );
}
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
mac
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,654,105,062 | pytorch | Error with fused AdamW | ### 🐛 Describe the bug
```
File "/mnt/clusterstorage/workspace/kevin/ml-monorepo/chadfusion/train_fsdp.py", line 363, in fsdp_train
scaler.step(opt)
File "/usr/local/lib/python3.10/dist-packages/torch/amp/grad_scaler.py", line 443, in step
retval = optimizer.step(*args, **kwargs_)
File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 487, in wrapper
out = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 91, in _use_grad
ret = func(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 220, in step
adamw(
File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 782, in adamw
func(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 712, in _fused_adamw
torch._foreach_sub_(
RuntimeError: output with shape [] doesn't match the broadcast shape [1]
```
Line 712 in adamw.py is this code:
```
if device_found_inf is not None:
torch._foreach_sub_(
device_state_steps, [device_found_inf] * len(device_state_steps)
)
```
The error only appears if I set `fused=True` in AdamW; it does not appear with a basic test case.
World size is 1.
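The shape mismatch itself can be reproduced in isolation (a minimal sketch based on my reading of the traceback — the assumption is that the fused path's `device_state_steps` entries are 0-dim tensors while GradScaler's `found_inf` tensor has shape `[1]`):

```python
import torch

# Assumption (inferred from the traceback): `device_state_steps` holds
# 0-dim step counters, while GradScaler's `found_inf` has shape [1].
device_state_steps = [torch.tensor(3.0)]   # shape [] -- 0-dim
device_found_inf = torch.zeros(1)          # shape [1]

try:
    # Same call as line 712 of adamw.py: in-place sub with a shape-[1] rhs
    # cannot write its broadcast result back into a 0-dim tensor.
    torch._foreach_sub_(
        device_state_steps, [device_found_inf] * len(device_state_steps)
    )
except RuntimeError as e:
    msg = str(e)

print(msg)  # output with shape [] doesn't match the broadcast shape [1]
```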
### Versions
```
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang | module: optimizer,triaged,module: fsdp | low | Critical |
2,654,127,920 | godot | Cannot change color of path through the editor settings | ### Tested versions
Issue occurs in 4.3 and seems to work in 4.2
### System information
Godot v4.3.stable.mono - macOS 14.4.1 - Vulkan (Forward+) - integrated Apple M1 Pro - Apple M1 Pro (10 Threads)
### Issue description
The editor settings have a path color under 3D gizmos, but it seems to be broken.
The path/curve is nearly invisible on bright maps, and it would be helpful to be able to change the color.
### Steps to reproduce
Place a curve, then go to Editor Settings > Editors > 3D Gizmos > Path, change the color, and restart.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing,regression,topic:3d | low | Critical |
2,654,163,562 | TypeScript | Auto-imports from code actions and completions don't match order that "Organize imports" outputs | ### 🔎 Search Terms
"auto-import", "order", "import", "organize imports"
### 🕗 Version & Regression Information
- This is the behavior in every version I tried.
### ⏯ Playground Link
_No response_
### 💻 Code
In a simple npm package with just TypeScript 5.6.3 installed, create a TypeScript file with the following content:
```ts
import { foo } from "@foo";
import { bar } from "@bar";
import { a } from "workspace-a";
import { b } from "workspace-b";
console.log(foo + bar + a + b)
```
Now, at the end of the file, type `is` and select the `isAccessor` completion, which auto-imports it from TypeScript.
### 🙁 Actual behavior
The import from the completion is added below `import { b } from "workspace-b";`, but notice how the import gets moved to the middle if one runs the "Organize imports" command in VS Code.
### 🙂 Expected behavior
For completions and code actions that add an import to use the same ordering logic that the "Organize imports" command uses. After all, this seems to be advertised by https://github.com/microsoft/TypeScript/pull/52115 ("These rules will also apply to auto-imports and import fixes, which will use the user's collation preferences to determine if a list of import or export specifiers is already sorted before selecting an insertion point").
### Additional information about the issue
_No response_ | Bug | low | Minor |
2,654,210,839 | rust | Tracking Issue for Generic Constant Arguments MVP | <!--
NOTE: For library features, please use the "Library Tracking Issue" template instead.
Thank you for creating a tracking issue! 📜 Tracking issues are for tracking a
feature from implementation to stabilisation. Make sure to include the relevant
RFC for the feature if it has one. Otherwise provide a short summary of the
feature and link any relevant PRs or issues, and remove any sections that are
not relevant to the feature.
Remember to add team labels to the tracking issue.
For a language team feature, this would e.g., be `T-lang`.
Such a feature should also be labeled with e.g., `F-my_feature`.
This label is used to associate issues (e.g., bugs and design questions) to the feature.
-->
This is a tracking issue for the prototype described in rust-lang/rust-project-goals#100.
The feature gate for the issue is `#![feature(min_generic_const_args)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [x] Add `ConstArgKind::Path` (https://github.com/rust-lang/rust/pull/125915)
- [x] Use `ConstArgKind::Path` for all single-segment paths (https://github.com/rust-lang/rust/pull/131081)
- [ ] Use `ConstArgKind::Path` for all paths
- [ ] Implement support for
- [ ] #132985 (#134873)
- [ ] #132986
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
TODO
### Implementation history
- #125915
- #129137
- #131081
- #134873 | T-lang,T-compiler,C-tracking-issue,A-const-generics,PG-const-generics,T-types,F-min_generic_const_args | low | Critical |
2,654,229,514 | rust | ICE: `rust abi shouldn't use on_stack` | auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Clink-dead-code=true
#![feature(rust_cold_cc)]
#[repr(C)]
struct F1(*const ());
#[repr(C)]
struct F2(*const ());
#[repr(C)]
struct F3(*const ());
#[repr(C)]
struct F4(*const ());
#[repr(C)]
struct F5(*const ());
#[repr(C)]
struct F6(*const ());
#[repr(C)]
struct B {
f1: F1,
f2: F2,
f3: F3,
f4: F4,
f5: F5,
f6: F6,
}
extern "rust-cold" fn foo(_: B) {}
fn main() {}
````
original:
````rust
//@ check-pass
#![recursion_limit = "5"]
#![allow(unused)]
#![deny(improper_ctypes)]
#[repr(C)]
struct F1(*const ());
#[repr(C)]
struct F2(*const ());
#[repr(C)]
struct F3(*const ());
#[repr(C)]
struct F4(*const ());
#[repr(C)]
struct F5(*const ());
#[repr(C)]
struct F6(*const ());
#[repr(C)]
struct B {
f1: F1,
f2: F2,
f3: F3,
f4: F4,
f5: F5,
f6: F6,
}
extern "rust-cold" fn foo(_: B) {}
fn main() {}
````
Version information
````
rustc 1.84.0-nightly (f7273e004 2024-11-12)
binary: rustc
commit-hash: f7273e0044ad8f35ad27282e4ab776af50b61a54
commit-date: 2024-11-12
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/f7273e0044ad8f35ad27282e4ab776af50b61a54/compiler/rustc_ty_utils/src/abi.rs#L471-L483
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zcrate-attr=feature(rust_cold_cc) -Clink-dead-code=true`
<details><summary><strong>Program output</strong></summary>
<p>
```
warning: struct `F1` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:2:8
|
2 | struct F1(*const ());
| ^^
|
= note: `#[warn(dead_code)]` on by default
warning: struct `F2` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:4:8
|
4 | struct F2(*const ());
| ^^
warning: struct `F3` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:6:8
|
6 | struct F3(*const ());
| ^^
warning: struct `F4` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:8:8
|
8 | struct F4(*const ());
| ^^
warning: struct `F5` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:10:8
|
10 | struct F5(*const ());
| ^^
warning: struct `F6` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:12:8
|
12 | struct F6(*const ());
| ^^
warning: struct `B` is never constructed
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:15:8
|
15 | struct B {
| ^
warning: function `foo` is never used
--> /tmp/icemaker_global_tempdir.96zRzYL4bT05/rustc_testrunner_tmpdir_reporting.H7K5B09iYkG9/mvce.rs:24:23
|
24 | extern "rust-cold" fn foo(_: B) {}
| ^^^
thread 'rustc' panicked at compiler/rustc_ty_utils/src/abi.rs:477:17:
rust abi shouldn't use on_stack
stack backtrace:
0: 0x7b42e5444d7a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h89eef71006a12021
1: 0x7b42e5c04126 - core::fmt::write::ha3e9ca569d22f3a9
2: 0x7b42e70d5e91 - std::io::Write::write_fmt::h99880bc7e97ca82b
3: 0x7b42e5444bd2 - std::sys::backtrace::BacktraceLock::print::hfc03349f2a3f19d7
4: 0x7b42e54470d6 - std::panicking::default_hook::{{closure}}::h70f5ed1f326e175f
5: 0x7b42e5446f20 - std::panicking::default_hook::hb18598c1d85282d8
6: 0x7b42e44d5901 - std[1d66e2c2164e10e5]::panicking::update_hook::<alloc[15deac6fe0f616b3]::boxed::Box<rustc_driver_impl[d4843947ea36b84c]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7b42e54477e8 - std::panicking::rust_panic_with_hook::hd24a003b24cf5cbe
8: 0x7b42e5447586 - std::panicking::begin_panic_handler::{{closure}}::hdc473c19107af62a
9: 0x7b42e5445229 - std::sys::backtrace::__rust_end_short_backtrace::hac257b6aa77e4692
10: 0x7b42e544727c - rust_begin_unwind
11: 0x7b42e1ed0bc0 - core::panicking::panic_fmt::hb7c6bf9f04f7c675
12: 0x7b42e5f03fb9 - rustc_ty_utils[35d0d47d2c9173c5]::abi::fn_abi_new_uncached
13: 0x7b42e5eed80f - rustc_ty_utils[35d0d47d2c9173c5]::abi::fn_abi_of_instance
14: 0x7b42e5eec043 - rustc_query_impl[79133480aa573a39]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[79133480aa573a39]::query_impl::fn_abi_of_instance::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6fd9b5ee8acc4b83]::query::erase::Erased<[u8; 16usize]>>
15: 0x7b42e5ee9fe3 - rustc_query_system[f466237e30d2c716]::query::plumbing::try_execute_query::<rustc_query_impl[79133480aa573a39]::DynamicConfig<rustc_query_system[f466237e30d2c716]::query::caches::DefaultCache<rustc_middle[6fd9b5ee8acc4b83]::ty::ParamEnvAnd<(rustc_middle[6fd9b5ee8acc4b83]::ty::instance::Instance, &rustc_middle[6fd9b5ee8acc4b83]::ty::list::RawList<(), rustc_middle[6fd9b5ee8acc4b83]::ty::Ty>)>, rustc_middle[6fd9b5ee8acc4b83]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[79133480aa573a39]::plumbing::QueryCtxt, false>
16: 0x7b42e5ee9bfa - rustc_query_impl[79133480aa573a39]::query_impl::fn_abi_of_instance::get_query_non_incr::__rust_end_short_backtrace
17: 0x7b42e2e295ae - rustc_monomorphize[e87bc633dae94d23]::mono_checks::check_mono_item
18: 0x7b42e63adeae - rustc_query_impl[79133480aa573a39]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[79133480aa573a39]::query_impl::check_mono_item::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6fd9b5ee8acc4b83]::query::erase::Erased<[u8; 0usize]>>
19: 0x7b42e63ad7ea - rustc_query_system[f466237e30d2c716]::query::plumbing::try_execute_query::<rustc_query_impl[79133480aa573a39]::DynamicConfig<rustc_query_system[f466237e30d2c716]::query::caches::DefaultCache<rustc_middle[6fd9b5ee8acc4b83]::ty::instance::Instance, rustc_middle[6fd9b5ee8acc4b83]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[79133480aa573a39]::plumbing::QueryCtxt, false>
20: 0x7b42e63ad493 - rustc_query_impl[79133480aa573a39]::query_impl::check_mono_item::get_query_non_incr::__rust_end_short_backtrace
21: 0x7b42e63b47dd - rustc_monomorphize[e87bc633dae94d23]::collector::collect_items_rec::{closure#0}
22: 0x7b42e6371c43 - rustc_monomorphize[e87bc633dae94d23]::collector::collect_items_rec
23: 0x7b42e6369ab4 - rustc_monomorphize[e87bc633dae94d23]::partitioning::collect_and_partition_mono_items
24: 0x7b42e6c18d24 - rustc_query_impl[79133480aa573a39]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[79133480aa573a39]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6fd9b5ee8acc4b83]::query::erase::Erased<[u8; 24usize]>>
25: 0x7b42e6c18d09 - <rustc_query_impl[79133480aa573a39]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[b88d2412ee64a335]::ops::function::FnOnce<(rustc_middle[6fd9b5ee8acc4b83]::ty::context::TyCtxt, ())>>::call_once
26: 0x7b42e6c188c9 - rustc_query_system[f466237e30d2c716]::query::plumbing::try_execute_query::<rustc_query_impl[79133480aa573a39]::DynamicConfig<rustc_query_system[f466237e30d2c716]::query::caches::SingleCache<rustc_middle[6fd9b5ee8acc4b83]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[79133480aa573a39]::plumbing::QueryCtxt, false>
27: 0x7b42e6c185e0 - rustc_query_impl[79133480aa573a39]::query_impl::collect_and_partition_mono_items::get_query_non_incr::__rust_end_short_backtrace
28: 0x7b42e6b339c7 - <rustc_codegen_llvm[b1599222c2b690f]::LlvmCodegenBackend as rustc_codegen_ssa[b3f0f00cfe7398f]::traits::backend::CodegenBackend>::codegen_crate
29: 0x7b42e6cbd367 - <rustc_interface[8115c8704cecad87]::queries::Linker>::codegen_and_build_linker
30: 0x7b42e6ad606a - rustc_interface[8115c8704cecad87]::interface::run_compiler::<core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>, rustc_driver_impl[d4843947ea36b84c]::run_compiler::{closure#0}>::{closure#1}
31: 0x7b42e6b3d950 - std[1d66e2c2164e10e5]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8115c8704cecad87]::util::run_in_thread_with_globals<rustc_interface[8115c8704cecad87]::util::run_in_thread_pool_with_globals<rustc_interface[8115c8704cecad87]::interface::run_compiler<core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>, rustc_driver_impl[d4843947ea36b84c]::run_compiler::{closure#0}>::{closure#1}, core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>>::{closure#0}, core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>>
32: 0x7b42e6b3dd6b - <<std[1d66e2c2164e10e5]::thread::Builder>::spawn_unchecked_<rustc_interface[8115c8704cecad87]::util::run_in_thread_with_globals<rustc_interface[8115c8704cecad87]::util::run_in_thread_pool_with_globals<rustc_interface[8115c8704cecad87]::interface::run_compiler<core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>, rustc_driver_impl[d4843947ea36b84c]::run_compiler::{closure#0}>::{closure#1}, core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>>::{closure#0}, core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[b88d2412ee64a335]::result::Result<(), rustc_span[b00456e1c008159a]::ErrorGuaranteed>>::{closure#1} as core[b88d2412ee64a335]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
33: 0x7b42e6b3e839 - std::sys::pal::unix::thread::Thread::new::thread_start::hfef00d5abafeaf0a
34: 0x7b42e841639d - <unknown>
35: 0x7b42e849b49c - <unknown>
36: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (f7273e004 2024-11-12) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z crate-attr=feature(rust_cold_cc) -C link-dead-code=true -Z dump-mir-dir=dir
query stack during panic:
#0 [fn_abi_of_instance] computing call ABI of `foo`
#1 [check_mono_item] monomorphization-time checking
end of query stack
warning: 8 warnings emitted
```
</p>
</details>
@rustbot label +F-rust_cold_cc | I-ICE,T-compiler,C-bug,S-bug-has-test,A-ABI,F-rust_cold_cc | low | Critical |
2,654,274,318 | godot | [4.4 dev 4] RenderingServer.InstanceCreate() with large instance count triggers seg fault | ### Tested versions
- works very well in 4.4 dev 3
- breaks in 4.4 dev 4
### System information
Windows 10 and 11 - Godot 4.4 dev 4 - Ryzen 5800x3d + GTX 3080ti & Ryzen 7435HS + GTX 4060
### Issue description
Calls made for RenderingServer.InstanceCreate started failing in 4.4 dev4 with the following exception message:
Exception thrown at 0x00007FF6C409E72A in Godot_v4.4-dev4_mono_win64.exe: 0xC0000005: Access violation writing location 0x00000000000001A0.

Call stack looks like this.
### Steps to reproduce
Try to build and launch the project attached below
It will launch quickly in 4.4 dev 3
It will crash in 4.4 dev 4
The instance count is essential for achieving the gameplay, effect & scale of my game.
And it actually works quite well in 4.4 dev 3: the non-minimal repro project used to run at 150 fps, fully animated, on my 3080ti/5800x3d build.
### Minimal reproduction project (MRP)
[renderingservertest.zip](https://github.com/user-attachments/files/17726968/renderingservertest.zip)
| bug,topic:rendering,crash,regression | low | Critical |
2,654,288,649 | rust | Support `ConstArgKind::Path`s for const struct/variant constructors | Requires encoding const ctors in MIR metadata (not currently done for some reason). May affect perf. cc #131081. | C-enhancement,T-compiler,F-min_generic_const_args | low | Minor |
2,654,290,389 | electron | [Upgrades Follow Up]: re-enable thin LTO on Mac release builds | When trying to reland Node 22 in https://github.com/electron/electron/pull/44597, we discovered that thin LTO was the root cause of several symbols being stripped from the release build only on MacOS. This does not seem to be an existing problem in either Chromium or upstream Node.
While we debug this, we do want to land Node 22 in main, and root out any bugs well before an alpha/beta branch point. This issue is to reland thin LTO on Mac before Electron 35's stable release. | component/node-integration,upgrade-follow-up | low | Critical |
2,654,290,534 | rust | Support `ConstArgKind::Path` for uses of `static` paths | Requires allowing constarg -> valtree lowering outside of ctfe. cc #131081 | C-enhancement,T-compiler,F-min_generic_const_args | low | Minor |
2,654,369,549 | pytorch | [autograd engine && compiled autograd] support for partial backward compilation | ### 🚀 The feature, motivation and pitch
Hello: in our task, we aim to optimize some specific layers during backpropagation while keeping the other layers' backpropagation and the complete forward pass in eager mode. As PyTorch doesn't currently support partial backward compilation directly, we have attempted to achieve this by decomposing the full backward pass into blocked pieces via `torch.autograd.backward(..., inputs=[blocked_parameters, boundary_activation])`:
```python
# backward-block2: eager
torch.autograd.backward(loss, inputs=[model_block_2.parameters, activation_1])
# backward-block1: compiled
with torch._dynamo.compiled_autograd.enable(torch.compile(backend=my_compiler, fullgraph=False)):
    torch.autograd.backward(activation_1, activation_1.grad, inputs=[model_block_1.parameters, activation_0])
# backward-block0: eager
torch.autograd.backward(activation_0, activation_0.grad, inputs=[model_block_0.parameters])
```
During the process, we encountered some problems and made some attempts as follows:
- In "backward-block2", we must set `retain_graph=True`, otherwise the variables related to `activation_1` will be released and the subsequent backward block cannot be launched successfully. However, `retain_graph=True` leads to non-negligible memory overhead, and `activation_1`'s grad_fn is not necessary for computing `activation_1`'s gradient. Is there any way to prevent the inclusion of `activation_1`'s grad_fn in the GraphTask?
- In "backward-block1", when we pass a non-leaf activation into `inputs`, compiled autograd throws the error `RuntimeError: retains_grad_hooks not implemented for compiled autograd`. We temporarily commented out the `TORCH_CHECK` for `retain_grad_hook` and accessed `activation_0`'s gradient via a `param_hook` that writes the gradient into a Python global variable, which, however, is very cumbersome. We are quite curious whether there are plans to support `retain_grad_hook` in compiled autograd.
- `torch.autograd.backward` encounters an issue when `inputs` includes non-leaf activations and a weight that is shared across different layers, such as the `tied_weight` in huggingface models. The error is `RuntimeError: could not compute gradients for some functions`. It seems that the autograd engine attempts to incorporate all compute operations related to the weight gradient into the GraphTask, but in doing so some operations are overlooked and certain dependencies cannot be executed as expected. However, in our specific task we would prefer the autograd engine not to include all dependencies associated with the shared weight, as it hinders the execution of the blocked backwards. We are wondering if there is any interest in implementing something like a `partial gradient mode` that generates only the minimal subgraph needed to reach the targets in `inputs`, potentially allowing certain input branches of the `AccumulateGrad` node to be omitted?
- When we try to optimize the backward pass with compiled autograd, some unexpected `torch.clone` calls appear in the traced fx graph, which increases the performance overhead a lot. For more detailed information, see https://github.com/pytorch/pytorch/issues/139862. Is there any way to prevent these `torch.clone` calls from being generated in the traced fx graph?
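For illustration, the hook-based workaround mentioned in the second bullet can be sketched as follows (a toy example with made-up shapes, not our actual model; the point is that a tensor hook on a non-leaf activation captures its gradient without `retain_grad()`):

```python
import torch

captured = {}

x = torch.ones(2, requires_grad=True)
act = x * 3                                   # non-leaf "boundary" activation
# Tensor hook fires with act's incoming gradient during backward,
# stashing it in a global dict instead of relying on retains_grad.
act.register_hook(lambda g: captured.setdefault("act_grad", g))

(act * 2).sum().backward()                    # d(sum(2*act))/d(act) == 2
print(captured["act_grad"])                   # tensor([2., 2.])
```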
@bdhirsh @Chillee @xmfan @ezyang
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @yf225 | module: autograd,triaged,oncall: pt2,module: compiled autograd | low | Critical |
2,654,449,372 | vscode | Failed to run Cucumber tests from launch configuration when break point is set in code | Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.95 and Insider version, works ok on 1.94
- OS Version: Windows 11 + WSL Ubuntu 22.04 + docker + devcontainer based on mcr.microsoft.com/devcontainers/typescript-node:1-18-bullseye
Steps to Reproduce:
1. Open Project with Cucumber tests
2. Set breakpoint in one of the steps
3. Run debug launch configuration for the tests
Result:

Notes:
- Works fine on version 1.94
- Still not working on insider version
- I'm using WDIO + Cucumber + Appium test stack
- Tests are run fine if breakpoint is not set | info-needed | low | Critical |
2,654,578,604 | rust | Should the ConstArgHasType bound above be included in ParamEnv (query from param_env_reveal_all_normalized)? | I tried this code:
```rust
fn parse_test<const LEN: usize>(path: [&'static str; LEN]) {
let target = ["doc"];
if target.iter().eq(&path) {
println!("get target str");
}
}
fn main() {
parse_test(["doc"]);
}
```
I am implementing an algorithm for function-call detection. I identify function calls by iterating over the terminators in the MIR of the function body, recursively detecting the functions called by each function.
When I get an instance, I use the following logic to detect the functions it calls (callees):
fn callees_of<'tcx>(bcx: BtyCtxt<'_, 'tcx>, instance: ty::Instance<'tcx>) -> &'tcx Callees<'tcx> {
let param_env = bcx.tcx.param_env_reveal_all_normalized(instance.def_id());
let mir_body = bcx.instance_mir_expect(instance.def);
let callees = Callees::from_raw_mut(
bcx.tcx
.arena
.dropless
.alloc_from_iter(mir_body.basic_blocks.indices().map(|_| Default::default())),
);
... detect logic ...
}
```
I expected to see this happen: *the normalized instance is obtained correctly.*
Instead, this happened: *in the above example, the bounds of the ParamEnv contain `Binder { value: ConstArgHasType(LEN/#0, usize), bound_vars: [] }`. When I try to normalize this instance (I think only after normalization can I determine which function it calls), the call path `rustc_trait_selection::traits::select::SelectionContext::evaluate_predicate_recursively` -> `<rustc_middle::ty::sty::ParamConst>::find_ty_from_env` panics, because the `ConstArgHasType` bound's type is not included in the ParamEnv.*
### Meta
`rustc --version --verbose`:
```
nightly-2024-10-09
```
<details><summary>Backtrace</summary>
<p>
```
<backtrace>
```
</p>
</details>
I want to know whether my way of getting the `ParamEnv` is wrong, or whether I should merge `ParamEnv`s and add the const bound to the `ParamEnv` (if so, I hope you can tell me how to merge them)
| T-compiler,C-discussion,T-types | low | Critical |
2,654,715,577 | pytorch | Enable autograd for padded dense forward / backward operators. | ### 🚀 The feature, motivation and pitch
Hi, I noticed that we already have https://github.com/pytorch/pytorch/pull/125946, which ported the fbgemm-related jagged tensor operators. Is there a plan to register their autograd functions, like https://github.com/pytorch/FBGEMM/blob/5c980c82d82069702ef5176678c5be6157b5f8b6/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp#L31, so users could invoke them in model scripts?
@jbschlosser
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @cpuhrsch @jbschlosser @bhosmer @drisspg @davidberard98 @YuqingJ | module: autograd,triaged,module: nestedtensor,actionable | low | Minor |
2,654,728,308 | tauri | [feat] [macOS] App focus gain/loss handlers | ### Describe the problem
I am building a shortcut system for my desktop app, for triggering actions using hotkeys that aren't in any menu. From what I understand, there are two ways to go about this currently:
1. Register keydown js listeners in the webview
2. Register global key listeners on the Rust side using something like [rdev](https://crates.io/crates/rdev).
Both solutions expose a problem on macOS: An application can have focus without having any windows open. For example: without any windows open, I would still like to be able to listen for Command + N, to open a new window.
1. For JS, I'm registering key listeners directly on the window, but there's no way of listening to non-window key presses, AFAIK.
2. For Rust, I can register global OS shortcuts (using rdev), but I would need to unregister these if the app loses focus. On macOS, this is not the same as checking if any window has focus.
In lieu of an actual key event handler solution on the Rust side (one that uses the app event loop itself), I'm inclined to go for solution 2, but I would need to know if my _app_ has focus or not.
### Describe the solution you'd like
Ideally, we could install an app focus loss/gain handler on the App builder, something along these lines:
```rust
tauri::Builder::default()
.on_app_focus_gain(|app|{ }))
.on_app_focus_loss(|app|{ }))
.run()
```
These lambdas would map directly to [`NSApplicationDelegate::applicationDidBecomeActive()`](https://developer.apple.com/documentation/appkit/nsapplicationdelegate/1428577-applicationdidbecomeactive?language=objc) and its counterpart.
### Alternatives considered
In this particular case I can put Command + N in my main application menu, and it will get triggered, but that is a happy coincidence. This isn't always the case.
### Additional context
- I would like to point out that these handlers are useful outside the scope of a shortcut system. Maybe someone wants to shut down a thread while the app is inactive, or something like that.
- Ideally I wouldn't even have to use `rdev` to listen for hotkeys and use a Tauri-native solution, but that's a different discussion. | type: feature request,platform: macOS | low | Minor |
2,654,747,425 | deno | Publish deno_cli as a lib crate | Hi, I'd like to embed Deno into my application and have access to module resolution, typescript and the Nodejs stdlib support offered by Deno.
Currently, Deno offers the ability to use the `deno_core`, `deno_runtime`, etc crates to build-your-own JavaScript runtime - however this lacks all the work that has gone into compatibility, resolution, which are vital for any modern application.
I have had limited success achieving this by forking the Deno repo and modifying the `./cli` crate with a `lib.rs`. This also appears to be what Supabase does in order to embed Deno.
Essentially, this is just adding a `lib.rs` to the `./cli` crate and obtaining the arguments programmatically rather than via the CLI.
**What would be awesome**
```rust
use deno_cli::*;
async fn main() -> anyhow::Result<()> {
let deno = Deno::builder(DenoOptions::default()).build()?;
deno.eval_blocking(DenoEvalOptions {
cwd: env::cwd()?,
code: "console.log('Hello World')",
exts: vec![] // <- this would be great, as would making extensions available to workers
    })?;
Ok(())
}
```
**What problem does this solve?**
When using Deno as a plugin runtime that transfers a lot of data (for instance a bundler calling out to JavaScript), using inter-process communication and/or shared memory is quite slow/difficult when compared to integrating Deno into the process directly and leveraging the shared memory space.
This is why most applications that use Nodejs as a plugin runtime build their applications as napi extensions.
**Extras**:
- Deno cannot be spawned more than once per process (I guess it's because of the lazy global/static variables?), so embedding it requires maintaining a connection to the running Deno instance - this is a problem because...
- Extensions/ops supplied to the main Deno instance (main JS thread) are not propagated to the JS worker threads and it appears the type signatures make it non-trivial to clone/move extensions into workers. Would be great if extensions could work within workers too
- There are many dependencies that are used by Deno which can conflict with dependencies in an existing project (like swc). I've looked at various ways of addressing this but ultimately the "easiest" approach I found was turning Deno into a library, building it separately as a `cdylib`, embedding the binary within a wrapping Rust crate that has no dependencies and handles the bindings to the binary. It's gross but it works. | suggestion,custom runtime | low | Major |
2,654,787,159 | electron | PDF viewer: open <embed/> shadow DOM to allow customisation of the UI | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Sometimes it's helpful to be able to customise the PDF viewer UI (eg: hide the Download button, add other buttons, etc.)
Due to shadow DOM encapsulation of the <embed> element, this is currently not possible.
### Proposed Solution
Please consider opening the shadow DOM <embed/> PDF viewer to allow customisation of the UI.
This could be done either by default, or by turning on a preference flag.
### Alternatives Considered
Using custom PDF viewers, PDF.js based, but this has performance issues and is hard to maintain.
### Additional Information
_No response_ | enhancement :sparkles:,component/pdf-viewer | low | Major |
2,654,787,592 | PowerToys | Start file path from Explorer directly in the workspace | ### Description of the new feature / enhancement
Every morning I need to open a folder at the same file path. I would find it useful to be able to configure Explorer in a new workspace so that it starts at exactly this file path. Currently it always opens the default path.
### Scenario when this would be used?
For people who want to open the same folder with its file every morning, or after a restart, without having to navigate through Explorer.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,654,800,678 | react | Bug: Images with `loading="lazy"` remounted with react are all loaded |
React version: 18.3.1
## Steps To Reproduce
Open DevTools Network tab and check "disable cache" to see how much data is requested.
1. Render a long list of `<img>` tags with `loading="lazy"` (placed after the `src` attribute)
2. (only a few top images are loaded; you can see at most a few hundred KB downloaded)
3. Unrender the list
4. Render the list again
5. (all of them are requested; you can see many MB being downloaded)
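The attribute order mentioned in step 1 matters for this repro; a hypothetical markup sketch (not the exact sandbox code) of the two orders being compared:

```html
<!-- order that triggers the issue in the repro: loading comes after src -->
<img src="photo-1.jpg" loading="lazy" alt="" />

<!-- order that behaves as expected: loading comes before src -->
<img loading="lazy" src="photo-2.jpg" alt="" />
```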
Link to code example: https://codesandbox.io/p/sandbox/react-18-forked-2f9q8l?workspaceId=1b03e581-57eb-43f2-80b8-0d9e38ede5b5 (additional explanation below)
## The current behavior
Under certain circumstances, all the images are loaded when remounted with React, despite `loading="lazy"` being specified.
The example code above provides two ways in which the images are removed from the DOM and then added again: one using React state, and the other using plain native DOM methods. It turns out that the lazy loading is broken only when React renders the list, and only from the second render onwards. With DOM methods, you can remount the images many times and each time only a few top images are loaded.
There's also a button to switch the position of the `loading="lazy"` attribute to be before or after `src=`. It shows that the order makes a difference and the issue occurs only when the `loading` attribute is placed after `src`.
There used to be a bug in Firefox that caused the lazy loading to not work when loading attribute is placed after src, but it is gone and was never a problem in Chrome. Yet, somehow, the way in which React adds the nodes to DOM, reproduces the bug even in Chrome.
I am personally experiencing it in Chrome version 131.0.6778.70 (64bit, windows).
## The expected behavior
No matter the order of attributes, the behavior should be as per the standard browser behavior. It shouldn't matter that it's React appending the nodes or which time it's doing it. | Status: Unconfirmed | medium | Critical |
2,654,819,128 | go | x/pkgsite: std's reflect expands with zero subpackages in the directory view | ### What is the URL of the page with the issue?
https://pkg.go.dev/std
### What is your user agent?
Mozilla/5.0 (X11; Linux x86_64; rv:132.0) Gecko/20100101 Firefox/132.0
### Screenshot

### What did you do?
Tried expanding a number of std packages in the directory view to see subpackages.
### What did you see happen?
Most packages with the arrow pointing right expand with subpackages when clicking on the arrow, except reflect, which expands into nothing.
### What did you expect to see?
reflect should expand and show its subpackages like the other packages do.
This might be because reflect only has internal sub-packages:
```
$ go list reflect/...
reflect
reflect/internal/example1
reflect/internal/example2
``` | help wanted,pkgsite | low | Minor |
2,654,854,941 | go | x/build/cmd/coordinator: older dashboard pages omit LUCI builders in detailed views (page 2, x/ repo commits) but don't make it apparent it's WAI | ### What did you do?
Navigate to https://build.golang.org/?page=1&branch=master.
### What did you see happen?
Only a small number of builders are displayed:

### What did you expect to see?
All builders. | Builders,NeedsFix,FixPending,Friction | low | Minor |
2,654,856,397 | tauri | [feat] Expose `WindowEvent::KeyboardInput` in Rust | ### Describe the problem
I want to be able to react to key events coming from the application event loop.
I am building a shortcut system for my Desktop app, on macOS. I would like to be able to trigger hotkeys, even if the application doesn't have any windows open. This rules out any js key listener solutions, because they are tied to windows. Plus, this feels like something that should be done over in Rust-land, anyway.
The current recommended solution is to use [rdev](https://crates.io/crates/rdev). This works up to a point (see https://github.com/tauri-apps/tauri/issues/11670), but requires my app to ask for full access to all OS-level keyboard inputs (Accessibility), which is a permission I would rather not depend on.
In reality, this is something that should be solvable by propagating events from the event loop. I scoured through `Tao` a bit and found key event types, so this seems to be within the realm of possibility?
### Describe the solution you'd like
I want to be able to register a key event handler on my app. I envision this would look something like this, but that's of course up for debate:
```rust
tauri::Builder::default()
.on_key_event(|app, event|{ }))
.run()
```
I can imagine plug-ins might want to do this too, so maybe we should have a system to register (and unregister) multiple handlers.
### Alternatives considered
- Binding to key event listeners in js, but this can only be done on windows. Besides, imho this is something to be solved on the Rust side, using the application event loop.
- Using `rdev`, but this requires my app to ask for full keyboard monitoring at the OS level, which is not a permission I really want to ask for.
In addition, people have to twiddle with [device_event_filter](https://docs.rs/tauri/latest/tauri/struct.Builder.html#method.device_event_filter), which is neither intuitive nor easy to find.
### Additional context
I've searched through the Discord and issue list, and it seems more people have been requesting this. | type: feature request | low | Major |
2,654,858,615 | pytorch | DISABLED test_comprehensive_linalg_vecdot_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_linalg_vecdot_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32906402916).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 12 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_linalg_vecdot_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
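As a sketch of step 3 (hypothetical file name; in practice you would download the raw log from the workflow run first), the grep looks like:

```shell
# Create a tiny stand-in log just to demonstrate; in practice this
# file is the raw CI log downloaded from the workflow run.
printf 'setup\nFAILED test_comprehensive_linalg_vecdot_cuda_float32\nteardown\n' > job-log.txt

# Step 3: grep for the test name (with line numbers).
grep -n 'test_comprehensive_linalg_vecdot_cuda_float32' job-log.txt
```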
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 29: SampleInput(input=Tensor[size=(1, 5), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(1, 5), device="cuda:0", dtype=torch.float32]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=29 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_linalg_vecdot_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,654,904,368 | PowerToys | Alt-Tab map leaves remapped keys pressed | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
A while back I had some help mapping Ctrl-D to act like Alt-Tab. It took a little while, but I got it to work, and it has worked really well.
Not that long ago, sometime around April of this year but it is hard to be sure when, I started having what I thought was a different issue. My right Ctrl key kept sticking. I thought this was a keyboard switch issue, but after a long and convoluted process I found that the Alt-Tab mapping to Ctrl-D was leaving the right Ctrl key as if it had been pressed down.
As I said, this hotkey worked perfectly for quite a while. Part of what stumped me about this was that it doesn't matter what machine I use. They have different versions of PowerToys though 0.85.1 is the most recent. Whatever keyboard I do use, once the Alt-Tab behavior has finished, the right Ctrl - and only the r Ctrl - acts as if it is still being pressed. Tapping the Ctrl key releases it.
The left Alt button also says it is being pressed even after I release Ctrl-D, but it seems like that might be normal for Alt-Tab. I used Switch Hitter to discover all this.
As I said, this worked well for a long while, and now on 3 different machines, 4 different keyboards, and many different versions of PowerToys, the same right Ctrl issue always happens. My only thought: I keep my machines updated. Is this somehow caused by updates to Windows, and can that be overcome?

### ✔️ Expected Behavior
Ctrl-D would (and has) functioned as Alt-Tab does.
### ❌ Actual Behavior
When the Ctrl-D hotkey is used and then released, the right Ctrl key acts as if it is being pressed.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,654,908,312 | storybook | [Bug]: Not able to run storybook 8.4.2 tests with vitest, compilation error with react-swc | ### Describe the bug
I am able to start Storybook, and am also able to see the coverage with the command `test-storybook --coverage --verbose`,
but when I run the Vitest tests with `vitest --project=storybook`
I get the error below:
```
[plugin:vite:react-swc] × Expected ';', '}' or <eof>
╭─[/__vitest_test__/__test__/2aa264b0-e93d-45d5-9477-c86b6ca530ae/C%3A%2FUsers%2FAkshay%2FDocuments%2Fcode%2Fz%2Fstorybook-vitest%2Fsrc%2Fcomponents%2FErrorDisplay.stories.tsx:2:1]
1 │
2 │ html {
· ──┬─ ─
· ╰── This is the expression part of an expression statement
3 │ padding: 0;
4 │ margin: 0;
5 │ }
╰────
Caused by:
Syntax Error
/__vitest_test__/__test__/2aa264b0-e93d-45d5-9477-c86b6ca530ae/C%3A%2FUsers%2FAkshay%2FDocuments%2Fcode%2Fz%2Fstorybook-vitest%2Fsrc%2Fcomponents%2FErrorDisplay.stories.tsx:2:1
1 |
2 | html {
| ^
3 | padding: 0;
4 | margin: 0;
```
I am using storybook 8.4.2
### Reproduction link
https://codesandbox.io/p/github/jainaks01-bh/storybook-test-vitest/draft/blazing-water
### Reproduction steps
running command below gives error mentioned above
https://github.com/jainaks01-bh/storybook-test-vitest
`npm run test`
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 AMD Ryzen 7 7730U with Radeon Graphics
Binaries:
Node: 22.7.0 - C:\Program Files\nodejs\node.EXE
Yarn: 1.22.22 - C:\Program Files\nodejs\yarn.CMD
npm: 10.8.2 - C:\Program Files\nodejs\npm.CMD <----- active
Browsers:
Edge: Chromium (128.0.2739.42)
npmPackages:
@storybook/addon-coverage: ^1.0.4 => 1.0.4
@storybook/addon-essentials: ^8.4.2 => 8.4.2
@storybook/addon-interactions: ^8.4.2 => 8.4.2
@storybook/addon-links: ^8.4.2 => 8.4.2
@storybook/addon-onboarding: ^8.4.2 => 8.4.2
@storybook/blocks: ^8.4.2 => 8.4.2
@storybook/experimental-addon-test: ^8.4.2 => 8.4.2
@storybook/react: ^8.4.2 => 8.4.2
@storybook/react-vite: ^8.4.2 => 8.4.2
@storybook/test: ^8.4.2 => 8.4.2
@storybook/test-runner: ^0.19.1 => 0.19.1
before-storybook: file: => 0.0.0
chromatic: ^11.18.1 => 11.18.1
eslint-plugin-storybook: ^0.11.0 => 0.11.0
msw-storybook-addon: ^2.0.4 => 2.0.4
storybook: ^8.4.2 => 8.4.2
```
### Additional context
_No response_ | bug,addon: test | low | Critical |
2,654,957,270 | excalidraw | Revert arrow binding when moving single arrow | As of now, there is no way to rebind an arrow other than dragging the point handle, which is frustrating as one needs to manually adjust every single point. It's especially annoying with slight adjustments, which lead to slightly different arrow length / angle, so it needs to be additionally manually repositioned. Instead, rebinding the arrow should be as simple as dragging it, as we've had it before.
**Technical details**
1. don't bind when moving non-arrow element
2. don't bind when moving multiple arrows or an arrow + other element(s)
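A minimal sketch of the two rules above (hypothetical names and element shape, not Excalidraw's actual API): binding suggestions only kick in when exactly one element is being dragged and that element is an arrow.

```typescript
// Hypothetical element shape for illustration only.
type SceneElement = { id: string; type: "arrow" | "rectangle" | "ellipse" };

// Rules 1 and 2: rebinding is only offered while dragging a single arrow.
function shouldSuggestBinding(dragged: SceneElement[]): boolean {
  return dragged.length === 1 && dragged[0].type === "arrow";
}

// Dragging one arrow: suggest rebinding.
console.log(shouldSuggestBinding([{ id: "a", type: "arrow" }]));
// Dragging an arrow together with another element: do not bind.
console.log(
  shouldSuggestBinding([
    { id: "a", type: "arrow" },
    { id: "r", type: "rectangle" },
  ]),
);
```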
**Now**
https://github.com/user-attachments/assets/e9fe82b4-08ad-4968-8eae-96c869081a95
**Before / After**
https://github.com/user-attachments/assets/9b1a8a05-7c6a-4f18-95f6-add99c51a8c0
| enhancement,UX/UI,Arrow Binding | low | Minor |
2,655,029,734 | deno | OpenTelemetry epic | - [x] Land initial support for user created traces + exporting with OTLP
- [x] Land initial support for exporting `console.log` with OTLP (with associated traces)
- [x] Land initial support for user created metrics + exporting with OTLP
- [ ] Configure trace sampling through `OTEL_TRACES_SAMPLER`
- [ ] Respect `OTEL_SDK_DISABLED`
- [ ] Respect `OTEL_PROPAGATORS`
- [x] Ensure telemetry is flushed in all cases:
- [x] `console.log("foo"); // Program naturally exits here`
- [x] `console.log("foo"); Deno.exit(0);`
- [x] `console.log("foo"); throw new Error("uncaught");`
- [x] Ensure that "uncaught" is also collected by logging
- [x] Web worker logs something and immediate calls `self.close()`
- [x] Web worker logs something and calls `Deno.exit()` (`self.close()` alias in workers?)
- [x] Web worker logs something and the main worker immediately calls `worker.terminate()`
- [x] Web worker logs something and throws an exception.
Then there is a bunch of work to auto instrument built in APIs. A very basic initial list that will be expanded:
- [ ] `Deno.serve` / `node:http` (server)
- [x] Create traces for incoming requests
- Ensure that users can set `http.route` attribute on the automatic span
- [ ] Propagation of trace ID from incoming headers
- [ ] Metrics for latency
- [ ] `fetch` / `node:http` (client)
- [x] Create traces for outbound requests
- [ ] Propagation of trace ID into outgoing headers
- [ ] Metrics for latency
- [ ] Metrics for connection pool
| feat | low | Critical |
2,655,039,375 | ollama | AMD Radeon 780M GPU (Pop OS !) System 76 | ### What is the issue?
Hi,
I would like to ask for your help.
I am running Ollama with the GPU below, but it seems that it is not picking up my GPU. Is there any advice?
AMD Ryzen™ 7 7840U processor.
When I run **ollama serve**, it gives me this error. Any advice?
Thanks
```
2024/11/13 17:40:14 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:11.0.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ihshan/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-13T17:40:14.880+07:00 level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-13T17:40:14.880+07:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-13T17:40:14.881+07:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11435 (version 0.4.1)"
time=2024-11-13T17:40:14.881+07:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1477910346/runners
time=2024-11-13T17:40:14.949+07:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-11-13T17:40:14.949+07:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-13T17:40:16.902+07:00 level=INFO source=gpu.go:610 msg="no nvidia devices detected by library /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.03"
time=2024-11-13T17:40:22.056+07:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=0 total="512.0 MiB"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=amd_linux.go:399 msg="no compatible amdgpu devices detected"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="30.6 GiB" available="23.7 GiB"
```
### OS
Linux
### GPU
AMD
### CPU
Other
### Ollama version
0.4.1 | bug,linux,amd,gpu | medium | Critical |
2,655,046,727 | rust | `//@ {unset-,}{rustc,exec}-env` parsing is a footgun | Apparently
```
//@ rustc-env: RUSTC_BOOTSTRAP=1
```
is not the same as
```
//@ rustc-env:RUSTC_BOOTSTRAP=1
```
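A minimal sketch (hypothetical parsing logic, not compiletest's actual code) of how a naive split of the two directives above on `:` leaves the space inside the variable name:

```rust
fn main() {
    // The spaced and unspaced forms of the directive from above.
    let spaced = "rustc-env: RUSTC_BOOTSTRAP=1";
    let tight = "rustc-env:RUSTC_BOOTSTRAP=1";

    // A naive split keeps everything after the first ':' verbatim,
    // so the env var name picks up the leading space.
    let name_of = |d: &str| {
        let value = d.splitn(2, ':').nth(1).unwrap();
        value.splitn(2, '=').next().unwrap().to_string()
    };

    assert_eq!(name_of(spaced), " RUSTC_BOOTSTRAP"); // whitespace included
    assert_eq!(name_of(tight), "RUSTC_BOOTSTRAP");
    println!("ok");
}
```

compiletest's real parser is more involved; the sketch only illustrates why the spaced and unspaced forms can produce different variable names.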
compiletest will parse the former as an env var called `⌴RUSTC_BOOTSTRAP` (incl. the whitespace). | E-hard,T-bootstrap,C-bug,A-compiletest,E-needs-design | low | Major |
2,655,071,593 | rust | Exponential time complexity for parser combinator with RPITIT | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
The [`egglog`](https://github.com/egraphs-good/egglog) parser was recently rewritten from a generated parser to a parser combinator. [One commit](https://github.com/egraphs-good/egglog/commit/4c280616db0b1bf2bb850aab0075c8f83acd5f33) changed the `map` combinator from a free function to a trait method (RPIT -> RPITIT), which caused a massive compile-time regression https://github.com/egraphs-good/egglog/issues/468, going from a few seconds to 40s+.
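For reference, a minimal, self-contained sketch (hypothetical names, not egglog's actual code) of the shape in question: a `map` combinator defined as a trait method returning `impl Parser` (RPITIT), so every chained call nests another opaque type. It only shows the shape, not the compile-time behavior.

```rust
trait Parser: Sized {
    type Out;
    fn parse(&self, input: &str) -> Option<Self::Out>;

    // RPITIT: `map` defined directly on the trait with a default body.
    fn map<U, F: Fn(Self::Out) -> U>(self, f: F) -> impl Parser<Out = U> {
        Map { inner: self, f }
    }
}

struct Map<P, F> {
    inner: P,
    f: F,
}

impl<P: Parser, U, F: Fn(P::Out) -> U> Parser for Map<P, F> {
    type Out = U;
    fn parse(&self, input: &str) -> Option<U> {
        self.inner.parse(input).map(&self.f)
    }
}

// A trivial leaf parser so the chain can actually run.
struct Digit;
impl Parser for Digit {
    type Out = u32;
    fn parse(&self, input: &str) -> Option<u32> {
        input.chars().next()?.to_digit(10)
    }
}

fn main() {
    // Each `.map` wraps the previous opaque type in another one.
    let parser = Digit.map(|d| d * 2).map(|d| d + 1);
    assert_eq!(parser.parse("4abc"), Some(9));
    println!("ok");
}
```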
Adding a single no-op `map` increases compile time to 3m30s+
More info in the repo: https://github.com/DaniPopes/egglog-rpitit-repro
@rustbot label +I-compiletime +F-return_position_impl_trait_in_trait
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (f7273e004 2024-11-12)
binary: rustc
commit-hash: f7273e0044ad8f35ad27282e4ab776af50b61a54
commit-date: 2024-11-12
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
---
<details><summary>Copy of README.md</summary>
<p>
Massive compile time difference between `RPITIT` "map" closure and a manually defined `Map` struct that implements the traits.
Extracted from [`egglog`](https://github.com/egraphs-good/egglog) @ [`ca52ac13cb3c0bbacc8e7cc540789521d1019bd2`](https://github.com/egraphs-good/egglog/commit/ca52ac13cb3c0bbacc8e7cc540789521d1019bd2). Issue: <https://github.com/egraphs-good/egglog/issues/468>.
Note that unboxed closures etc. are not required: as seen in https://github.com/egraphs-good/egglog/pull/470, this can be fixed on stable by changing the definition, but I opted to use the nightly feature to keep the diff smaller.
It can also be fixed by moving `map` outside of the trait.
Reproduce:
```bash
# Takes 48s
time cargo clean && cargo build
# Takes 0.2s
git restore . && git apply fix.patch
time cargo clean && cargo build
# Takes 3m30s+ for a single extra no-op map
git restore . && git apply worse.patch
time cargo clean && cargo build
```
On nightly:
```bash
rustc -Vv
rustc 1.84.0-nightly (f7273e004 2024-11-12)
binary: rustc
commit-hash: f7273e0044ad8f35ad27282e4ab776af50b61a54
commit-date: 2024-11-12
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
On stable (with `RUSTC_BOOTSTRAP=1`):
```bash
rustc +stable -Vv
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
---
Output of `cargo rustc -- -Ztime-passes`:
```
time: 0.000; rss: 83MB -> 87MB ( +4MB) setup_global_ctxt
time: 0.006; rss: 87MB -> 117MB ( +31MB) expand_crate
time: 0.006; rss: 87MB -> 117MB ( +31MB) macro_expand_crate
time: 0.003; rss: 117MB -> 125MB ( +7MB) late_resolve_crate
time: 0.003; rss: 117MB -> 125MB ( +7MB) resolve_crate
time: 0.005; rss: 125MB -> 131MB ( +6MB) looking_for_entry_point
time: 0.005; rss: 125MB -> 131MB ( +6MB) unused_lib_feature_checking
time: 0.006; rss: 125MB -> 131MB ( +6MB) misc_checking_1
time: 0.020; rss: 131MB -> 169MB ( +38MB) coherence_checking
time: 0.064; rss: 131MB -> 192MB ( +61MB) type_check_crate
time: 0.026; rss: 192MB -> 199MB ( +7MB) MIR_borrow_checking
time: 43.618; rss: 199MB -> 208MB ( +9MB) MIR_effect_checking
time: 0.004; rss: 208MB -> 208MB ( +0MB) privacy_checking_modules
time: 0.003; rss: 208MB -> 208MB ( +0MB) lint_checking
time: 0.000; rss: 208MB -> 208MB ( +0MB) check_lint_expectations
time: 0.005; rss: 208MB -> 208MB ( +1MB) misc_checking_3
time: 0.002; rss: 208MB -> 210MB ( +1MB) monomorphization_collector_graph_walk
time: 0.000; rss: 212MB -> 219MB ( +8MB) write_allocator_module
time: 0.003; rss: 219MB -> 227MB ( +8MB) compile_first_CGU_batch
time: 0.006; rss: 219MB -> 247MB ( +28MB) codegen_to_LLVM_IR
time: 0.011; rss: 208MB -> 247MB ( +39MB) codegen_crate
time: 0.000; rss: 247MB -> 246MB ( -1MB) check_dirty_clean
time: 0.000; rss: 246MB -> 246MB ( +0MB) incr_comp_persist_dep_graph
time: 0.005; rss: 227MB -> 246MB ( +19MB) LLVM_passes
time: 0.002; rss: 246MB -> 243MB ( -3MB) encode_query_results
time: 0.002; rss: 246MB -> 243MB ( -3MB) incr_comp_serialize_result_cache
time: 0.002; rss: 246MB -> 243MB ( -3MB) incr_comp_persist_result_cache
time: 0.002; rss: 247MB -> 243MB ( -4MB) serialize_dep_graph
time: 0.003; rss: 243MB -> 202MB ( -41MB) free_global_ctxt
time: 0.031; rss: 202MB -> 202MB ( +0MB) run_linker
time: 0.031; rss: 202MB -> 202MB ( +0MB) link_binary
time: 0.031; rss: 202MB -> 202MB ( +0MB) link_crate
time: 0.032; rss: 202MB -> 202MB ( +0MB) link
time: 43.784; rss: 31MB -> 147MB ( +116MB) total
```
---
[`samply`](https://github.com/mstange/samply) profile: <https://share.firefox.dev/3O7AzuO>
</p>
</details>
| I-compiletime,T-compiler,C-bug,E-needs-mcve,F-return_position_impl_trait_in_trait | low | Critical |
2,655,089,530 | flutter | Mac framework_tests_impeller is 2.08% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac framework_tests_impeller"
}
-->
The post-submit test builder `Mac framework_tests_impeller` had a flaky ratio of 2.08% over the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for the same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20framework_tests_impeller/3866
Commit: https://github.com/flutter/flutter/commit/4de32b870212f33610be0ec122001d695f373a77
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20framework_tests_impeller/3866
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20framework_tests_impeller/3802
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20framework_tests_impeller/3801
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac%20framework_tests_impeller
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P1,c: flake,team-framework,triaged-framework | medium | Major |
2,655,089,612 | flutter | Mac_x64 build_tests_1_4 is 2.08% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_x64 build_tests_1_4"
}
-->
The post-submit test builder `Mac_x64 build_tests_1_4` had a flaky ratio of 2.08% over the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for the same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_1_4/4301
Commit: https://github.com/flutter/flutter/commit/95a9b97f88a841497ea986b059c594d9321a08ce
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_1_4/4301
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_1_4/4270
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_x64%20build_tests_1_4
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P2,c: flake,team-tool | low | Major |
2,655,147,835 | tauri | [bug] nsis plugins aren't signed | ### Describe the bug
The NSIS plugins inside the NSIS installer aren't code-signed, even though I enabled code signing.
The app itself, the DLLs, and the installer are all signed, but the DLLs inside `$PLUGINSDIR` are not.
As a result, AVs flag them as a virus immediately.
### Reproduction
Download sigcheckGUI https://www.majorgeeks.com/mg/getmirror/sigcheckgui,1.html
Download https://github.com/thewh1teagle/vibe/releases/download/v2.6.6/vibe_2.6.6_x64-setup.exe
Extract the app with 7zip and check the signatures of the files
### Expected behavior
It should be signed by my certificate or by yours (official?)
### Full `tauri info` output
```text
https://github.com/thewh1teagle/vibe
https://github.com/thewh1teagle/vibe/commit/ff020aef26235169541a1ffcea9c0157e8df4311
[✔] Environment
- node: 20.15.1
- pnpm: 9.10.0
- yarn: 1.22.22
- npm: 10.7.0
- bun: 1.1.18
[-] Packages
- tauri 🦀: 2.1.0
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.6
- @tauri-apps/api : 2.1.0 (outdated, latest: 2.1.1)
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-store 🦀: 2.1.0
- @tauri-apps/plugin-store : 2.1.0
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-window-state 🦀: 2.0.2
- @tauri-apps/plugin-window-state : 2.0.0
- tauri-plugin-deep-link 🦀: 2.0.1
- @tauri-apps/plugin-deep-link : 2.0.0
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.2
- tauri-plugin-single-instance 🦀: 2.0.1
- @tauri-apps/plugin-single-instance : not installed!
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-http 🦀: 2.0.3
- @tauri-apps/plugin-http : 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
I noticed that VirusTotal flags the NSIS plugins as a virus.
By the way, signing with a self-signed certificate is better than leaving them unsigned: with it, Windows Defender no longer blocked the app, and VirusTotal reported fewer false positives.
https://code.videolan.org/videolan/vlc/-/issues/27469 | type: bug,priority: 1 high,platform: Windows,scope: bundler,status: needs triage | low | Critical |
2,655,149,960 | ant-design | `Dropdown` with `destroyPopupOnHide` set to `false` does not re-render child items when the parent component updates | ### Reproduction link
[](https://codesandbox.io/p/sandbox/dropdown-does-not-re-render-child-st2kxf)
### Steps to reproduce
- Create a dropdown with `destroyPopupOnHide={false}` and with nested menu items
- Make the nested menu item change the state of the parent
- Watch the updates of the sub-menu component
### What is expected?
With `destroyPopupOnHide={false}`, I expect the dropdown items and sub-items to be mounted in the DOM and re-rendered when the parent state changes.
### What is actually happening?
The items and sub-items are mounted but only the items are re-rendered when the parent state changes.
### Initial render:

### Hover over `Parent`:

### Click on `Open Modal`:

The global state updates, and the parent re-renders but not the child.
### Hover over `Parent` again:

The child is re-rendered with the new state, and the modal appears.
| Environment | Info |
| --- | --- |
| antd | 5.22.0 |
| React | React 18.3.1 |
| System | Windows 11 |
| Browser | Chrome Version 130.0.6723.119 (Official Build) (64-bit) |
---
We provide a library which allows clients to register new menu items into an existing software. One of the clients discovered the bug when trying to open a Modal through a context menu item. Further investigation shows that this is caused by the child component not re-rendering.
The library uses `antd` version 5.6.4, but the problem exists in the latest stable version.
The problem doesn't exist in `antd` version 5.1.6.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🗣 Discussion,Inactive | low | Critical |