| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,657,221,965 | flutter | [a11y] Focus is not maintained with local keys if layout changes | ### Context
In my app, we have two versions of a widget. The user clicks a button to swap between them. The widgets are almost completely different apart from this switching button. We use an AnimatedSwitcher, as shown in the test code below, to swap between the versions of the widget.
We want to maintain focus on this button between the two versions of the widget, but I can't see a solution apart from hacking the layout so that the widget stays in the same place in the element/semantic tree. This feels hacky and fragile.
We can't use a GlobalKey due to the AnimatedSwitcher using both widgets at the same time.
Is there something I'm missing?
### Steps to reproduce
1. Take the code below
2. Run in debug mode with a screen reader on iOS
3. Click on the "moving counter" button using the screen reader
### Expected results
Focus stays on the widget
### Actual results
The a11y focus is reset to the top of the app
### Code sample
<details open>
<summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
final Key semanticKey = Key("moving_button");
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
final movingButton = Semantics(
key: semanticKey,
child: ElevatedButton(
onPressed: _incrementCounter, child: const Text('MOVING Counter')));
final stationaryButton = Semantics(
child: ElevatedButton(
onPressed: _incrementCounter,
child: const Text('STATIONARY Counter')));
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
Semantics(
child: AnimatedSwitcher(
duration: const Duration(milliseconds: 500),
child: _counter % 2 == 0
? movingButton
: Row(
mainAxisSize: MainAxisSize.min,
children: [
Text('Something else'),
movingButton,
],
),
),
),
stationaryButton
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/bb4a5de0-fe5d-4400-9183-ae83707cb13a
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on Edward’s iPhone in debug mode...
Automatically signing iOS for device deployment using specified development team in Xcode project: MZ7437V6KU
Xcode build done. 14.4s
You may be prompted to give access to control Xcode. Flutter uses Xcode to run your app. If access is not allowed, you can change this through your Settings > Privacy & Security > Automation.
Connecting to VM Service at ws://127.0.0.1:61641/eXC1sev2rz0=/ws
Connected to the VM Service.
Reloaded 1 of 709 libraries in 296ms.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
➜ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0 24A335
darwin-arm64, locale en-NZ)
[!] Android toolchain - develop for Android devices (Android SDK
version 34.0.0)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for
more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK
licenses.
See https://flutter.dev/to/macos-android-setup for more
details.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[!] Android Studio (version unknown)
✗ Unable to determine Android Studio version.
✗ android-studio-dir = /
✗ Android Studio not found at /Contents
[✓] VS Code (version 1.95.2)
[✓] Connected device (4 available)
[✓] Network resources
! Doctor found issues in 2 categories.
```
</details>
| framework,a: accessibility,f: focus,has reproducible steps,P3,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | low | Critical |
2,657,232,722 | deno | Markdown --indent-width is ignored | Version: deno 2.0.6
The default indentation is 2, but it should be possible to change it with --indent-width.
With the default of 2, it correctly complains about a width of 4

But even when I set it to 4, it still complains

If I set it to any other number, it still thinks that the correct width is the default 2

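For reference, a hypothetical `deno.json` fragment showing the config-file equivalent of the flag (per the Deno docs, `fmt` options can also live in the config file); this is a sketch, not taken from the reporter's project:

```jsonc
// Hypothetical config-file equivalent of --indent-width 4.
{
  "fmt": {
    "indentWidth": 4
  }
}
```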
| bug,good first issue,upstream,deno fmt | low | Minor |
2,657,352,453 | go | runtime: 1.23 hangs when running under qemu-user [bisected] | ### Go version
go version go1.23.3 linux/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/root/.cache/go-build'
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/root/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/root/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/go/pkg/tool/linux_arm64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/root/.config/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2732334604=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
When I build Go 1.23 with the same version of Go under qemu-user, the build always hangs. The issue can be reproduced with a Dockerfile:
```Dockerfile
from quay.io/fedora/fedora:41
RUN dnf install -y @development-tools
RUN curl -O https://go.dev/dl/go1.23.3.src.tar.gz && curl -O https://go.dev/dl/go1.23.3.linux-arm64.tar.gz
RUN tar fx go1.23.3.linux-arm64.tar.gz && mkdir src && tar fx go1.23.3.src.tar.gz -C src
RUN cd src/go/src/; env PATH=$PATH:/go/bin bash make.bash
```
Steps:
1. Set up qemu-user binfmt for a foreign ISA; for example, install qemu-user-static-aarch64 on Fedora.
2. Build the Dockerfile for specified arch: `podman build --arch aarch64 .`
Tested on Fedora 40 with `6.9.8-200.fc40.x86_64` and Fedora 41 with `6.11.6-300.fc41.x86_64`.
Tested with qemu-user 9.1.1 of riscv64/loongarch64/aarch64.
### What did you see happen?
The Go processes of the build hang forever:
```
│ ├─crun-buildah-buildah663882417.scope
│ │ └─container
│ │ ├─912459 /usr/bin/qemu-aarch64-static /usr/bin/bash make.bash
│ │ ├─914348 /usr/bin/qemu-aarch64-static ./cmd/dist/dist bootstrap -a
│ │ ├─914395 /usr/bin/qemu-aarch64-static /go/bin/go install "-tags=math_big_pure_go compiler_bootstrap purego" bootstrap/cmd/...
│ │ └─914743 /usr/bin/qemu-aarch64-static /go/bin/go install "-tags=math_big_pure_go compiler_bootstrap purego" bootstrap/cmd/...
```
The issue does not exist with Go 1.22. I did a bisect and found the offending commit https://github.com/golang/go/commit/d068c2cb620c1daeedc8b9cce488af45a6c2c889 .
### What did you expect to see?
The build of Go 1.23 completes under qemu-user. | NeedsInvestigation,compiler/runtime | low | Critical |
2,657,405,649 | PowerToys | Display Current Time Placeholder in PowerToys Run | ### Description of the new feature / enhancement
I would like to suggest a new feature for PowerToys Run that displays the current time as a placeholder when the search bar is empty. This feature would be particularly useful for users who choose to hide their taskbar and need a quick way to check the current time without additional clicks or interactions.
Once the user starts typing, the placeholder with the time would disappear, allowing them to use the search functionality as usual. This minor enhancement could improve user experience by providing quick access to the current time directly from PowerToys Run.
### Scenario when this would be used?
This feature would be used primarily by users who prefer a clean desktop interface with the taskbar hidden. It would allow them to check the current time quickly without needing to unhide the taskbar or switch to another application.
For example, during focused work sessions or presentations where users might hide their taskbar for a cleaner look, having the current time visible in PowerToys Run would provide a convenient and efficient way to stay aware of the time.
### Supporting information
- Many users hide the taskbar to maximize screen real estate or maintain a minimalist desktop setup.
- Implementing a time placeholder in PowerToys Run can enhance productivity by reducing the need to switch contexts just to check the time.
- Similar features can be seen in other productivity tools where contextual information is displayed without disrupting the primary function of the tool.
| Needs-Triage | low | Minor |
2,657,419,523 | tauri | decorations(false) works incorrectly | ### Describe the bug
`decorations(false)` has a bug: the web content is not full size; there is a border on the right, as shown in the pictures below.


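The report doesn't include a reproduction, but for reference, `decorations` is typically configured per window in `tauri.conf.json` roughly as below. This fragment is a hypothetical sketch (window label and other fields are placeholders), not taken from the reporter's project:

```jsonc
// Hypothetical tauri.conf.json fragment (Tauri v2 schema).
{
  "app": {
    "windows": [
      {
        "label": "main",
        "decorations": false
      }
    ]
  }
}
```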
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 126.0.2592.68
✔ MSVC: Visual Studio Professional 2019
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 18.19.0
- pnpm: 9.3.0
- yarn: 1.22.22
- npm: 10.2.3
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,scope: unstable flag | low | Critical |
2,657,422,663 | react-native | [iOS] Navigation Bar Size Issue with codegenNativeComponent and modalPresentationStyle: .pageSheet in React Native | ### Description
In a React Native environment, I'm using `codegenNativeComponent` to display a native screen. On this native screen, when I present a new screen with `modalPresentationStyle` set to `.pageSheet`, the navigation bar size becomes abnormal. **Although this might appear to be an issue with the native side because I’m using native screens, I don’t experience this problem when presenting a modal with .pageSheet without React Native.**
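As a rough sketch of the native presentation path described above (the class and method names here are hypothetical, not taken from the linked reproducer):

```swift
import UIKit

// Hypothetical host screen rendered behind the codegenNativeComponent view.
final class HostViewController: UIViewController {
    // Presents a detail screen as a page sheet — the step after which the
    // navigation bar size reportedly becomes abnormal under React Native.
    @objc func showDetail() {
        let detail = UIViewController()
        let nav = UINavigationController(rootViewController: detail)
        nav.modalPresentationStyle = .pageSheet
        present(nav, animated: true)
    }
}
```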
### Steps to reproduce
1. Clone the repository: [RNPlayground](https://github.com/dp221125/RNPlayground).
2. Navigate to the RNPlayground/react-native-playground folder.
3. Run yarn install.
4. Return to the root directory and run pod install (or bundle install followed by bundle exec pod install if using Bundler).
5. Open RNPlayground.xcworkspace and build the project for an iPhone targeting iOS 16 or 17.
6. When the app launches, tap the "Show Detail" button displayed on the screen.
### React Native Version
0.76.1
### Affected Platforms
Runtime - iOS
### Areas
Other (please specify)
### Output of `npx react-native info`
```text
System:
OS: macOS 14.7
CPU: (10) arm64 Apple M1 Pro
Memory: 227.33 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.17.0
path: ~/.asdf/installs/nodejs/20.17.0/bin/node
Yarn:
version: 4.5.0
path: ~/.asdf/installs/nodejs/20.17.0/bin/yarn
npm:
version: 10.8.2
path: ~/.asdf/plugins/nodejs/shims/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.16.2
path: /Users/seokho/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: 2022.2 AI-222.4459.24.2221.10121639
Xcode:
version: 16.0/16A242
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 3.1.3
path: /Users/seokho/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: 15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
Not a crash, not a failure, just a UI bug.
```
### Reproducer
https://github.com/dp221125/RNPlayground
### Screenshots and Videos
### React Native
https://github.com/user-attachments/assets/39d5709e-ec54-416d-a16b-03674dbb89c9
### iOS Native (not using React Native)
https://github.com/user-attachments/assets/347ac1d7-6210-4117-8452-4a7206ac5df2
### ScreenShot
<img width="534" alt="image" src="https://github.com/user-attachments/assets/9bc98814-3f14-4af2-8952-a099a4425c46">
| Component: Modal,Needs: Triage :mag:,Type: New Architecture | medium | Critical |
2,657,436,488 | next.js | Cannot debug Next15 using turbopack on windows | ### Link to the code that reproduces this issue
https://github.com/leandroluk/bug-next15-debug-with-turbopack
### To Reproduce
1. Install dependencies
2. Select "turbo" in "Run and Debug" menu
3. Select "Browser Debug" in the running bar; this error message will appear:

4. When you select "webpack" in the "Run and Debug" menu and run "Browser Debug", the error doesn't appear and breakpoints work.
### Current vs. Expected behavior
Debugging is expected to work the same as it does with webpack.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home Single Language
Available memory (MB): 32453
Available CPU cores: 20
Binaries:
Node: 20.17.0
npm: 10.9.0
Yarn: 1.22.22
pnpm: 9.11.0
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Turbopack | low | Critical |
2,657,441,657 | stable-diffusion-webui | [Feature Request]: New fork for support of intel processors | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
I have an Intel PC and would like to be able to use the newest versions of WebUI. The current fork for Intel processors is stuck at 1.6.0 and doesn't seem like it will be updated again. I was wondering if someone could help by making a new one for the current version. I would do this myself, but I have absolutely no idea what I'm doing. My PC specs are as follows, in case they're helpful:
Processor: 12th Gen Intel(R) Core(TM) i5-12500H 2.50 GHz
Installed RAM: 8.00 GB (7.68 GB usable)
System type: 64-bit operating system, x64-based processor
Edition: Windows 11 Pro
Version: 23H2
Installed on: 10/11/2022
OS build: 22631.4391
Experience: Windows Feature Experience Pack 1000.22700.1047.0
### Proposed workflow
Just a version for Intel processors that will actually be updated.
### Additional information
_No response_ | enhancement | low | Minor |
2,657,590,844 | ui | [bug]: Passing in a custom component into the popover component does not behave as expected | ### Describe the bug
I've created a custom component `SidebarPopover` whose props include `popoverTrigger` of type `React.ReactNode`. The component essentially looks like this:
```tsx
<Popover>
<PopoverTrigger asChild>
{popoverTrigger}
</PopoverTrigger>
<PopoverContent className="w-80 h-80">
{popoverContent}
</PopoverContent>
</Popover>
```
I've been attempting to pass in a custom component that I created, `SidebarButton`, into my custom `SidebarPopover` component through the `popoverTrigger` prop. `SidebarButton` looks something like this:
```tsx
<Button className="max-w-fit h-18 rounded-none p-2 m-1">
<Image src={imageIcon} alt={alt} width={50} height={50} />
</Button>
```
Passing in `SidebarButton` into `SidebarPopover` does not display the popover as expected. HOWEVER, if I directly pass in:
```tsx
const popover = <Button className="max-w-fit h-18 rounded-none p-2 m-1"><Image src={button.image} alt={button.alt} width={50} height={50}/></Button>
```
into my custom `SidebarPopover`, the popover appears?!? I would expect that passing in the above, and the `SidebarButton` component should yield the same behavior, but this is not the case.
In both cases `Button` is the shadcn ui component and `Image` is from Next.js' libraries.
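One plausible explanation is that Radix's `asChild` works by cloning the trigger's props (including a `ref`) onto its immediate child; a custom component that doesn't forward them would silently drop the trigger wiring, while raw JSX built directly from the shadcn `Button` (which forwards refs) keeps it. A hypothetical ref-forwarding `SidebarButton` sketch — the import path assumes the conventional shadcn layout:

```tsx
import * as React from "react";
import { Button } from "@/components/ui/button"; // conventional shadcn path (assumption)

// Hypothetical fix: forward the ref and spread incoming props so that
// PopoverTrigger's asChild cloning can attach its click handler and ref.
const SidebarButton = React.forwardRef<
  HTMLButtonElement,
  React.ComponentPropsWithoutRef<typeof Button>
>((props, ref) => (
  <Button ref={ref} className="max-w-fit h-18 rounded-none p-2 m-1" {...props}>
    {/* icon content here */}
  </Button>
));
SidebarButton.displayName = "SidebarButton";
```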
### Affected component/components
Popover
### How to reproduce
1. Create the sidebar popover custom component mentioned in the description
2. Create the custom sidebar button mentioned in the description above
3. Pass in the custom sidebar button into the sidebar popover custom component as a prop
4. Observe that no popover appears when clicking on the sidebar button
5. Pass in the JSX directly into the sidebar popover custom component and observe that the popover now appears
### Codesandbox/StackBlitz link
https://codesandbox.io/p/github/carolinar7/react-practice/shadcn-bug-popover-bug-report
### Logs
_No response_
### System Info
```bash
Windows 11, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,657,683,123 | ant-design | A Form.Item inside a Form.List loses its height when it uses layout="vertical" on its own | ### Reproduction link
[](https://stackblitz.com/edit/vitejs-vite-zrnbko?file=src%2FApp.tsx)
### Steps to reproduce
When a Form.Item inside a Form.List uses layout="vertical" on its own, it loses its height.
### What is expected?
The item should expand to its normal height.
### What is actually happening?
The Form.Item section loses its height.
| Environment | Info |
| --- | --- |
| antd | 5.22.1 |
| React | 18 |
| System | win10 |
| Browser | edge |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,Inactive | low | Major |
2,657,693,901 | tauri | [bug] Socket Io doesn't work in tauri+angular app with ALB node socket server | ### Describe the bug
Getting a "socket ID unknown" error.
The error occurs only on macOS; it works fine on Linux and Windows.
The issue is with the ALB sticky session: on macOS, cookies are not allowed.

### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.0 x86_64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-apple-darwin (default)
- node: 18.20.4
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.0.6
- tauri-build 🦀: 2.0.2
- wry 🦀: 0.46.3
- tao 🦀: 0.30.5
- @tauri-apps/api : 2.0.3 (outdated, latest: 2.1.1)
- @tauri-apps/cli : 2.0.5 (outdated, latest: 2.1.0)
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : not installed!
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : not installed!
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist/cooper
- devUrl: http://localhost:4200/
- framework: Angular
- bundler: Webpack
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: upstream,platform: macOS,status: needs triage | low | Critical |
2,657,703,098 | rust | #[derive(Debug)] on #[repr(packed)] enum causes internal compiler error |
### Code
```Rust
#[derive(Debug)]
#[repr(packed)]
enum COption<T> {
None,
Some(T),
}
fn main() {
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15) (Fedora 1.82.0-1.fc40)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 18.1.8
```
### Error output
```
error: internal compiler error: compiler/rustc_mir_transform/src/check_packed_ref.rs:49:21: builtin derive created an unaligned reference
--> src/main.rs:5:10
|
1 | #[derive(Debug)]
| ----- in this derive macro expansion
...
5 | Some(T),
| ^
|
= note: this error: internal compiler error originates in the derive macro `Debug` (in Nightly builds, run with -Z macro-backtrace for more info)
thread 'rustc' panicked at compiler/rustc_mir_transform/src/check_packed_ref.rs:49:21:
Box<dyn Any>
stack backtrace:
0: 0x7fd20cb85b78 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h1ceeeb1b55e290c2
1: 0x7fd20c582b9b - core::fmt::write::hb3a48913f78b0dae
2: 0x7fd20cb78ba3 - <unknown>
3: 0x7fd20cb887ea - <unknown>
4: 0x7fd20cb8842c - std::panicking::default_hook::h49089136d7ad7532
5: 0x7fd20a1425ad - <unknown>
6: 0x7fd20cb891e6 - std::panicking::rust_panic_with_hook::ha43241025fb228f0
7: 0x7fd20a18c221 - <unknown>
8: 0x7fd20a17c246 - <unknown>
9: 0x7fd20a17bfc6 - <unknown>
10: 0x7fd20a19a8e1 - <rustc_errors[9a66caa3f8a65bb2]::diagnostic::BugAbort as rustc_errors[9a66caa3f8a65bb2]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
11: 0x7fd20a92c36d - <unknown>
12: 0x7fd20a96b188 - <unknown>
13: 0x7fd20a96bada - <unknown>
14: 0x7fd20a9570db - <unknown>
15: 0x7fd20a94dd57 - <unknown>
16: 0x7fd20be2d539 - <unknown>
17: 0x7fd20bf150eb - rustc_mir_transform[ec94f2ac466ce048]::mir_built
18: 0x7fd20c1c6467 - <unknown>
19: 0x7fd20ac4e640 - <unknown>
20: 0x7fd20ac3cb28 - <unknown>
21: 0x7fd20ad4ed98 - <unknown>
22: 0x7fd20bd63e17 - rustc_mir_build[b87ccf4b6ee18353]::check_unsafety::check_unsafety
23: 0x7fd20c1b87f7 - <unknown>
24: 0x7fd20ac4dd90 - <unknown>
25: 0x7fd20ac3714e - <unknown>
26: 0x7fd20ad5d45f - <unknown>
27: 0x7fd20baa8cbb - <unknown>
28: 0x7fd20c8bd509 - rustc_interface[73b4cfe22fadb0e6]::passes::analysis
29: 0x7fd20ca728b7 - <unknown>
30: 0x7fd20ac4d70f - <unknown>
31: 0x7fd20ac06129 - <unknown>
32: 0x7fd20ad49194 - <unknown>
33: 0x7fd20c7b0ee4 - <unknown>
34: 0x7fd20c7ad945 - <unknown>
35: 0x7fd20c7b3dd0 - <unknown>
36: 0x7fd20cb952ab - <unknown>
37: 0x7fd2094a66d7 - start_thread
38: 0x7fd20952a60c - __clone3
39: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) (Fedora 1.82.0-1.fc40) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [mir_built] building MIR for `<impl at src/main.rs:1:10: 1:15>::fmt`
#1 [check_unsafety] unsafety-checking `<impl at src/main.rs:1:10: 1:15>::fmt`
end of query stack
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error: internal compiler error: compiler/rustc_mir_transform/src/check_packed_ref.rs:49:21: builtin derive created an unaligned reference
--> src/main.rs:5:10
|
1 | #[derive(Debug)]
| ----- in this derive macro expansion
...
5 | Some(T),
| ^
|
= note: this error: internal compiler error originates in the derive macro `Debug` (in Nightly builds, run with -Z macro-backtrace for more info)
thread 'rustc' panicked at compiler/rustc_mir_transform/src/check_packed_ref.rs:49:21:
Box<dyn Any>
stack backtrace:
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) (Fedora 1.82.0-1.fc40) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [mir_built] building MIR for `<impl at src/main.rs:1:10: 1:15>::fmt`
#1 [check_unsafety] unsafety-checking `<impl at src/main.rs:1:10: 1:15>::fmt`
#2 [analysis] running analysis passes on this crate
end of query stack
```
</p>
</details>
| I-ICE,A-macros,T-compiler,C-bug,E-needs-bisection,A-repr-packed | low | Critical |
2,657,723,893 | deno | `deno check` does not respect top level `exclude` in `deno.json` | Version: Deno 2.0.6
Running `deno check .` in the project root directory checks files within a directory `foo/bar` specified in `exclude`. `deno fmt` and `deno lint` respect the `exclude` configuration.
Running `deno check foo/bar` where `foo/bar/` is the excluded directory will return a warning that no matching files were found.
TLDR:
- `foo/bar/` in top level `exclude` in `deno.json`
- `deno check .` reports errors in `foo/bar/baz.ts`
- `deno check foo/bar` returns warning that no matching files were found. | needs investigation,tsc,workspaces | low | Critical |
2,657,729,489 | ant-design | labelCol.offset has no effect on a Form with vertical layout | ### Reproduction link
[](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-sdgg8r)
### Steps to reproduce
Open the reproduction link and observe the result directly.
### What is expected?

### What is actually happening?
The label is not aligned.
| Environment | Info |
| --- | --- |
| antd | 5.22.1 |
| React | 18 |
| System | windows11 |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,help wanted | low | Major |
2,657,757,366 | vscode | Git - Expose `Repository#getEmptyTree()` in Git extension API | This would allow extensions that depend on the Git extension to accurately handle getting changed files in commits with no parent, e.g. Copilot Chat which uses the Git extension to compute related files for the Copilot Edits working set. | feature-request,git | low | Minor |
2,657,819,156 | angular | Misleading zoneless documentation | ### Describe the problem that you experienced
In [`adev/src/content/guide/zoneless.md`, line 61](https://github.com/angular/angular/blob/190b4d7763e2953b63b478cc749846a5d5423795/adev/src/content/guide/zoneless.md?plain=1#L61C1-L62C448) it says:
"**When a library component is a host for user-components which might use `ChangeDetectionStrategy.Default`, it cannot use `OnPush` because that would prevent the child component from being refreshed if it is not `OnPush` compatible and relies on ZoneJS to trigger change detection.**"
In my understanding, an app developer will use my library component in a template he has control of. This means, the app developer can determine the change detection strategy himself.
My library component can host the app developer's content in the form of content projection or a provided template. In both cases, the content is supplied by the app developer, in a context they control. The template and/or projected content will be parsed by Angular in the context it was "written in" by the app developer.
Imagine the following situation:
Component A `Default`, our root component.
Component B `Default`, a component that relies on zone.js for change detection
Component C `OnPush`, projects content or prints a template.
Component A.html:
```html
<component-c>
  <ng-template #template>
    <component-b></component-b>
  </ng-template>
  <component-b></component-b>
</component-c>
```
This will work as long as Component A is set to `Default`.
See https://github.com/fl-mueller/zonelessTest/tree/angular-issue-showcase
### Enter the URL of the topic with the problem
_No response_
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
_No response_
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much as detail as you can
_No response_ | P5,area: docs | low | Critical |
2,657,853,994 | pytorch | [inductor][cpu]maml fp32/amp_fp16 performance regression in 2024-11-11 nightly release | ### 🐛 Describe the bug
<p>fp32 static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>single</td>
<td>1</td>
<td>1.003519</td>
<td>0.098442143</td>
<td>0.098788560901217</td>
<td>79.571829</td>
<td>1</td>
<td>1.154072</td>
<td>0.08604004500000001</td>
<td>0.09929640681324</td>
<td>78.661007</td>
<td>0.87</td>
<td>1.01</td>
<td>0.87</td>
<td>0.99</td>
</tr>
</tbody>
</table>
<p>fp32 dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>single</td>
<td>1</td>
<td>1.003255</td>
<td>0.098474461</td>
<td>0.098794995370555</td>
<td>79.525869</td>
<td>1</td>
<td>1.156549</td>
<td>0.085749273</td>
<td>0.099173235938877</td>
<td>78.572847</td>
<td>0.87</td>
<td>1.0</td>
<td>0.87</td>
<td>0.99</td>
</tr>
</tbody>
</table>
<p>amp fp16 static default</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>multiple</td>
<td>1</td>
<td>1.015477</td>
<td>0.1186938</td>
<td>0.1205308239426</td>
<td>90.049904</td>
<td>1</td>
<td>1.193928</td>
<td>0.10232684</td>
<td>0.12217087942752002</td>
<td>90.818413</td>
<td>0.85</td>
<td>1.01</td>
<td>0.86</td>
<td>1.01</td>
</tr>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>single</td>
<td>1</td>
<td>1.086367</td>
<td>0.98123772</td>
<td>1.06598427816324</td>
<td>160.030677</td>
<td>1</td>
<td>8.338911</td>
<td>0.13361106</td>
<td>1.11417073795566</td>
<td>161.398547</td>
<td>0.13</td>
<td>1.05</td>
<td>0.14</td>
<td>1.01</td>
</tr>
</tbody>
</table>
The bad commit: fe4fa1df9fa981f31f85170e40a754479759267f
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench maml float32 first static cpp
Testing with cpp wrapper.
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml
running benchmark: 100%|█████████████████████████████████████████████████████| 50/50 [00:09<00:00, 5.06it/s]
0.998x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml,1,0.998400,98.619239,80.780985,0.705929,46.189773,65.431142,435,18,14,3,0,0,0
```
The good commit: fdfd4c50bad8b7d12416d45233ad990c45cf7ef9
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench maml float32 first static cpp
Testing with cpp wrapper.
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml
running benchmark: 100%|█████████████████████████████████████████████████████| 50/50 [00:09<00:00, 5.42it/s]
1.157x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml,1,1.157074,85.184368,21.318866,0.743438,46.032486,61.918413,435,18,14,3,0,0,0
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>766a5e3a</td>
<td>main</td>
<td>e522b45c</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>5ef33e40b3c3fd2608552d3301c7255826c0e7f6</td>
<td>main</td>
<td>f121eab0182f7da58b39ffb84744bdc7109817e3</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.5.0a0+fa44bda</td>
<td>main</td>
<td>2.5.0a0+fa44bda</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh single inference performance torchbench maml float32 first static cpp
Suspected guilty commit: fe4fa1df9fa981f31f85170e40a754479759267f
[torchbench-maml-inference-float32-static-cpp-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/17744695/torchbench-maml-inference-float32-static-cpp-single-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| oncall: pt2,oncall: cpu inductor | low | Critical |
2,657,906,378 | deno | question: confusion about adding developer/dev dependencies | I am not quite sure how Deno knows whether a dependency I am adding is a dev dependency or not. Here is a reproduction.
```bash
deno init dev-dependency-test
cd dev-dependency-test
deno add --dev jsr:@denosaurs/argontwo
```
But there is no separation in `deno.json` indicating which entries are dev dependencies.
```json
{
"tasks": {
"dev": "deno run --watch main.ts"
},
"imports": {
"@denosaurs/argontwo": "jsr:@denosaurs/argontwo@^0.2.0",
"@std/assert": "jsr:@std/assert@1"
}
}
```
And in `deno.lock`.
```json
{
"version": "4",
"specifiers": {
"jsr:@denosaurs/argontwo@0.2": "0.2.0",
"jsr:@denosaurs/lz4@0.1.4": "0.1.4",
"jsr:@std/assert@1": "1.0.8",
"jsr:@std/encoding@0.221": "0.221.0",
"jsr:@std/internal@^1.0.5": "1.0.5"
},
"jsr": {
"@denosaurs/argontwo@0.2.0": {
"integrity": "1ce2f4c90ba3643e6fffd0d9be059f7cacfb62cf1b314049b6c7c71d87cb92a1",
"dependencies": [
"jsr:@denosaurs/lz4",
"jsr:@std/encoding"
]
},
"@denosaurs/lz4@0.1.4": {
"integrity": "ad5d556c02eb01fe1e0f2e953d7be066a14870afe149b1aed1ced019460f6aa1"
},
"@std/assert@1.0.8": {
"integrity": "ebe0bd7eb488ee39686f77003992f389a06c3da1bbd8022184804852b2fa641b",
"dependencies": [
"jsr:@std/internal"
]
},
"@std/encoding@0.221.0": {
"integrity": "d1dd76ef0dc5d14088411e6dc1dede53bf8308c95d1537df1214c97137208e45"
},
"@std/internal@1.0.5": {
"integrity": "54a546004f769c1ac9e025abd15a76b6671ddc9687e2313b67376125650dc7ba"
}
},
"workspace": {
"dependencies": [
"jsr:@denosaurs/argontwo@0.2",
"jsr:@std/assert@1"
]
}
}
```
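For comparison, a minimal sketch of how npm's `package.json` records the split (package names illustrative):

```json
{
  "dependencies": {
    "some-runtime-lib": "^1.0.0"
  },
  "devDependencies": {
    "some-test-lib": "^2.0.0"
  }
}
```

Neither the `imports` map in `deno.json` above nor the lock file's `workspace.dependencies` list carries an equivalent marker.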
In Node, there is a field called `devDependencies` that makes it easy to tell whether a dependency is a dev dependency. Maybe the `--dev` flag was supposed to record the package as a dev dependency in `deno.lock`. Was this overlooked, or is it something still being worked on? | install,triage required 👀 | medium | Major |
2,657,926,768 | go | net: TestLookupNoSuchHost/LookupSRV_NXDOMAIN/forced_cgo_resolver failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestLookupNoSuchHost/LookupSRV_NXDOMAIN/forced_cgo_resolver"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731319005757721169)):
=== RUN TestLookupNoSuchHost/LookupSRV_NXDOMAIN/forced_cgo_resolver
lookup_test.go:1599: IsNotFound is set to false
lookup_test.go:1603: error message is not equal to: no such host
lookup_test.go:1612: backoff 1s after failure lookup _unknown._tcp.invalid.invalid. on 8.8.8.8:53: read udp 162.231.98.115:15878->8.8.8.8:53: i/o timeout
lookup_test.go:1599: IsNotFound is set to false
lookup_test.go:1603: error message is not equal to: no such host
lookup_test.go:1612: backoff 5s after failure lookup _unknown._tcp.invalid.invalid. on 8.8.8.8:53: read udp 162.231.98.115:1702->8.8.8.8:53: i/o timeout
lookup_test.go:1599: IsNotFound is set to false
lookup_test.go:1603: error message is not equal to: no such host
lookup_test.go:1612: backoff 30s after failure lookup _unknown._tcp.invalid.invalid. on 8.8.8.8:53: read udp 162.231.98.115:19387->8.8.8.8:53: i/o timeout
lookup_test.go:1599: IsNotFound is set to false
lookup_test.go:1603: error message is not equal to: no such host
lookup_test.go:1617: unexpected error: lookup _unknown._tcp.invalid.invalid. on 8.8.8.8:53: read udp 162.231.98.115:47538->8.8.8.8:53: i/o timeout
--- FAIL: TestLookupNoSuchHost/LookupSRV_NXDOMAIN/forced_cgo_resolver (76.13s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,657,939,467 | godot | 3D Navigation Scheme takes a lot of time to set the proper shortcut for Orbit, Pan, and Zoom mouse buttons | ### Tested versions
Reproducable in: 4.4.dev[277cb68e1]
### System information
Godot v4.4.dev (277cb68e1) - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 Ti (NVIDIA; 32.0.15.6603) - 11th Gen Intel(R) Core(TM) i5-11400F @ 2.60GHz (12 threads)
### Issue description
If you go to Editor Settings and change `editors/3d/navigation/navigation_scheme`, the Orbit Mouse Button, Pan Mouse Button, and Zoom Mouse Button all change their mouse buttons first, and only after saving do their key modifiers change.
https://github.com/user-attachments/assets/fa143a1c-2353-41ce-8142-80bd1f71194e
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability,topic:input,topic:3d | low | Minor |
2,657,954,430 | godot | Can't set 3D Navigation Mouse Buttons modifiers in Editor Settings | ### Tested versions
4.4.dev.custom_build [76fa7b291]
### System information
Godot v4.4.dev (76fa7b291) - Windows 10.0.19045 - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce GTX 1080 Ti (NVIDIA; 32.0.15.6603) - 11th Gen Intel(R) Core(TM) i5-11400F @ 2.60GHz (12 threads)
### Issue description
The user is unable to set the 3D Navigation mouse button modifier keys and can only change them according to the Default Navigation Schemes.
https://github.com/user-attachments/assets/d9dec110-c895-428d-b7a6-690ab2fcaf43
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability,topic:input | low | Minor |
2,657,958,048 | pytorch | Segmentation fault (core dumped) in `torch.autograd.profiler.profile` | ### 🐛 Describe the bug
`torch.autograd.profiler.profile` triggers a crash when a GPU is available.
minimal example:
```
import threading
import torch
from torch.autograd.profiler import profile
def multi_threaded_profiler():
with profile() as prof:
torch.add(1, 1)
torch.mul(1, 1)
def test_multithread_profiler_crash(self):
threads = []
for _ in range(10):
t = threading.Thread(target=multi_threaded_profiler)
threads.append(t)
t.start()
for t in threads:
t.join()
test_multithread_profiler_crash(None)
```
outputs:
```
ERROR: External init callback must run in same thread as registerClient (1040168512 != -1250609344)
WARNING:2024-11-15 09:40:15 660751:660824 init.cpp:178] function cbapi->getCuptiStatus() failed with error CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED (39)
ERROR: External init callback must run in same thread as registerClient (1446118976 != -1250609344)
WARNING:2024-11-15 09:40:15 660751:660829 init.cpp:178] function cbapi->getCuptiStatus() failed with error CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED (39)WARNING:2024-11-15 09:40:15 660751:660826 init.cpp:178] function cbapi->getCuptiStatus() failed with error CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED (39)
WARNING:2024-11-15 09:40:15 660751:660827 init.cpp:178] function cbapi->getCuptiStatus() failed with error CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED (39)
Segmentation fault (core dumped)
```
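Until the underlying thread-safety bug is fixed, one workaround (my suggestion, not an official PyTorch API) is to serialize entry into the profiler with a module-level lock. Sketched below with a stand-in context manager instead of the real `profile()`, so it runs without torch:

```python
import threading
from contextlib import contextmanager

_profiler_lock = threading.Lock()


@contextmanager
def fake_profile():
    # Stand-in for torch.autograd.profiler.profile; the real one would go here.
    yield object()


results = []


def profiled_work(i):
    # The lock ensures only one thread is inside the profiler context at a time,
    # avoiding the concurrent CUPTI subscriber registration seen in the crash.
    with _profiler_lock:
        with fake_profile():
            results.append(i)


threads = [threading.Thread(target=profiled_work, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

This serializes profiling, so it only helps when per-thread traces are not required at the same instant.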
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_
tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x
2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanc
ed tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni
avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_ep
p hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm m
d_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | triage review,oncall: profiler | low | Critical |
2,657,992,319 | storybook | [Bug]: Tags filter visual bug when there are empty value tags | ### Describe the bug
If a story has tags like so:
```ts
export const MyStory = {
tags: ['']
}
```
It results in those `''` values being indexed, and they are shown like this in the UI:
<img src="https://github.com/user-attachments/assets/8c72d9e6-59e7-4dcf-91d0-c3b9de69f3fb" width="350"/>
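One plausible fix sketch on the indexing side (my illustration, not Storybook's actual indexer code): drop empty or whitespace-only tags before they are indexed:

```typescript
// Hypothetical normalization step applied to a story's tags before indexing.
function normalizeTags(tags: string[]): string[] {
  return tags.filter((tag) => tag.trim().length > 0);
}

console.log(normalizeTags(["", "  ", "autodocs"])); // keeps only "autodocs"
```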
### Reproduction link
https://stackblitz.com/edit/github-yjmnux?file=src%2Fstories%2FButton.stories.js&preset=node
### Reproduction steps
_No response_
### System
```bash
-
```
### Additional context
_No response_ | bug,tags | low | Critical |
2,658,012,586 | flutter | [ios] Flutter view can't refresh in PiP when the app is in the background. | ### Steps to reproduce
Use the project:
https://github.com/ahyangnb/flutter_pip
Run it on iOS, tap the `Open` button to display PiP, then go back to the iOS home screen.
### Expected results
The PiP content refreshes with the new Flutter widget.
### Actual results
Xcode prints the following error:
```
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
```
This repeats the whole time the app is in the background.
### Code sample
```swift
@objc public func setDisplayText(name: String, msg: String) {
let conContent = name + ":" + msg
print("setDisplayText::receive::" + conContent)
let oldContent = delegate!.flutterController!.view
// Remove the existing flutterController view from its superview
delegate!.flutterController!.view = nil
/// Tested adding it as a new Label and it worked perfectly [even if it was changed midway through the background]
let nativeLabel = UILabel()
nativeLabel.text = "New Content"
nativeLabel.textColor = UIColor.red
nativeLabel.textAlignment = .center
nativeLabel.frame = CGRect(x: 0, y: 0, width: 180, height: 48.0)
delegate!.flutterController!.view = nativeLabel
/// Trying to reassign to the old version of the content results in: Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
delegate!.flutterController!.view = oldContent
// The final add operation.
firstWindow?.addSubview(delegate!.flutterController!.view)
}
```
Assigning `delegate!.flutterController!.view = nativeLabel` works completely normally, while `delegate!.flutterController!.view = oldContent` triggers the error.
I asked ChatGPT, and the answer was:
```
The issue here appears to be related to restrictions that iOS imposes on rendering in the background. When your app is in the background, iOS limits GPU work (such as rendering and managing views) to save resources, and this impacts `FlutterViewController` because Flutter heavily relies on GPU resources for rendering. Here’s what’s happening in detail:
1. **Native `UILabel` works in the background**: The line `delegate!.flutterController!.view = nativeLabel` succeeds because `UILabel` is a lightweight, UIKit-native component that does not require GPU resources for basic rendering. iOS allows UIKit components like `UILabel` to refresh even in the background, as these are handled by the CPU.
2. **Flutter’s `FlutterViewController` fails in the background**: The line `delegate!.flutterController!.view = oldContent` fails because reassigning a `FlutterViewController`'s view involves a GPU-rendered context that iOS does not permit in the background. When your app is in the background, the system throws an error (kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted) because the app does not have permission to submit GPU work, which is required for Flutter views.
3. **Flutter views and background limitations**: Flutter views are managed by a Metal or OpenGL context in iOS, both of which cannot render when the app is not in the foreground. Attempting to reassign the `FlutterViewController`'s view while in the background causes an `EXC_BAD_ACCESS` error because the GPU resources needed for Flutter views are restricted.
### Potential Workarounds
If you need to update a Flutter view while the app is in the background, consider these strategies:
1. **Avoid assigning Flutter views in the background**: Instead, defer any Flutter view updates until the app returns to the foreground. You can listen for `UIApplication.didBecomeActiveNotification` to detect when the app returns to the foreground and safely reassign the view at that point.
2. **Send background data updates to Flutter when it becomes active**: If you only need to pass data (not a view) to Flutter, you can save data in a shared state or UserDefaults and notify Flutter to reload this data when the app is active again.
3. **Use a Flutter plugin or platform channel**: Send necessary updates to Flutter through a plugin or platform channel while the app is in the background, and let Flutter update its view when it is in the foreground.
These approaches should help avoid GPU-related crashes and allow your app to handle background updates more effectively.
```
Link: https://chatgpt.com/share/6735b595-17e4-8002-a807-c7ad68f73f9a
Related Apple Developer Forums thread: https://forums.developer.apple.com/forums/thread/76818
Related doc: https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/ImplementingaMultitasking-awareOpenGLESApplication/ImplementingaMultitasking-awareOpenGLESApplication.html#//apple_ref/doc/uid/TP40008793-CH5-SW2
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
```
zengyang@zengyangdeMac-mini ~ % flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.0 23A344 darwin-arm64, locale
en-CN)
• Flutter version 3.24.3 on channel stable at
/Users/zengyang/fvm/versions/3.24.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (9 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
• Flutter download mirror https://storage.flutter-io.cn
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/zengyang/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/zengyang/Library/Android/sdk
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (5 available)
• 苹果11啊 (mobile) • 00008030-0019082236E2802E • ios
• iOS 17.6.1 21G93
• qqqqqqq1iPhone (mobile) • 00008030-000171CE1168802E • ios
• iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64
• macOS 14.0 23A344 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin
• macOS 14.0 23A344 darwin-arm64
• Chrome (web) • chrome •
web-javascript • Google Chrome 130.0.6723.117
! Error: Browsing on the local area network for iPhone 11 Pro Max. Ensure
the device is unlocked and attached with a cable or associated with the
same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
``` | platform-ios,engine,a: platform-views,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | low | Critical |
2,658,077,152 | pytorch | “Can't swap an already initialized allocator” when calling torch.cuda.memory.change_current_allocator | ### 🐛 Describe the bug
I wrote an example based on the one from the docs: https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management
The cause I found is that once CUDA lazy initialization has already run (for example, after calling `torch.cuda.get_device_capability()`), `change_current_allocator` raises this error.
```python
import torch
from torch.utils.cpp_extension import load_inline

dummy_allocator_source = """
#include <sys/types.h>
#include <cuda.h>
#include <cuda_runtime_api.h>
#include <stdio.h>
#include <iostream>

extern "C" {
void* my_malloc(ssize_t size, int device, cudaStream_t stream) {
    void *ptr;
    cudaMalloc(&ptr, size);
    std::cout << "alloc " << ptr << " " << size << std::endl;
    return ptr;
}

void my_free(void* ptr, ssize_t size, int device, cudaStream_t stream) {
    std::cout << "free " << ptr << " " << stream << std::endl;
    cudaFree(ptr);
}
}
"""

# Add with_cuda=True to explicitly enable CUDA support.
dummy_allocator = load_inline(
    name="dummy_allocator",
    cpp_sources=[dummy_allocator_source],
    extra_ldflags=["-lcudart"],  # Link with the CUDA runtime library
    extra_cflags=["-O3"],
    is_python_module=False,
    keep_intermediates=False,
    verbose=True,
    with_cuda=True,  # Explicitly enable CUDA support
)

allocator = torch.cuda.memory.CUDAPluggableAllocator(
    dummy_allocator,
    "my_malloc",
    "my_free",
)

torch.cuda.memory.change_current_allocator(allocator)
```
```
Using /root/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
No modifications detected for re-loaded extension module dummy_allocator_v4, skipping build step...
Loading extension module dummy_allocator_v4...
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-20-a80e91a588c3>](https://localhost:8080/#) in <cell line: 43>()
41 )
42
---> 43 torch.cuda.memory.change_current_allocator(allocator)
[/usr/local/lib/python3.10/dist-packages/torch/cuda/memory.py](https://localhost:8080/#) in change_current_allocator(allocator)
960 See :ref:`cuda-memory-management` for details on creating and using a custom allocator
961 """
--> 962 torch._C._cuda_changeCurrentAllocator(allocator.allocator())
963
964
RuntimeError: Can't swap an already initialized allocator
```
### Versions
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.36
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.0+cu121
[pip3] torchaudio==2.5.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu121
[conda] Could not collect
cc @ptrblck @msaroufim | module: cuda,triaged,module: CUDACachingAllocator | low | Critical |
2,658,083,889 | rust | Renaming targets might be a breaking change | Hi all,
I am unsure if this is the right place for this sort of issue and apologize if it isn't. I would also like to stress that I fully understand this issue is not currently actionable, but I want to bring it to the attention of the maintainers in case this is a blind spot. If this has been discussed to death elsewhere (that I could not find) I also apologize for the extra noise.
Recently, through compiler warnings I have been made aware of: https://blog.rust-lang.org/2024/04/09/updates-to-rusts-wasi-targets.html#renaming-wasm32-wasi-to-wasm32-wasip1
Namely, renaming the `wasm32-wasi` target to `wasm32-wasip1`. In Zellij, we bundle minor applications that we call plugins - compiled to this target - within our executable. This is how we compile and distribute our software and represents a compromise of various factors. Renaming this target represents a breaking change for us. It means we will no longer be able to compile the same project with the newer Rust version, and that if we upgrade our toolchain, we will no longer be able to compile the same project with a (much) older one.
This is not a big deal for us. Making the change should be a trivial search/replace, and we don't absolutely have to support much older toolchains. However, as a user of the language I assumed (perhaps mistakenly) that targets are an external user-facing API. That any non-backwards-compatible changes to them would be considered breaking changes and should only be expected in major version bumps. Since this was not the case here, it gets me a little worried that perhaps either I do not understand the breaking changes policy (even after reading the relevant RFC) or there is some sort of miscommunication going on between the language developers and at least some of its users. Or - of course - this is a blind spot.
In case this is the latter, I wanted to bring it to the attention of the maintainers. Not for this change of course - as I understand it's already very much underway - but at least for future changes.
Thanks for all the work everyone is doing. | T-compiler,C-discussion,A-targets,O-wasi | medium | Minor |
2,658,130,646 | PowerToys | Add Show/Hide Window Toggle to Keyboard Manager -> Remap Shortcut -> Run Program -> If running, | ### Description of the new feature / enhancement
In the Keyboard manager, a remapped shortcut can have a setting of "Show Window" (Keyboard Manager -> Remap Shortcut -> Run Program -> If running)
It would be great to be able to press the shortcut a second time to hide the window again, for ease of use.
### Scenario when this would be used?
If the window is left running in the background, you may be swapping in and out of the app. On a laptop you may not keep the app in the foreground, so having the option to dismiss it quickly (so as not to interrupt your flow) would be useful.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,658,211,085 | pytorch | [inductor][cpu]opacus_cifar10 and functorch_dp_cifar10 AMP performance regression in 2024-11-11 nightly release | ### 🐛 Describe the bug
<p>AMP static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.783096</td>
<td>0.010444737999999999</td>
<td>0.008179232548848</td>
<td>13.042257</td>
<td>64</td>
<td>0.978642</td>
<td>0.007060253000000001</td>
<td>0.006909460116426001</td>
<td>13.063731</td>
<td>0.8</td>
<td>0.84</td>
<td>0.68</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.789183</td>
<td>0.011084602</td>
<td>0.008747779460166</td>
<td>13.726767</td>
<td>64</td>
<td>1.230847</td>
<td>0.00693516</td>
<td>0.00853612088052</td>
<td>13.672738</td>
<td>0.64</td>
<td>0.98</td>
<td>0.63</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>single</td>
<td>1</td>
<td>5.81558</td>
<td>0.001893122</td>
<td>0.011009602440759998</td>
<td>12.406509</td>
<td>1</td>
<td>6.771293</td>
<td>0.00166472</td>
<td>0.01127230688296</td>
<td>12.353326</td>
<td>0.86</td>
<td>1.02</td>
<td>0.88</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>single</td>
<td>1</td>
<td>5.713856</td>
<td>0.001934385</td>
<td>0.011052797338559999</td>
<td>13.00355</td>
<td>1</td>
<td>6.524588</td>
<td>0.0016443719999999998</td>
<td>0.010728849818735998</td>
<td>12.954403</td>
<td>0.88</td>
<td>0.97</td>
<td>0.85</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>AMP dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.728946</td>
<td>0.011348397</td>
<td>0.008272368599562</td>
<td>15.815295</td>
<td>64</td>
<td>1.025937</td>
<td>0.008005277</td>
<td>0.008212909869549001</td>
<td>15.731289</td>
<td>0.71</td>
<td>0.99</td>
<td>0.71</td>
<td>0.99</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.785165</td>
<td>0.011229663</td>
<td>0.008817138349395001</td>
<td>16.15449</td>
<td>64</td>
<td>1.053135</td>
<td>0.008097649</td>
<td>0.008527917579615</td>
<td>16.093056</td>
<td>0.75</td>
<td>0.97</td>
<td>0.72</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>single</td>
<td>1</td>
<td>5.796491</td>
<td>0.0018960539999999999</td>
<td>0.010990459946513998</td>
<td>12.354943</td>
<td>1</td>
<td>6.658744</td>
<td>0.00161524</td>
<td>0.01075546965856</td>
<td>12.383582</td>
<td>0.87</td>
<td>0.98</td>
<td>0.85</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>single</td>
<td>1</td>
<td>5.785104</td>
<td>0.001917703</td>
<td>0.011094111296112</td>
<td>13.000144</td>
<td>1</td>
<td>6.664832</td>
<td>0.001649579</td>
<td>0.010994166905728</td>
<td>12.966551</td>
<td>0.87</td>
<td>0.99</td>
<td>0.86</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>AMP dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.730634</td>
<td>0.022485953</td>
<td>0.016429001784202</td>
<td>35.01237</td>
<td>64</td>
<td>0.974152</td>
<td>0.017241956</td>
<td>0.016796285921312</td>
<td>37.339013</td>
<td>0.75</td>
<td>1.02</td>
<td>0.77</td>
<td>1.07</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.779876</td>
<td>0.022321979999999998</td>
<td>0.01740837647448</td>
<td>33.038597</td>
<td>64</td>
<td>1.009547</td>
<td>0.016893505</td>
<td>0.017054787292235</td>
<td>28.392059</td>
<td>0.77</td>
<td>0.98</td>
<td>0.76</td>
<td>0.86</td>
</tr>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>single</td>
<td>1</td>
<td>5.055158</td>
<td>0.0031064919999999998</td>
<td>0.015703807885736</td>
<td>58.557283</td>
<td>1</td>
<td>5.870757</td>
<td>0.0026807479999999997</td>
<td>0.015738020086236</td>
<td>63.621082</td>
<td>0.86</td>
<td>1.0</td>
<td>0.86</td>
<td>1.09</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>single</td>
<td>1</td>
<td>5.138824</td>
<td>0.003107214</td>
<td>0.015967425876336</td>
<td>24.318755</td>
<td>1</td>
<td>6.165533</td>
<td>0.00269438</td>
<td>0.01661228880454</td>
<td>27.084622</td>
<td>0.83</td>
<td>1.04</td>
<td>0.87</td>
<td>1.11</td>
</tr>
</tbody>
</table>
<p>AMP static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.866286</td>
<td>0.02582328</td>
<td>0.02237034593808</td>
<td>44.298414</td>
<td>64</td>
<td>1.178419</td>
<td>0.01879288</td>
<td>0.022145886856720004</td>
<td>44.466475</td>
<td>0.74</td>
<td>0.99</td>
<td>0.73</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>opacus_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>0.947846</td>
<td>0.02319824</td>
<td>0.021988358991039996</td>
<td>42.643621</td>
<td>64</td>
<td>1.194638</td>
<td>0.018860679999999998</td>
<td>0.02253168503384</td>
<td>42.469943</td>
<td>0.79</td>
<td>1.02</td>
<td>0.81</td>
<td>1.0</td>
</tr>
</tbody>
</table>
The bad commit: a766d84a3c1fe78f246c8e4da2f85b249824151b
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench opacus_cifar10 amp
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval opacus_cifar10
running benchmark: 100%|█████████████████████████████████████████████████████| 50/50 [00:01<00:00, 49.87it/s]
0.568x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,opacus_cifar10,64,0.568399,12.456063,8.959801,0.893319,65.195213,72.980890,71,1,0,0,0,0,1
```
The last good commit: 1e9390a30ac29ee3a4a75c184059c5c4cb3d5f0b
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench opacus_cifar10 amp
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval opacus_cifar10
running benchmark: 100%|█████████████████████████████████████████████████████| 50/50 [00:00<00:00, 57.89it/s]
0.745x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,opacus_cifar10,64,0.744655,9.613339,8.972916,0.893434,65.273856,73.059533,71,1,0,0,0,0,1
```
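A small helper (hypothetical — not part of `inductor_single_run.sh` or the benchmark harness) to quantify the drop from the two CSV rows printed above:

```python
import csv
import io

# Header and data rows copied from the two benchmark runs above.
HEADER = ("dev,name,batch_size,speedup,abs_latency,compilation_latency,"
          "compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,"
          "unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,"
          "autograd_compiles,cudagraph_skips")
BAD_ROW = ("cpu,opacus_cifar10,64,0.568399,12.456063,8.959801,0.893319,"
           "65.195213,72.980890,71,1,0,0,0,0,1")
GOOD_ROW = ("cpu,opacus_cifar10,64,0.744655,9.613339,8.972916,0.893434,"
            "65.273856,73.059533,71,1,0,0,0,0,1")

def parse_row(header: str, row: str) -> dict:
    """Parse one benchmark CSV row into a {column: value} dict."""
    return next(csv.DictReader(io.StringIO(header + "\n" + row)))

bad, good = parse_row(HEADER, BAD_ROW), parse_row(HEADER, GOOD_ROW)
ratio = float(bad["speedup"]) / float(good["speedup"])
print(f"speedup ratio (bad/good): {ratio:.4f}")  # ~0.76, matching the drop in the tables
```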
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>766a5e3a</td>
<td>main</td>
<td>e522b45c</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>5ef33e40b3c3fd2608552d3301c7255826c0e7f6</td>
<td>main</td>
<td>f121eab0182f7da58b39ffb84744bdc7109817e3</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.5.0a0+fa44bda</td>
<td>main</td>
<td>2.5.0a0+fa44bda</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench opacus_cifar10 amp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/a766d84a3c1fe78f246c8e4da2f85b249824151b
[torchbench-opacus_cifar10-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/17746417/torchbench-opacus_cifar10-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| oncall: pt2,oncall: cpu inductor | low | Critical |
2,658,222,703 | vscode | When hover contains one hover provider result, increase/decrease verbosity commands should work on that provider result | # Description
I created these keybindings to control hover verbosity:
```json
{
"key": "up",
"command": "editor.action.increaseHoverVerbosityLevel",
"when": "editorHoverFocused"
},
{
"key": "down",
"command": "editor.action.decreaseHoverVerbosityLevel",
"when": "editorHoverFocused"
}
```
When I focus the hover (`cmd+k cmd+i` and then again `cmd+k cmd+i`) and invoke these commands using keybindings, they get invoked, but they don't do anything (because the hover provider result within the hover widget is not focused).
I would assume that when there is only one hover result, the commands should work without me doing more gestures to focus the only provider result.
Additionally, I would expect that even when there are multiple hover results but only one of them allows increasing/decreasing verbosity, the commands should work without me focusing a specific result. | polish,editor-hover | low | Minor |
2,658,225,274 | angular | Output migration transforms eventEmitter.emit() into invalid output.emit() | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
Yes
### Description
The output migration transforms:
```ts
@Output() eventEmitter = new EventEmitter<string>();
// later
eventEmitter.emit();
```
into
```ts
eventEmitter = output<string>();
// later
eventEmitter.emit();
```
But the value is mandatory in the `output.emit` signature, so the application breaks.
The migration could generate `emit(undefined)` in that case, add a `TODO`, or only apply this transformation with the `--best-effort-mode` option.
To repro, in the following Stackblitz, run:
```
ng g @angular/core:signals --migrations=outputs --path=./ --no-best-effort-mode
```
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-h5tea2?file=src%2Fmain.ts
### Please provide the exception or error you saw
```true
✘ [ERROR] TS2554: Expected 1 arguments, but got 0. [plugin angular-compiler]
src/main.ts:18:22:
18 │ this.eventEmitter.emit();
╵ ~~~~
An argument for 'value' was not provided.
node_modules/@angular/core/index.d.ts:8288:9:
8288 │ emit(value: T): void;
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.0-rc.1
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 19.0.0-rc.1
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.0-rc.1
@angular-devkit/build-angular 19.0.0-rc.1
@angular-devkit/core 19.0.0-rc.1
@angular-devkit/schematics 19.0.0-rc.1
@schematics/angular 19.0.0-rc.1
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
```
### Anything else?
_No response_ | area: migrations,bug | low | Critical |
2,658,237,761 | next.js | Middleware does not add nonce when deployed to AWS amplify, but works well on local development | ### Link to the code that reproduces this issue
https://github.com/peterochieng/csp-test
### To Reproduce
1. Add middleware for Adding CSP directives.
```ts
import { NextRequest, NextResponse } from 'next/server'

export const nonce = Buffer.from(crypto.randomUUID()).toString('base64')

export function middleware(request: NextRequest) {
  const cspHeader = `
    default-src 'self';
    script-src 'self' 'nonce-${nonce}' 'unsafe-eval';
    style-src 'self' 'unsafe-inline';
    img-src 'self' blob: data:;
    font-src 'self';
    object-src 'none';
    base-uri 'self';
    form-action 'self';
    frame-ancestors 'none';
    upgrade-insecure-requests;
    connect-src 'self' https://example.com https://ipapi.co/json/;
  `
  // Replace newline characters and spaces
  const contentSecurityPolicyHeaderValue = cspHeader
    .replace(/\s{2,}/g, ' ')
    .trim()

  const requestHeaders = new Headers(request.headers)
  requestHeaders.set('x-nonce', nonce)
  requestHeaders.set(
    'Content-Security-Policy',
    contentSecurityPolicyHeaderValue
  )

  const response = NextResponse.next({
    request: {
      headers: requestHeaders,
    },
  })
  response.headers.set(
    'Content-Security-Policy',
    contentSecurityPolicyHeaderValue
  )

  return response
}

export const config = {
  matcher: [
    '/(.*)',
    '/', // explicit matcher for root route
  ],
}
```
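For reference, the nonce construction above — `Buffer.from(crypto.randomUUID()).toString('base64')` — just base64-encodes the textual UUID. A rough Python equivalent (illustrative only, not part of the repro):

```python
import base64
import uuid

def make_nonce() -> str:
    """Base64-encode the string form of a random UUID, mirroring
    Buffer.from(crypto.randomUUID()).toString('base64') in Node."""
    return base64.b64encode(str(uuid.uuid4()).encode("ascii")).decode("ascii")

nonce = make_nonce()
print(len(nonce), nonce)  # a 36-char UUID string encodes to 48 base64 chars
```

Note that the middleware computes the nonce at module scope, so it is generated once per server instance rather than once per request; whether that interacts with the Amplify deployment is not established here.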
2. Run the application on local dev using npm run dev. Test and you'll see it works fine. Open the network tab in DevTools and check the response. All scripts have the nonce added on local dev.
3. Deploy your sample application to AWS Amplify. You will get a white screen (or a screen with no content) with errors in the DevTools console. I have attached a screenshot of the error.

### Current vs. Expected behavior
I expect the web app to load without issues, since the CSP is added by the middleware. Instead, what I get is a blank screen with the following error:

### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro N
Binaries:
Node: 20.9.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.0.4
eslint-config-next: 14.0.4
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
_No response_ | bug,Middleware | low | Critical |
2,658,307,282 | vscode | Hover gray +/- should not be focusable? Also, hover should say what grayed-out buttons mean, e.g. "maximum verbosity achieved" | Repro:
1. open expandable hover
2. increase verbosity until max
3. click on the grayed out `+` button

Version: 1.96.0-insider
Commit: 399779942db4d7ab1bd6f6ae976482d0020f10ca
Date: 2024-11-13T05:04:32.098Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
| polish,editor-hover | low | Minor |
2,658,284,584 | neovim | Add continuations to `vim.lsp.buf.*` functions | ## Problem
All `vim.lsp.buf.*` functions run asynchronously; however, they do not provide callbacks for when they finish.
This is needed so users can invoke other behaviours after calling these functions. And in general, asynchronous functions should always provide continuations of some form.
## Proposal
- Add a `callback` argument after every `opts` field in `vim.lsp.buf*`.
- `vim.lsp.buf.hover(config)` -> `vim.lsp.buf.hover(opts, callback)`
- `vim.lsp.buf.declaration(opts)` -> `vim.lsp.buf.declaration(opts, callback)`
- `vim.lsp.buf.definition(opts)` -> `vim.lsp.buf.definition(opts, callback)`
- `vim.lsp.buf.type_definition(opts)` -> `vim.lsp.buf.type_definition(opts, callback)`
- `vim.lsp.buf.implementation(opts)` -> `vim.lsp.buf.implementation(opts, callback)`
- `vim.lsp.buf.signature_help(config)` -> `vim.lsp.buf.signature_help(opts, callback)`
- `vim.lsp.buf.format(opts)` -> `vim.lsp.buf.format(opts, callback)`
- `vim.lsp.buf.rename(new_name, opts)` -> `vim.lsp.buf.rename(new_name, opts, callback)`
- `vim.lsp.buf.references(context, opts)` -> `vim.lsp.buf.references(context, opts, callback)`
- `vim.lsp.buf.document_symbol(opts)` -> `vim.lsp.buf.document_symbol(opts, callback)`
- `vim.lsp.buf.incoming_calls()` -> `vim.lsp.buf.incoming_calls(callback)`
- `vim.lsp.buf.outgoing_calls()` -> `vim.lsp.buf.outgoing_calls(callback)`
- `vim.lsp.buf.typehierarchy(kind)` -> `vim.lsp.buf.typehierarchy(kind, callback)`
- `vim.lsp.buf.workspace_symbol(query, opts)` -> `vim.lsp.buf.workspace_symbol(query, opts, callback)`
- `vim.lsp.buf.document_highlight()` -> `vim.lsp.buf.document_highlight(callback)`
- `vim.lsp.buf.code_action(opts)` -> `vim.lsp.buf.code_action(opts, callback)`
- Deprecate the `async` and `sync` options from `vim.lsp.buf.format()`
- Deprecate `vim.lsp.buf_request_sync()`
- ~(optional) If `callback` is not provided, then functions are run synchronously (like `vim.uv.*`).~
- ~How would this impact mappings?~
- Each function should return an object with the methods `cancel()` and `wait()`, similar to `vim.system()`'s `vim.SystemObj`.
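A rough sketch of the proposed shape — in Python rather than Lua, purely for illustration; all names are hypothetical — showing a callback plus a returned object with `cancel()` and `wait()`:

```python
import threading

class RequestHandle:
    """Minimal handle exposing cancel()/wait(), similar in spirit to vim.SystemObj."""

    def __init__(self):
        self._done = threading.Event()
        self.cancelled = False
        self.result = None

    def cancel(self):
        self.cancelled = True
        self._done.set()

    def wait(self, timeout=None):
        self._done.wait(timeout)
        return self.result

def hover(callback=None):
    """Stand-in for the proposed vim.lsp.buf.hover(opts, callback):
    runs asynchronously, invokes the callback, and returns a handle."""
    handle = RequestHandle()

    def worker():
        result = {"contents": "documentation..."}  # pretend LSP response
        if callback is not None and not handle.cancelled:
            callback(result)
        handle.result = result
        handle._done.set()

    threading.Thread(target=worker).start()
    return handle

received = []
handle = hover(callback=received.append)
handle.wait()  # continuation-style use: block until the request finishes
print(received)
```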
### Notes:
- For functions that don't have an `opts`, `callback` will be the only option. If `opts` is required in the future then it can be added with overloads.
- `callback` could alternatively be made an option in `opts`, however this is likely to make it more awkward to use with what gets implemented in #19624, which will include `async.wrap(fun, cb_pos)` function. | enhancement,lsp,async | low | Major |
2,658,307,282 | deno | [WebSocketStream] Websocket handshake User-Agent header duplicated | Version: Deno/2.0.6
**Step 1**
Websocket Stream client (`ws.ts`):
```
const wss = new WebSocketStream('ws://1.1.1.1/' /* over http to see handshake request */, {
headers: {
'User-Agent': 'My-UA'
}
})
const { readable, writable } = await wss.opened
console.log('Connected')
```
**Step 2**
Open Wireshark to capture websocket's http handshake request
**Step 3**
Run client:
```
deno --allow-net --unstable-net ws.ts
```
**Result**
http handshake request
```
GET / HTTP/1.1
host: 1.1.1.1
upgrade: websocket
connection: Upgrade
sec-websocket-key: ZZPd2TFAskpUnsM+/1fgDQ==
user-agent: Deno/2.0.6
user-agent: My-UA
sec-websocket-version: 13
```

=> Some web servers concatenate duplicate headers into one value (or expose them as an array), which leads to the wrong `User-Agent` header
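Many HTTP stacks fold repeated field lines into a single comma-joined value (per RFC 9110 field-line combination), which is how a duplicated `user-agent` turns into a wrong value. A small illustration (not Deno code — the header pairs below are copied from the captured handshake):

```python
def fold_headers(pairs):
    """Combine repeated header fields into one comma-joined value, the way
    many servers/frameworks expose them (RFC 9110 field-line combination)."""
    folded = {}
    for name, value in pairs:
        key = name.lower()
        folded[key] = folded[key] + ", " + value if key in folded else value
    return folded

# The handshake headers captured above, as (name, value) pairs.
handshake = [
    ("host", "1.1.1.1"),
    ("user-agent", "Deno/2.0.6"),
    ("user-agent", "My-UA"),
]
print(fold_headers(handshake)["user-agent"])  # "Deno/2.0.6, My-UA" - not the intended UA
```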
**Reason**
Deno sets a default `User-Agent` header value (as an array) on the underlying fetch request
Deno then appends the `WebSocketStream` headers to the existing fetch headers array, producing the duplicate field | ext/websocket | low | Minor |
2,658,321,831 | pytorch | inconsistency in ```torch.nn.functional.adaptive_avg_pool3d``` on CPU and GPU | ### 🐛 Describe the bug
A consistency check of ```torch.nn.functional.adaptive_avg_pool3d``` between CPU and GPU using a bfloat16 tensor shows mismatched results:
```python
import torch
input_tensor = torch.tensor([
[
[
[-1.4062, 1.4609, 0.6797],
[-0.6875, -0.9492, 0.4434],
[-1.0312, -0.3730, 0.9453]
],
[
[0.9766, 0.2070, 0.8242],
[-1.6484, 1.4531, 1.7891],
[0.3945, 0.5352, -0.8711]
]
],
[
[
[-2.5000, 0.2617, -0.3613],
[-1.6094, -1.4219, -0.3281],
[-1.3594, -2.3594, -0.5312]
],
[
[-1.9375, 1.0938, 1.5547],
[-0.5820, -0.1167, 1.3438],
[1.1953, -1.3750, -1.3438]
]
]
], dtype=torch.bfloat16)
output_size = (None, 1, None)
result_cpu = torch.nn.functional.adaptive_avg_pool3d(input_tensor, output_size)
input_cuda = input_tensor.cuda()
result_gpu = torch.nn.functional.adaptive_avg_pool3d(input_cuda, output_size)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu.cpu())
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-02, rtol=1e-03)
print("Inconsistency with atol=1e-02 and rtol=1e-03:", inconsistent)
```
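For reference, `output_size = (None, 1, None)` keeps the D and W dimensions and averages H down to 1. A plain-Python sketch of that reduction on the first 3x3 plane of the input (arithmetic in float64, so the values differ from the bfloat16 results only by rounding):

```python
def mean_over_rows(plane):
    """Average an HxW plane over H, giving a 1xW row - what
    output_size=(None, 1, None) does to each (channel, depth) slice."""
    h = len(plane)
    return [sum(row[w] for row in plane) / h for w in range(len(plane[0]))]

plane = [
    [-1.4062, 1.4609, 0.6797],
    [-0.6875, -0.9492, 0.4434],
    [-1.0312, -0.3730, 0.9453],
]
# Approximately [-1.0416, 0.0462, 0.6895]; compare with the bf16 CPU
# result [-1.0391, 0.0461, 0.6875].
print(mean_over_rows(plane))
```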
outputs:
```
CPU result:
tensor([[[[-1.0391, 0.0461, 0.6875]],
[[-0.0923, 0.7305, 0.5781]]],
[[[-1.8359, -1.1719, -0.4062]],
[[-0.4395, -0.1328, 0.5195]]]], dtype=torch.bfloat16)
GPU result:
tensor([[[[-1.0391, 0.0461, 0.6875]],
[[-0.0923, 0.7305, 0.5820]]],
[[[-1.8203, -1.1719, -0.4062]],
[[-0.4414, -0.1328, 0.5195]]]], dtype=torch.bfloat16)
Inconsistency with atol=1e-02 and rtol=1e-03: True
```
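`torch.allclose` checks `|cpu - gpu| <= atol + rtol * |gpu|` elementwise. A plain-Python version of that criterion, applied to values taken from the printouts above, shows which element trips it (assuming the printed values are exact):

```python
def allclose(a, b, atol=1e-2, rtol=1e-3):
    """Elementwise |x - y| <= atol + rtol * |y|, as torch.allclose computes it."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))

# CPU -1.8359 vs GPU -1.8203: diff 0.0156 exceeds 0.01 + 0.001 * 1.8203.
print(allclose([-1.8359], [-1.8203]))
# CPU 0.5781 vs GPU 0.5820: diff 0.0039 is within tolerance.
print(allclose([0.5781], [0.5820]))
```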
### Versions
(executed on google colab)
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.0+cu121
[pip3] torchaudio==2.5.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu121
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim | module: nn,module: cuda,triaged | low | Critical |
2,658,358,573 | svelte | effects.js causes Fatal Error | ### Describe the bug
A `push` on a property of `null` causes a **total crash of the project**.
The error-causing line is at src/internal/client/reactivity/effect.js:270:
`context.l.r1.push(token);`
Error message:
```shell
Uncaught (in promise) TypeError: Cannot read properties of null (reading 'r1')
```
### Reproduction
A separate project, which I created from the template, doesn't show this effect.
Sorry, I'm not deep enough into Svelte internals to know how this bug can be reproduced.
External libs that caused this error:
"svelte-fa"
"svelte-file-dropzone"
I tried to add some icons with Fa in the basic example but it doesn't show the effect I face in production.
### Logs
```shell
Uncaught (in promise) TypeError: Cannot read properties of null (reading 'r1')
```
### System Info
```shell
System:
OS: Windows 11 10.0.22631
CPU: (20) x64 12th Gen Intel(R) Core(TM) i9-12900HK
Memory: 6.86 GB / 31.68 GB
Binaries:
Node: 22.11.0 - C:\Program Files\nodejs\node.EXE
Yarn: 1.22.22 - ~\AppData\Roaming\npm\yarn.CMD
npm: 9.6.4 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.7.0 - ~\scoop\shims\pnpm.EXE
Browsers:
Edge: Chromium (130.0.2849.46)
Internet Explorer: 11.0.22621.3527
npmPackages:
svelte: ^5.1.9 => 5.1.16
webpack: ^5.96.1 => 5.96.1
```
### Severity
blocking all usage of svelte | awaiting submitter | low | Critical |
2,658,503,046 | pytorch | inconsistency in ```torch.special.polygamma``` on CPU and GPU | ### 🐛 Describe the bug
```torch.special.polygamma``` returns inconsistent results on CPU and GPU with a float32 tensor:
```python
import torch
self = torch.tensor([
[[[-1.2297606]]],
[[[-2.5341392]]],
[[[-0.4952267]]],
[[[-0.1345852]]]
], dtype=torch.float32)
result_cpu = torch.special.polygamma(2, self)
self_cuda = self.cuda()
result_gpu = torch.special.polygamma(2, self_cuda)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu)
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-06, rtol=1e-05)
print("Inconsistency with atol=1e-06 and rtol=1e-05:", inconsistent)
```
outputs:
```
CPU result:
tensor([[[[ 1.6105e+02]]],
[[[-6.8598e+00]]],
[[[ 9.4641e-02]]],
[[[ 8.1686e+02]]]])
GPU result:
tensor([[[[ 1.6105e+02]]],
[[[-6.8598e+00]]],
[[[ 9.4638e-02]]],
[[[ 8.1686e+02]]]], device='cuda:0')
Inconsistency with atol=1e-06 and rtol=1e-05: True
```
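A hypothetical debugging aid (not part of the repro): counting how many representable float32 values (ULPs) separate the two results gives a scale-free measure of the CPU/GPU discrepancy — here the printed values `9.4641e-02` and `9.4638e-02` are a few hundred ULPs apart, well beyond single-rounding noise:

```python
import struct

def f32_ulp_distance(a: float, b: float) -> int:
    """Number of representable float32 values between a and b."""
    def ordered(x: float) -> int:
        # Reinterpret the float32 bit pattern as a signed int, then remap
        # negative floats so integer order matches float order.
        i = struct.unpack("<i", struct.pack("<f", x))[0]
        return i if i >= 0 else -2147483648 - i
    return abs(ordered(a) - ordered(b))

print(f32_ulp_distance(1.0, 1.0000001))          # adjacent float32 values: 1
print(f32_ulp_distance(9.4641e-02, 9.4638e-02))  # the printed CPU/GPU pair
```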
### Versions
(executed on google colab)
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.0+cu121
[pip3] torchaudio==2.5.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu121
[conda] Could not collect
cc @mruberry @kshitij12345 | triaged,module: special | low | Critical |
2,658,503,095 | TypeScript | `Intl.Locale.prototype.getTimeZones()` is missing from Intl library definitions | ### ⚙ Compilation target
ESNext
### ⚙ Library
ESNext
### Missing / Incorrect Definition
`Intl.Locale.prototype.getTimeZones()` doesn't exist in the `Intl` library definitions. It is supported in all browsers except Firefox.
### Sample Code
```TypeScript
const loc = new Intl.Locale("en-US");
const localeTimeZones = loc.getTimeZones();
```
### Documentation Link
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/Locale/getTimeZones
https://tc39.es/proposal-intl-locale-info/#sec-Intl.Locale.prototype.getTimeZones | Bug,Help Wanted,Domain: lib.d.ts | low | Minor |
2,658,552,665 | pytorch | `Tensor.prod` gives a jit warning | ### 🐛 Describe the bug
On a machine with cuda, run
```bash
python -c """import torch;torch.randn(10, device='cuda').prod()"""
```
which raises
```
<string>:1: UserWarning: No PYTORCH_KERNEL_CACHE_PATH or HOME environment variable set! This disables kernel caching. (Triggered internally at /pytorch/aten/src/ATen/native/cuda/jit_utils.cpp:1426.)
```
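A likely workaround (an assumption based on the warning text, which says the cache is disabled when *neither* variable is set) is to point `PYTORCH_KERNEL_CACHE_PATH` at a writable directory before importing torch:

```python
import os

# Hypothetical cache location; any writable directory should do.
os.environ.setdefault("PYTORCH_KERNEL_CACHE_PATH",
                      os.path.expanduser("~/.cache/torch/kernels"))

# import torch only after the environment variable is in place
```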
### Versions
2.6.0.dev20241113+cu124
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,658,555,108 | electron | You cannot use fetch(file://xxxx) files in web workers | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10 22H2
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
`fetch()` of a `file://` URL should work in a web worker.
### Actual Behavior

worker code
```js
;(async () => {
  await fetch('c:/Users/a/Desktop/TEST/t1/d3cdc838a18f53ad0e15e3795fbb51d9 (1).mp4').body
})()
```
This fails with `net:ERROR_UNKNOWN_URL_SCHEME`.
But it works normally if I'm not using it in a worker. I confirm that I have webSecurity enabled.
### Testcase Gist URL
https://gist.github.com/tetap/ccf2a1ca8c30d77958c829a1aa2f1209
### Additional Information
_No response_ | platform/windows,bug :beetle:,has-repro-gist,32-x-y | low | Critical |
2,658,614,698 | pytorch | [Feature Request] The `.forward()`/etc. shape API | [There is](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.forward) the `nn.Module.forward(...)` method. So, it’s “standardized”.
Each concrete `.forward(...)`'s documentation has its return shape (formula).
Is there an API, or a planned API, for a `.get_forward_shape(...)` method to get this programmatically?
_Originally posted at [the forum](https://discuss.pytorch.org/t/the-forward-etc-shape-api/212509)._
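As a stopgap, output shapes can often be computed from the documented formulas; a sketch for `Conv2d`'s spatial output size (`conv2d_out_size` is a hypothetical helper, not a PyTorch API):

```python
import math

def conv2d_out_size(size, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size per the Conv2d docs:
    floor((size + 2*padding - dilation*(kernel - 1) - 1) / stride + 1)."""
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

print(conv2d_out_size(32, kernel=3, stride=1, padding=1))  # prints 32: a "same" conv preserves spatial size
```

For real modules, running a `device='meta'` tensor through `forward` is another way to get shapes without allocating data, though that is a runtime trick rather than a documented shape API.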
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Minor |
2,658,635,153 | ui | [bug]: Typescript error in form component | ### Describe the bug
There are TypeScript errors within the form component.
Property `formState` does not exist in `useFormContext`.
Then, for the part
`const fieldState = getFieldState(fieldContext.name, formState)`
`getFieldState` expects no arguments.
Later, there is an error with the usage of `const { error, formItemId } = useFormField()`:
TS2339: Property `error` does not exist on type
`{ id: string; name: string; formItemId: string; formDescriptionId: string; formMessageId: string; }`
The TypeScript error breaks the build process for Next.js.
### Affected component/components
Form
### How to reproduce
Test form component within nextjs application.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
next build
▲ Next.js 14.2.4
- Environments: .env.local
Creating an optimized production build ...
⚠ For production Image Optimization with Next.js, the optional 'sharp' package is strongly recommended. Run 'npm i sharp', and Next.js will use it automatically for Image Optimization.
Read more: https://nextjs.org/docs/messages/sharp-missing-in-production
⚠ For production Image Optimization with Next.js, the optional 'sharp' package is strongly recommended. Run 'npm i sharp', and Next.js will use it automatically for Image Optimization.
Read more: https://nextjs.org/docs/messages/sharp-missing-in-production
✓ Compiled successfully
Linting and checking validity of types .Failed to compile.
./src/components/ui/form.tsx:45:26
Type error: Property 'formState' does not exist on type '{ getFieldState: () => {}; getValues: () => {}; watch: () => {}; setValue: () => {}; register: () => {}; }'.
43 | const fieldContext = React.useContext(FormFieldContext)
44 | const itemContext = React.useContext(FormItemContext)
> 45 | const { getFieldState, formState } = useFormContext()
| ^
46 |
47 | const fieldState = getFieldState(fieldContext.name, formState)
48 |
ELIFECYCLE Command failed with exit code 1.
```
### System Info
```bash
Linux Mint, nodejs v22.11.0, nextjs 14.2.4
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,658,673,181 | material-ui | Issue with Icons after selecting | ### Related page
https://mui.com/material-ui/material-icons
### Kind of issue
Unclear explanations
### Issue description
After copying any of the icons, the whole display gets stuck when we come back; please fix it.
### Context
After copying an icon, close the Dialog box and reset the state so that the user can select other icons.
**Search keywords**: Icons | bug 🐛,docs,package: icons,support: docs-feedback | low | Minor |
2,658,700,557 | PowerToys | Advanced Paste - Type out the Clipboard Content | ### Description of the new feature / enhancement
New Option in the "Advanced paste" menu to type out clipboard content.
### Scenario when this would be used?
Environments/Remote Connections that don't allow pasting from local machine.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,658,706,822 | pytorch | vmap with mps tensor fails to recognized batched shape | ### 🐛 Describe the bug
The following code where mps tensors are used in vmap raises an exception:
```python
import torch
device = "mps"
x = torch.randn(2, 150, device=device, requires_grad=True)
y = torch.randn(150, device=device, requires_grad=True)
torch.vmap(lambda x, y: torch.nn.functional.mse_loss(x, y), (0, None))(x, y)
```
Exception:
```
File "python3.10/site-packages/torch/nn/functional.py", line 3902, in mse_loss
return torch._C._nn.mse_loss(
RuntimeError: mse_loss_out_mps: target and input tensors must have identical shapes
```
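Until the MPS batching rule handles this, the same per-sample losses can be computed without `vmap`, since per-sample MSE is just a mean of squared differences over the feature dimension. A minimal pure-Python sketch of the equivalence (no torch required):

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # batch of 2 samples
y = [1.0, 1.0, 1.0]                      # shared target, as in the repro

# what vmap(mse, (0, None))(x, y) would return: one loss per sample
per_sample = [mse(row, y) for row in x]
print(per_sample)
```

In torch terms that is `((x - y) ** 2).mean(dim=-1)`, which sidesteps the batched `mse_loss` kernel entirely.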
### Versions
latest nightly
cc @zou3519 @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @Chillee @samdow @kshitij12345 | triaged,module: vmap,module: mps,module: functorch | low | Critical |
2,658,708,465 | godot | Misleading error on missing export template file | ### Tested versions
- 4.4.dev.custom_build [76fa7b291]
- 4.3-stable
- Shall be reproducible in any 4.x version
### System information
Godot v4.4.dev (76fa7b291) - Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 Ti (NVIDIA; 32.0.15.6603) - AMD Ryzen 9 3900X 12-Core Processor (24 threads)
### Issue description
When exporting a runnable project, if the template for the exported configuration is missing, but others are available, Godot fails with a generic _"Failed to copy export template"_ error message.
The following errors appear in console:
```
Failed to open
editor/export/editor_export_platform.h:182 - Prepare Template: Failed to copy export template.
```
This is somewhat confusing, as the user can't figure out straight away what's wrong, since _other_ templates are available and the button "Export Project..." is also clickable.
The editor then shows the error message:

### Steps to reproduce
1. Open the _Project->Export_ dialog and tick the _Runnable_ option
2. Specify a custom export template, for example, the _Release_ one
3. Press the _Export Project..._ button
4. Attempt to export with the _other_ configuration, in this example, _Debug_, by checking the related option in the export dialog
Providing a Debug template and exporting a Release configuration triggers the same error.

### Minimal reproduction project (MRP)
[missing_export_template_bug.zip](https://github.com/user-attachments/files/17748598/missing_export_template_bug.zip)
| bug,topic:editor,topic:export | low | Critical |
2,658,722,238 | rust | internal compiler error on 1.84.0-nightly (8adb4b30f 2024-11-13) | ```
thread 'rustc' panicked at compiler/rustc_middle/src/mir/interpret/queries.rs:105:13:
Box<dyn Any>
stack backtrace:
0: 0x7f51d7741fe5 - std::backtrace::Backtrace::create::h9b3de1e4dfc1f21c
1: 0x7f51d5b4ff75 - std::backtrace::Backtrace::force_capture::h694040df2c86ceb1
2: 0x7f51d4bee345 - std[765cb8723245af2b]::panicking::update_hook::<alloc[a932e0534ac38218]::boxed::Box<rustc_driver_impl[7eab4ea623a09f02]::install_ice_hook::{closure#0}>>::{closure#0}
3: 0x7f51d5b677e8 - std::panicking::rust_panic_with_hook::h2f0f6e532df4efd6
4: 0x7f51d4c28051 - std[765cb8723245af2b]::panicking::begin_panic::<rustc_errors[c2fc7bc1b0cd5a2e]::ExplicitBug>::{closure#0}
5: 0x7f51d4c1b026 - std[765cb8723245af2b]::sys::backtrace::__rust_end_short_backtrace::<std[765cb8723245af2b]::panicking::begin_panic<rustc_errors[c2fc7bc1b0cd5a2e]::ExplicitBug>::{closure#0}, !>
6: 0x7f51d4c16669 - std[765cb8723245af2b]::panicking::begin_panic::<rustc_errors[c2fc7bc1b0cd5a2e]::ExplicitBug>
7: 0x7f51d4c31da1 - <rustc_errors[c2fc7bc1b0cd5a2e]::diagnostic::BugAbort as rustc_errors[c2fc7bc1b0cd5a2e]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
8: 0x7f51d52b27d3 - rustc_middle[69f5a6c8c8dd810f]::util::bug::opt_span_bug_fmt::<rustc_span[7e5fda6d6044ef5d]::span_encoding::Span>::{closure#0}
9: 0x7f51d529901a - rustc_middle[69f5a6c8c8dd810f]::ty::context::tls::with_opt::<rustc_middle[69f5a6c8c8dd810f]::util::bug::opt_span_bug_fmt<rustc_span[7e5fda6d6044ef5d]::span_encoding::Span>::{closure#0}, !>::{closure#0}
10: 0x7f51d5298eab - rustc_middle[69f5a6c8c8dd810f]::ty::context::tls::with_context_opt::<rustc_middle[69f5a6c8c8dd810f]::ty::context::tls::with_opt<rustc_middle[69f5a6c8c8dd810f]::util::bug::opt_span_bug_fmt<rustc_span[7e5fda6d6044ef5d]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
11: 0x7f51d34e3940 - rustc_middle[69f5a6c8c8dd810f]::util::bug::bug_fmt
12: 0x7f51d7dd4ab5 - <rustc_middle[69f5a6c8c8dd810f]::ty::context::TyCtxt>::const_eval_resolve_for_typeck.cold
13: 0x7f51d6d8938d - rustc_trait_selection[d9fdd1a0336c780b]::traits::try_evaluate_const
14: 0x7f51d6cbcac3 - <rustc_trait_selection[d9fdd1a0336c780b]::traits::normalize::AssocTypeNormalizer as rustc_type_ir[2f0ec634f55d755f]::fold::TypeFolder<rustc_middle[69f5a6c8c8dd810f]::ty::context::TyCtxt>>::fold_const
15: 0x7f51d6cc1bcd - <rustc_middle[69f5a6c8c8dd810f]::ty::Ty as rustc_type_ir[2f0ec634f55d755f]::fold::TypeSuperFoldable<rustc_middle[69f5a6c8c8dd810f]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_trait_selection[d9fdd1a0336c780b]::traits::normalize::AssocTypeNormalizer>
16: 0x7f51d6cbe49e - <rustc_type_ir[2f0ec634f55d755f]::ty_kind::FnSig<rustc_middle[69f5a6c8c8dd810f]::ty::context::TyCtxt> as rustc_type_ir[2f0ec634f55d755f]::fold::TypeFoldable<rustc_middle[69f5a6c8c8dd810f]::ty::context::TyCtxt>>::try_fold_with::<rustc_trait_selection[d9fdd1a0336c780b]::traits::normalize::AssocTypeNormalizer>
17: 0x7f51d69ef560 - <rustc_hir_typeck[e3dd38b4cb6ae428]::method::confirm::ConfirmContext>::confirm
18: 0x7f51d6f14fae - <rustc_hir_typeck[e3dd38b4cb6ae428]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
19: 0x7f51d64e04c7 - <rustc_hir_typeck[e3dd38b4cb6ae428]::fn_ctxt::FnCtxt>::check_decl
20: 0x7f51d6f0eeb8 - <rustc_hir_typeck[e3dd38b4cb6ae428]::fn_ctxt::FnCtxt>::check_expr_block
21: 0x7f51d6f1423a - <rustc_hir_typeck[e3dd38b4cb6ae428]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
22: 0x7f51d64d0240 - rustc_hir_typeck[e3dd38b4cb6ae428]::check::check_fn
23: 0x7f51d64c5cb5 - rustc_hir_typeck[e3dd38b4cb6ae428]::typeck
24: 0x7f51d64c5653 - rustc_query_impl[31a007ddad05a48]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[31a007ddad05a48]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[69f5a6c8c8dd810f]::query::erase::Erased<[u8; 8usize]>>
25: 0x7f51d67fab68 - rustc_query_system[81196f4961d73ee8]::query::plumbing::try_execute_query::<rustc_query_impl[31a007ddad05a48]::DynamicConfig<rustc_query_system[81196f4961d73ee8]::query::caches::VecCache<rustc_span[7e5fda6d6044ef5d]::def_id::LocalDefId, rustc_middle[69f5a6c8c8dd810f]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[31a007ddad05a48]::plumbing::QueryCtxt, true>
26: 0x7f51d66853d4 - rustc_query_impl[31a007ddad05a48]::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
27: 0x7f51d67f6647 - <rustc_middle[69f5a6c8c8dd810f]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[54fbd18308a238b5]::check_crate::{closure#4}>::{closure#0}
28: 0x7f51d67f4619 - rustc_hir_analysis[54fbd18308a238b5]::check_crate
29: 0x7f51d681eb8a - rustc_interface[edb5f17ab32af0ca]::passes::run_required_analyses
30: 0x7f51d710f2de - rustc_interface[edb5f17ab32af0ca]::passes::analysis
31: 0x7f51d710f2af - rustc_query_impl[31a007ddad05a48]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[31a007ddad05a48]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[69f5a6c8c8dd810f]::query::erase::Erased<[u8; 1usize]>>
32: 0x7f51d736187a - rustc_query_system[81196f4961d73ee8]::query::plumbing::try_execute_query::<rustc_query_impl[31a007ddad05a48]::DynamicConfig<rustc_query_system[81196f4961d73ee8]::query::caches::SingleCache<rustc_middle[69f5a6c8c8dd810f]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[31a007ddad05a48]::plumbing::QueryCtxt, true>
33: 0x7f51d7361377 - rustc_query_impl[31a007ddad05a48]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
34: 0x7f51d71d93ba - rustc_interface[edb5f17ab32af0ca]::interface::run_compiler::<core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>, rustc_driver_impl[7eab4ea623a09f02]::run_compiler::{closure#0}>::{closure#1}
35: 0x7f51d72711d0 - std[765cb8723245af2b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[edb5f17ab32af0ca]::util::run_in_thread_with_globals<rustc_interface[edb5f17ab32af0ca]::util::run_in_thread_pool_with_globals<rustc_interface[edb5f17ab32af0ca]::interface::run_compiler<core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>, rustc_driver_impl[7eab4ea623a09f02]::run_compiler::{closure#0}>::{closure#1}, core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>>::{closure#0}, core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>>
36: 0x7f51d72715eb - <<std[765cb8723245af2b]::thread::Builder>::spawn_unchecked_<rustc_interface[edb5f17ab32af0ca]::util::run_in_thread_with_globals<rustc_interface[edb5f17ab32af0ca]::util::run_in_thread_pool_with_globals<rustc_interface[edb5f17ab32af0ca]::interface::run_compiler<core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>, rustc_driver_impl[7eab4ea623a09f02]::run_compiler::{closure#0}>::{closure#1}, core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>>::{closure#0}, core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ad6a825abd6232fe]::result::Result<(), rustc_span[7e5fda6d6044ef5d]::ErrorGuaranteed>>::{closure#1} as core[ad6a825abd6232fe]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
37: 0x7f51d72720b9 - std::sys::pal::unix::thread::Thread::new::thread_start::hdf9fef71b4d1e8bf
38: 0x7f51d14d1609 - start_thread
39: 0x7f51d13f6353 - clone
40: 0x0 - <unknown>
rustc version: 1.84.0-nightly (8adb4b30f 2024-11-13)
platform: x86_64-unknown-linux-gnu
query stack during panic:
#0 [typeck] type-checking `column_major::ifft_multithread::butterfly_direct_radix_2`
#1 [analysis] running analysis passes on this crate
end of query stack
``` | I-ICE,T-compiler,C-bug,S-needs-repro | low | Critical |
2,658,723,469 | deno | deno.lock impacts which version is used, despite explicit version import | We noticed (in this issue: https://github.com/honojs/middleware/issues/803) that when a dependency is listed several times with different versions in the `deno.lock` file, different versions are somehow used in different part of the code (see the error message of this comment: https://github.com/honojs/middleware/issues/803#issuecomment-2475694935). Even if an only explicit version is used in the code.
Reproduction example:
```ts
import { z } from 'npm:zod@3.23.8'
import { zValidator } from 'npm:@hono/zod-validator@0.4.1'
import { Hono } from 'npm:hono@4.6.10'
const app = new Hono().put(
'/posts',
zValidator('json', z.object({ id: z.number() })),
(c) => {
const json = c.req.valid('json')
return c.json({ status: 'ok' })
},
)
```
If we remove the `deno.lock` file, the type of the `json` variable is well inferred. If we have a `deno.lock` file with different versions, the type will be wrongly inferred to `never` because different versions will be used (see in this comment hono is used in both versions `4.6.8` and `4.6.10`).
I have noticed that if we only have one dependency in the `deno.lock` file, it works well:
```json
"specifiers": {
"npm:@hono/zod-validator@0.4.1": "0.4.1_hono@4.6.10_zod@3.23.8",
"npm:zod@3.23.8": "3.23.8",
"npm:hono@4.6.10": "4.6.10"
},
```
but if we have several times the dependency, it does not work:
```json
"specifiers": {
"npm:@hono/zod-validator@0.4.1": "0.4.1_hono@4.6.10_zod@3.23.8",
"npm:zod@3.23.8": "3.23.8",
"npm:hono@4.6.8": "4.6.8",
"npm:hono@4.6.10": "4.6.10"
},
```
```bash
$ deno --version
deno 2.0.6 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.13-rusty
typescript 5.6.2
``` | needs investigation | low | Critical |
2,658,743,889 | next.js | Pages router with middleware (on vercel) - 404 results returning 200 when prefetched | ### Link to the code that reproduces this issue
https://github.com/magicspon/next-link-test
### To Reproduce
Build and deploy the app to vercel.
Go to the URL
Click on the "Broken link" button.
Observe, the 404 page is not shown.
This only happens when running on vercel, it's fine on localhost.
It also only happens when there is middleware on the site, even though these urls aren't matched by the middleware
### Current vs. Expected behavior
When clicking on a link that 404's, I would expect to see the 404 page.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: 8.14.0
Relevant Packages:
next: 15.0.4-canary.11 // Latest available version is detected (15.0.4-canary.11).
eslint-config-next: 15.0.3
react: 19.0.0-rc-7ac8e612-20241113
react-dom: 19.0.0-rc-7ac8e612-20241113
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Middleware, Pages Router
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
I've tested against the latest v14 release, the latest 15 release and the latest 15 canary release.
https://github.com/vercel/next.js/issues/57207
https://github.com/vercel/next.js/issues/56222
| create-next-app,bug,Middleware,Pages Router | low | Critical |
2,658,744,420 | flutter | [flutter_svg] building example app produces some warnings | ### Steps to reproduce
1. clone flutter/packages
2. open `packages/third_party/packages/flutter_svg/example`
3. `flutter build web --wasm` (although the wasm option is probably not needed)
4. Observe the logs, which indicate that the example app still uses the old format and uses an incomplete setup for cupertino_icons
### Expected results
No warnings in the terminal
### Actual results
There are 2 warnings when building the example app:
1) The index.html uses an outdated template
2) The cupertino icons font is referenced, but not found (the example pubspec probably needs updating, or we remove uses of cupertino_icons entirely from the example)
```
Warning: In index.html:27: Manual service worker registration deprecated. Use flutter.js service worker bootstrapping instead. See
https://docs.flutter.dev/platform-integration/web/initialization for more details.
Expected to find fonts for (MaterialIcons, packages/cupertino_icons/CupertinoIcons), but found (MaterialIcons). This usually means you are
referring to font families in an IconData class but not including them in the assets section of your pubspec.yaml, are missing the package
that would include them, or are missing "uses-material-design: true".
```
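The second warning suggests the example's `pubspec.yaml` is missing the icon font declarations; a sketch of the likely fix (the version constraint is an assumption):

```yaml
dependencies:
  cupertino_icons: ^1.0.8  # referenced by the example but apparently not declared

flutter:
  uses-material-design: true
```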
### Logs
<details open><summary>Logs</summary>
```console
navaronbracke@MacBook-Pro-van-Navaron example % flutter build web --wasm
┌─ New feature ────────────────────────────────────────────────────────────────────────────┐
│ WebAssembly compilation is new. Understand the details before deploying to production. │
│ See https://flutter.dev/to/wasm for more information. │
└──────────────────────────────────────────────────────────────────────────────────────────┘
Warning: In index.html:27: Manual service worker registration deprecated. Use flutter.js service worker bootstrapping instead. See
https://docs.flutter.dev/platform-integration/web/initialization for more details.
Expected to find fonts for (MaterialIcons, packages/cupertino_icons/CupertinoIcons), but found (MaterialIcons). This usually means you are
referring to font families in an IconData class but not including them in the assets section of your pubspec.yaml, are missing the package
that would include them, or are missing "uses-material-design: true".
Font asset "MaterialIcons-Regular.otf" was tree-shaken, reducing it from 1645184 to 7692 bytes (99.5% reduction). Tree-shaking can be
disabled by providing the --no-tree-shake-icons flag when building your app.
Compiling lib/main.dart for the Web... 3.5s
✓ Built build/web
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-x64, locale
en-BE)
• Flutter version 3.24.3 on channel stable at
/Users/navaronbracke/Documents/flutter
• Upstream repository git@github.com:navaronbracke/flutter.git
• FLUTTER_GIT_URL = git@github.com:navaronbracke/flutter.git
• Framework revision 2663184aa7 (9 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/navaronbracke/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/navaronbracke/Library/Android/sdk
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 14.6.1 23G93 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.69
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| d: examples,package,has reproducible steps,P3,team-engine,triaged-engine,found in release: 3.24,found in release: 3.27,p: flutter_svg | low | Minor |
2,658,760,975 | next.js | Dynamic Sitemap throwing 404 error on Nextjs 15 | ### Link to the code that reproduces this issue
https://github.com/umair-mirza/pakcrunch-new/blob/main/src/app/sitemap.ts
### To Reproduce
I have created a dynamic sitemap for my Nextjs 15 app in the following path:
src > app > sitemap.ts
I have created the sitemap according to the example quoted in the Nextjs documentation:
https://nextjs.org/docs/app/api-reference/file-conventions/metadata/sitemap
Here's the code:
```
import prisma from "@/lib/prisma";
import type { MetadataRoute } from "next";
export const revalidate = 86400;
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
const data = await prisma.post.findMany({
select: {
slug: true,
},
});
const posts = data.map((item) => ({
url: `${process.env.NEXT_PUBLIC_PRODUCTION_URL}/posts/${item.slug}`,
lastModified: new Date(),
changeFrequency: "monthly" as "monthly",
priority: 0.5,
}));
return [
{
url: `${process.env.NEXT_PUBLIC_PRODUCTION_URL}/posts`,
lastModified: new Date(),
changeFrequency: "daily",
priority: 1,
},
{
url: `${process.env.NEXT_PUBLIC_PRODUCTION_URL}/terms-of-use`,
priority: 0.1,
},
{
url: `${process.env.NEXT_PUBLIC_PRODUCTION_URL}/contact-us`,
priority: 0.2,
},
...posts,
];
}
```
### Current vs. Expected behavior
I have created a dynamic sitemap for my Nextjs 15 app in the following path:
src > app > sitemap.ts
I have created the sitemap according to the example quoted in the Nextjs documentation:
https://nextjs.org/docs/app/api-reference/file-conventions/metadata/sitemap
However, when I visit the URL localhost:3000/sitemap.xml, it returns 404.
Same result on the production URL.

### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home
Available memory (MB): 16226
Available CPU cores: 8
Binaries:
Node: 20.16.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-rc.0 // Latest available version is detected (15.0.0-rc.0).
eslint-config-next: 15.0.0-rc.0
react: 19.0.0-rc-f994737d14-20240522
react-dom: 19.0.0-rc-f994737d14-20240522
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug | low | Critical |
2,658,773,444 | langchain | LangChain incorrectly applies strict (OpenAIs Structured Output) in schema generation from Pydantic model with union, causing OpenAI validation errors | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import getpass
import os
from typing import Union
from langchain_openai import ChatOpenAI
from openai import OpenAI
from pydantic import BaseModel, Field
class Cooked(BaseModel):
cooked_recipe_name: str = Field(
description='Recipe name.'
)
class Baked(BaseModel):
baked_recipe_name: str = Field(
description='Recipe name.'
)
class Recipes(BaseModel):
recipes: list[Union[Cooked, Baked]] = Field(
description='A list of recipes or tables representing the data sources.'
)
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
client = OpenAI()
model = "gpt-4o-2024-08-06"
recipe_names = "Garlic Butter Lobster Tail, Pumpkin Spice Muffins, Spicy Korean Beef Bulgogi, Raspberry Almond Scones"
completion = client.beta.chat.completions.parse(
model=model,
messages=[
{"role": "user", "content": recipe_names},
],
response_format=Recipes,
)
print("OpenAI:", completion.choices[0].message.parsed)
llm = ChatOpenAI(model=model)
llm = llm.with_structured_output(Recipes, strict=True)
response = llm.invoke(recipe_names)
print("Langchain:", response)
```
### Error Message and Stack Trace (if applicable)
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'Recipes': In context=('properties', 'recipes', 'items', 'anyOf', '0'), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[0].function.parameters', 'code': 'invalid_function_parameters'}}
### Description
The problem is that LangChain incorrectly applies the strict keyword to the schema generated from the Pydantic model for the specific case in the example code above. The culprit here is the union. The same Pydantic class works when I feed it directly to OpenAI. When I injected 'additionalProperties': False into the schema for the `Baked` and `Cooked` classes, the issue disappeared.
I encountered the same problem when I had a Pydantic tool schema with a union similar to the one above.
OpenAI only threw an error when `cooked_recipe_name` and `baked_recipe_name` were both named `recipe_name`, so I guess that is something to be careful about.
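For reference, the injection workaround described above can be sketched as a small schema post-processing step. This is a hypothetical stdlib-only helper, not a LangChain or OpenAI API; it walks the generated JSON schema and adds the `additionalProperties: False` that strict mode requires on every object node, including the nested `anyOf` members:

```python
def enforce_no_additional_props(schema):
    """Recursively add 'additionalProperties': False to every object schema,
    which OpenAI strict mode requires even for nested anyOf union members."""
    if isinstance(schema, dict):
        if schema.get("type") == "object":
            schema.setdefault("additionalProperties", False)
        for value in schema.values():
            enforce_no_additional_props(value)
    elif isinstance(schema, list):
        for item in schema:
            enforce_no_additional_props(item)
    return schema

# Minimal shape mirroring the Recipes schema: the anyOf member gets the missing key.
fixed = enforce_no_additional_props(
    {"type": "object",
     "properties": {"recipes": {"type": "array",
                                "items": {"anyOf": [{"type": "object", "properties": {}}]}}}}
)
print(fixed["properties"]["recipes"]["items"]["anyOf"][0]["additionalProperties"])  # False
```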
OpenAI schema before calling API:
```
{'$defs': {'Baked': {'properties': {'baked_recipe_name': {'description': 'Recipe name.', 'title': 'Baked Recipe Name', 'type': 'string'}}, 'required': ['baked_recipe_name'], 'title': 'Baked', 'type': 'object', 'additionalProperties': False}, 'Cooked': {'properties': {'cooked_recipe_name': {'description': 'Recipe name.', 'title': 'Cooked Recipe Name', 'type': 'string'}}, 'required': ['cooked_recipe_name'], 'title': 'Cooked', 'type': 'object', 'additionalProperties': False}}, 'properties': {'recipes': {'description': 'A list of recipes or tables representing the data sources.', 'items': {'anyOf': [{'$ref': '#/$defs/Cooked'}, {'$ref': '#/$defs/Baked'}]}, 'title': 'Recipes', 'type': 'array'}}, 'required': ['recipes'], 'title': 'Recipes', 'type': 'object', 'additionalProperties': False}
```
LangChain schema before calling API:
```
{'name': 'Recipes', 'description': '', 'parameters': {'properties': {'recipes': {'description': 'A list of recipes or tables representing the data sources.', 'items': {'anyOf': [{'properties': {'cooked_recipe_name': {'description': 'Recipe name.', 'title': 'Cooked Recipe Name', 'type': 'string'}}, 'required': ['cooked_recipe_name'], 'title': 'Cooked', 'type': 'object'}, {'properties': {'baked_recipe_name': {'description': 'Recipe name.', 'title': 'Baked Recipe Name', 'type': 'string'}}, 'required': ['baked_recipe_name'], 'title': 'Baked', 'type': 'object'}]}, 'type': 'array'}}, 'required': ['recipes'], 'type': 'object', 'additionalProperties': False}, 'strict': True}
```
### System Info
aiohappyeyeballs==2.4.3
aiohttp==3.11.0
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.2.post1
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.4.0
distro==1.9.0
exceptiongroup==1.2.2
frozenlist==1.5.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httpx==0.27.2
idna==3.10
jiter==0.7.1
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.7
langchain-core==0.3.18
langchain-openai==0.2.8
langchain-text-splitters==0.3.2
langsmith==0.1.143
multidict==6.1.0
numpy==1.26.4
openai==1.54.4
orjson==3.10.11
packaging==24.2
propcache==0.2.0
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.36
tenacity==9.0.0
tiktoken==0.8.0
tqdm==4.67.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.17.1 | 🤖:bug,investigate | low | Critical |
2,658,784,395 | transformers | Better error message when loading adapter models with peft dependency missing | ### Feature request
Loading adapter models (such as https://huggingface.co/lightonai/MonoQwen2-VL-v0.1/tree/main) fails with an error message when peft isn't installed. The error message
`OSError: lightonai/MonoQwen2-VL-v0.1 does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.`
is a bit cryptic and requires the user to understand that
- the model that will be loaded is a peft adapter
- peft isn't installed in the current env
To improve UX, it would be useful to show a different error message, such as: "The model lightonai/MonoQwen2-VL-v0.1 is an adapter model. To load it, you need to install peft (hint: run `pip install peft`)."
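A sketch of what the improved error path could look like. The helper name and signature are hypothetical, not the actual `transformers` loading code; real code could detect peft with `importlib.util.find_spec("peft")`, which is passed in here as a parameter to keep the sketch deterministic:

```python
def missing_weights_message(repo_id: str, has_adapter_config: bool, peft_available: bool) -> str:
    """Return a friendlier error when the repo is a peft adapter and peft is missing."""
    if has_adapter_config and not peft_available:
        # The repo only ships adapter weights: point the user at the real fix.
        return (f"The model {repo_id} is an adapter model. To load it, "
                "you need to install peft (hint: run `pip install peft`).")
    # Fall back to the current generic message.
    return (f"{repo_id} does not appear to have a file named pytorch_model.bin, "
            "model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.")

print(missing_weights_message("lightonai/MonoQwen2-VL-v0.1",
                              has_adapter_config=True, peft_available=False))
```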
### Motivation
Improve UX. The user may get the impression that the model repository is corrupted.
### Your contribution
This feature should probably be implemented by core maintainers that are familiar with the internals of the model loading code. | Feature request,PEFT | low | Critical |
2,658,859,418 | pytorch | MPS random number generation is slow, if not hanging forever | ### 🐛 Describe the bug
The following benchmarks are at least 3x, if not 10x, slower on MPS than on CPU on a recent MacBook Pro M3:
```python
from torch.utils.benchmark import Timer
import torch
print(Timer("torch.randint(1_000_000, (500,), device='cpu')", globals=globals()).adaptive_autorange())
print(Timer("torch.randint(1_000_000, (500,), device='mps')", globals=globals()).adaptive_autorange(max_run_time=10))
print(Timer("torch.randn((500,), device='cpu')", globals=globals()).adaptive_autorange())
print(Timer("torch.randn((500,), device='mps')", globals=globals()).adaptive_autorange(max_run_time=10))
print(Timer("torch.rand((500,), device='cpu')", globals=globals()).adaptive_autorange())
print(Timer("torch.rand((500,), device='mps')", globals=globals()).adaptive_autorange(max_run_time=10))
```
Results
```
<torch.utils.benchmark.utils.common.Measurement object at 0x113305720>
torch.randint(1_000_000, (500,), device='cpu')
Median: 1.95 us
IQR: 0.02 us (1.94 to 1.97)
6 measurements, 1000 runs per measurement, 1 thread
<torch.utils.benchmark.utils.common.Measurement object at 0x113306320>
torch.randint(1_000_000, (500,), device='mps')
Median: 19.48 us
IQR: 1.95 us (19.25 to 21.20)
2094 measurements, 1 runs per measurement, 1 thread
<torch.utils.benchmark.utils.common.Measurement object at 0x113306080>
torch.randn((500,), device='cpu')
Median: 4.96 us
IQR: 0.17 us (4.88 to 5.04)
1999 measurements, 1 runs per measurement, 1 thread
<torch.utils.benchmark.utils.common.Measurement object at 0x113304eb0>
torch.randn((500,), device='mps')
Median: 15.87 us
IQR: 1.17 us (15.67 to 16.83)
598 measurements, 1 runs per measurement, 1 thread
<torch.utils.benchmark.utils.common.Measurement object at 0x113305000>
torch.rand((500,), device='cpu')
Median: 2.12 us
IQR: 0.13 us (2.04 to 2.17)
4473 measurements, 1 runs per measurement, 1 thread
<torch.utils.benchmark.utils.common.Measurement object at 0x1133060e0>
torch.rand((500,), device='mps')
Median: 16.44 us
IQR: 1.00 us (16.21 to 17.21)
582 measurements, 1 runs per measurement, 1 thread
```
Worse, running this script a couple of times usually leaves the shell hanging idle, even with `max_run_time` set!
### Versions
latest nightly
cc @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: performance,triaged,module: mps | low | Critical |
2,658,869,770 | react | [Feature Request] Warn for arrays resulting of `.map()` or `.filter()` in hook dependencies | ESLint plugin react hooks version: 4.6.0
## Steps To Reproduce
```tsx
const MyComponent = () => {
const array = useMemo(() => ["banana", "apple"], []);
const filteredArray = array.filter((element) => element !== "apple");
useEffect(() => {
console.log("If this is printed, array has changed");
}, [array]);
useEffect(() => {
console.log("If this is printed, filteredArray has changed");
}, [filteredArray]);
const [counter, setCounter] = useState(0);
return <button onClick={() => setCounter(counter + 1)}>Re-render</button>;
};
```
## Feature request
The exhaustive-deps rule already warns for functions declared in the scope of a component, which will be re-declared at each render and thus be a problem if put inside the dependency array of a `useEffect`, `useCallback` or `useMemo`. I recently had a problem when applying the `.filter()` method to a memoized array, since the resulting array is a new object at each render.
I was wondering if it would be possible to warn for this kind of issues as well, the same way functions declared inside the component are currently warned.
I don't have an exhaustive list of such methods (those that return a new object from a given one) that should be warned about, other than `.filter()` and `.map()`; perhaps people in the comments can propose others?
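The underlying identity problem is plain JavaScript, not React-specific: every `.filter()` or `.map()` call returns a fresh array object, so a reference-based dependency comparison (as hooks use) sees a change on every render. A minimal standalone demonstration:

```javascript
const array = ["banana", "apple"];
const a = array.filter((element) => element !== "apple");
const b = array.filter((element) => element !== "apple");

// Same contents, but a different object on every call; this is exactly what an
// Object.is-based dependency comparison would flag as "changed" each render.
console.log(a.length === b.length && a[0] === b[0]); // true: same contents
console.log(a === b); // false: fresh array object per call
```

The fix the rule could suggest mirrors the existing advice for functions: wrap the derivation in `useMemo(() => array.filter(...), [array])` so its identity is stable across renders.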
## Where to do this
I have dived a little bit in the code of the package and found the `scanForConstructions` function ([here](https://github.com/facebook/react/blob/380f5d675d2269f090d15c3f92e10de66e12516c/packages/eslint-plugin-react-hooks/src/ExhaustiveDeps.js#L1584)) that seems to be the one that checks for the functions. If this is the way to do it, I could try to implement this other behaviour in this same function.
| Status: Unconfirmed | low | Minor |
2,658,893,626 | go | x/net/http2: several benchmarks crash | ### Go version
go version go1.22.8 darwin/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='auto'
GOARCH='amd64'
GOBIN=''
GOCACHE='/tmp/.gocache'
GOENV='/Users/rittneje/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/rittneje/go/pkg/mod'
GONOPROXY='[REDACTED]'
GONOSUMDB='[REDACTED]'
GOOS='darwin'
GOPATH='/Users/rittneje/go'
GOPRIVATE='[REDACTED]'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/rittneje/go1.22.8'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/Users/rittneje/go1.22.8/pkg/tool/darwin_amd64'
GOVCS='[REDACTED]'
GOVERSION='go1.22.8'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/rittneje/golang.org_x_net/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/kf/kr7_s3xx0l12zbj3jrn082hmzy5gvy/T/go-build1111184570=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Tried to run any benchmark from golang.org/x/net/http2 that uses `newServerTesterWithRealConn`.
1. `BenchmarkServerGets`
2. `BenchmarkServerPosts`
3. `BenchmarkServerToClientStreamDefaultOptions`
4. `BenchmarkServerToClientStreamReuseFrames`
5. `BenchmarkServer_GetRequest`
6. `BenchmarkServer_PostRequest`
### What did you see happen?
They all crash, because `newServerTesterWithRealConn` neglects to initialize the `serverTester`'s `group` field.
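The failure mode is easy to reproduce in isolation: `(*synctestGroup).idle` dereferences its receiver, so calling it through the never-initialized `group` field panics. A minimal sketch (the type name here is illustrative, not the real `serverTester`):

```go
package main

import "fmt"

type groupSketch struct{ active int }

// idle dereferences g, so a nil receiver triggers the same
// "invalid memory address or nil pointer dereference" seen in the benchmarks.
func (g *groupSketch) idle() bool { return g.active == 0 }

func main() {
	var uninitialized *groupSketch // like serverTester.group left unset by newServerTesterWithRealConn
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("panic:", r)
		}
	}()
	_ = uninitialized.idle()
}
```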
```
$ go test -run=^$ -bench=BenchmarkServerGets ./http2
goos: darwin
goarch: amd64
pkg: golang.org/x/net/http2
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xd4c36c7]
goroutine 28 [running]:
golang.org/x/net/http2.(*synctestGroup).idle(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:84 +0x47
golang.org/x/net/http2.(*synctestGroup).Wait(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:75 +0x2f
golang.org/x/net/http2.(*serverTester).sync(...)
/Users/rittneje/golang.org_x_net/http2/server_test.go:336
golang.org/x/net/http2.(*serverTester).greetAndCheckSettings(0xc0001703c0, 0xd66f888)
/Users/rittneje/golang.org_x_net/http2/server_test.go:440 +0x85
golang.org/x/net/http2.(*serverTester).greet(0xc0001703c0)
/Users/rittneje/golang.org_x_net/http2/server_test.go:433 +0x35
golang.org/x/net/http2.BenchmarkServerGets(0xc000166c88)
/Users/rittneje/golang.org_x_net/http2/server_test.go:2884 +0xae
testing.(*B).runN(0xc000166c88, 0x1)
/Users/rittneje/go1.22.8/src/testing/benchmark.go:193 +0xf8
testing.(*B).run1.func1()
/Users/rittneje/go1.22.8/src/testing/benchmark.go:215 +0x4e
created by testing.(*B).run1 in goroutine 1
/Users/rittneje/go1.22.8/src/testing/benchmark.go:208 +0x90
exit status 2
FAIL golang.org/x/net/http2 0.672s
FAIL
```
```
$ go test -run=^$ -bench=BenchmarkServerPosts ./http2
goos: darwin
goarch: amd64
pkg: golang.org/x/net/http2
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkServerPosts-16 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x18ea6c7]
goroutine 5 [running]:
golang.org/x/net/http2.(*synctestGroup).idle(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:84 +0x47
golang.org/x/net/http2.(*synctestGroup).Wait(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:75 +0x2f
golang.org/x/net/http2.(*serverTester).sync(...)
/Users/rittneje/golang.org_x_net/http2/server_test.go:336
golang.org/x/net/http2.(*serverTester).greetAndCheckSettings(0xc0001703c0, 0x1a96888)
/Users/rittneje/golang.org_x_net/http2/server_test.go:440 +0x85
golang.org/x/net/http2.(*serverTester).greet(0xc0001703c0)
/Users/rittneje/golang.org_x_net/http2/server_test.go:433 +0x35
golang.org/x/net/http2.BenchmarkServerPosts(0xc000166c88)
/Users/rittneje/golang.org_x_net/http2/server_test.go:2926 +0xeb
testing.(*B).runN(0xc000166c88, 0x1)
/Users/rittneje/go1.22.8/src/testing/benchmark.go:193 +0xf8
testing.(*B).run1.func1()
/Users/rittneje/go1.22.8/src/testing/benchmark.go:215 +0x4e
created by testing.(*B).run1 in goroutine 1
/Users/rittneje/go1.22.8/src/testing/benchmark.go:208 +0x90
exit status 2
FAIL golang.org/x/net/http2 0.287s
FAIL
```
```
$ go test -run=^$ -bench=BenchmarkServer_GetRequest ./http2
goos: darwin
goarch: amd64
pkg: golang.org/x/net/http2
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkServer_GetRequest-16 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xd58c6c7]
goroutine 12 [running]:
golang.org/x/net/http2.(*synctestGroup).idle(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:84 +0x47
golang.org/x/net/http2.(*synctestGroup).Wait(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:75 +0x2f
golang.org/x/net/http2.(*serverTester).sync(...)
/Users/rittneje/golang.org_x_net/http2/server_test.go:336
golang.org/x/net/http2.(*serverTester).greetAndCheckSettings(0xc0000f63c0, 0xd738888)
/Users/rittneje/golang.org_x_net/http2/server_test.go:440 +0x85
golang.org/x/net/http2.(*serverTester).greet(0xc0000f63c0)
/Users/rittneje/golang.org_x_net/http2/server_test.go:433 +0x35
golang.org/x/net/http2.BenchmarkServer_GetRequest(0xc0000eec88)
/Users/rittneje/golang.org_x_net/http2/server_test.go:3278 +0xeb
testing.(*B).runN(0xc0000eec88, 0x1)
/Users/rittneje/go1.22.8/src/testing/benchmark.go:193 +0xf8
testing.(*B).run1.func1()
/Users/rittneje/go1.22.8/src/testing/benchmark.go:215 +0x4e
created by testing.(*B).run1 in goroutine 1
/Users/rittneje/go1.22.8/src/testing/benchmark.go:208 +0x90
exit status 2
FAIL golang.org/x/net/http2 0.300s
FAIL
```
```
$ go test -run=^$ -bench=BenchmarkServer_PostRequest ./http2
goos: darwin
goarch: amd64
pkg: golang.org/x/net/http2
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkServer_PostRequest-16 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5f4d6c7]
goroutine 26 [running]:
golang.org/x/net/http2.(*synctestGroup).idle(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:84 +0x47
golang.org/x/net/http2.(*synctestGroup).Wait(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:75 +0x2f
golang.org/x/net/http2.(*serverTester).sync(...)
/Users/rittneje/golang.org_x_net/http2/server_test.go:336
golang.org/x/net/http2.(*serverTester).greetAndCheckSettings(0xc0001703c0, 0x60f9888)
/Users/rittneje/golang.org_x_net/http2/server_test.go:440 +0x85
golang.org/x/net/http2.(*serverTester).greet(0xc0001703c0)
/Users/rittneje/golang.org_x_net/http2/server_test.go:433 +0x35
golang.org/x/net/http2.BenchmarkServer_PostRequest(0xc000166c88)
/Users/rittneje/golang.org_x_net/http2/server_test.go:3315 +0xeb
testing.(*B).runN(0xc000166c88, 0x1)
/Users/rittneje/go1.22.8/src/testing/benchmark.go:193 +0xf8
testing.(*B).run1.func1()
/Users/rittneje/go1.22.8/src/testing/benchmark.go:215 +0x4e
created by testing.(*B).run1 in goroutine 1
/Users/rittneje/go1.22.8/src/testing/benchmark.go:208 +0x90
exit status 2
FAIL golang.org/x/net/http2 0.304s
FAIL
```
```
$ go test -run=^$ -bench=BenchmarkServerToClientStreamDefaultOptions ./http2
goos: darwin
goarch: amd64
pkg: golang.org/x/net/http2
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkServerToClientStreamDefaultOptions-16 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x228d6c7]
goroutine 50 [running]:
golang.org/x/net/http2.(*synctestGroup).idle(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:84 +0x47
golang.org/x/net/http2.(*synctestGroup).Wait(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:75 +0x2f
golang.org/x/net/http2.(*serverTester).sync(...)
/Users/rittneje/golang.org_x_net/http2/server_test.go:336
golang.org/x/net/http2.(*serverTester).greetAndCheckSettings(0xc0001e83c0, 0x2439888)
/Users/rittneje/golang.org_x_net/http2/server_test.go:440 +0x85
golang.org/x/net/http2.(*serverTester).greet(0xc0001e83c0)
/Users/rittneje/golang.org_x_net/http2/server_test.go:433 +0x35
golang.org/x/net/http2.benchmarkServerToClientStream(0xc0001dec88, {0x0, 0x0, 0x0})
/Users/rittneje/golang.org_x_net/http2/server_test.go:2997 +0x11c
golang.org/x/net/http2.BenchmarkServerToClientStreamDefaultOptions(0xc0001dec88?)
/Users/rittneje/golang.org_x_net/http2/server_test.go:2958 +0x1a
testing.(*B).runN(0xc0001dec88, 0x1)
/Users/rittneje/go1.22.8/src/testing/benchmark.go:193 +0xf8
testing.(*B).run1.func1()
/Users/rittneje/go1.22.8/src/testing/benchmark.go:215 +0x4e
created by testing.(*B).run1 in goroutine 1
/Users/rittneje/go1.22.8/src/testing/benchmark.go:208 +0x90
exit status 2
FAIL golang.org/x/net/http2 0.316s
FAIL
```
```
$ go test -run=^$ -bench=BenchmarkServerToClientStreamReuseFrames ./http2
goos: darwin
goarch: amd64
pkg: golang.org/x/net/http2
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkServerToClientStreamReuseFrames-16 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xe1016c7]
goroutine 14 [running]:
golang.org/x/net/http2.(*synctestGroup).idle(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:84 +0x47
golang.org/x/net/http2.(*synctestGroup).Wait(0x0)
/Users/rittneje/golang.org_x_net/http2/sync_test.go:75 +0x2f
golang.org/x/net/http2.(*serverTester).sync(...)
/Users/rittneje/golang.org_x_net/http2/server_test.go:336
golang.org/x/net/http2.(*serverTester).greetAndCheckSettings(0xc0000f63c0, 0xe2ad888)
/Users/rittneje/golang.org_x_net/http2/server_test.go:440 +0x85
golang.org/x/net/http2.(*serverTester).greet(0xc0000f63c0)
/Users/rittneje/golang.org_x_net/http2/server_test.go:433 +0x35
golang.org/x/net/http2.benchmarkServerToClientStream(0xc0000eec88, {0xc000093f30, 0x1, 0x1})
/Users/rittneje/golang.org_x_net/http2/server_test.go:2997 +0x11c
golang.org/x/net/http2.BenchmarkServerToClientStreamReuseFrames(0xc0000eec88)
/Users/rittneje/golang.org_x_net/http2/server_test.go:2964 +0x57
testing.(*B).runN(0xc0000eec88, 0x1)
/Users/rittneje/go1.22.8/src/testing/benchmark.go:193 +0xf8
testing.(*B).run1.func1()
/Users/rittneje/go1.22.8/src/testing/benchmark.go:215 +0x4e
created by testing.(*B).run1 in goroutine 1
/Users/rittneje/go1.22.8/src/testing/benchmark.go:208 +0x90
exit status 2
FAIL golang.org/x/net/http2 0.316s
FAIL
```
### What did you expect to see?
They should all work, or be removed. | NeedsInvestigation | low | Critical |
2,658,901,843 | puppeteer | [Feature]: Easier automation of scrolling by touch | ### Minimal, reproducible example
I want to imitate a real finger scroll up/down in Puppeteer. I use the mobile device configuration in Puppeteer.
I don't know which event it is in the browser, possibly `wheel`? So I tried this:
### Background
```js
const puppeteer = require('puppeteer-extra')
var {KnownDevices} = require('puppeteer');
const iPhone = KnownDevices['iPhone 15 Pro'];

async function r() {
  var browserArray = []
  let browser = await puppeteer.launch({
    headless: false,
    defaultViewport: {
      width: 375,
      height: 667,
      isMobile: true,
    }
  });
  const page = await browser.newPage();
  await page.emulate(iPhone);

  async function wheeler() {
    await page.goto('https://www.google.com/search?q=facebook', {timeout: 0, waitUntil: 'domcontentloaded'});
    await page.mouse.wheel({deltaY: -30});
  }
  wheeler();
}
r();
```
### Expectation
This doesn't do anything. I could use <a href="https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollBy">scrollBy</a> with Puppeteer, but I want to imitate a real finger-press scroll event by a human.
### Reality
As an example, for the mouse wheel in the browser, I can listen to the event like this:
```js
document.body.addEventListener("wheel", (e) => {
  console.log("deltaY: " + e.deltaY)
});
```
How do I do the same on a mobile browser with a finger-press scroll up and down?
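One avenue worth exposing more directly: the raw Chrome DevTools Protocol command `Input.synthesizeScrollGesture` synthesizes a scroll from real touch events. A sketch of a parameter builder; the helper is hypothetical, but the parameter names (`yDistance`, `speed`, `gestureSourceType`) are CDP's own:

```javascript
// Build params for CDP's Input.synthesizeScrollGesture.
// In CDP, a positive yDistance scrolls up, so a positive `distance`
// here (negated below) scrolls the page content down like a finger swipe.
function touchScrollParams({ x, y, distance, speed = 800 }) {
  return { x, y, yDistance: -distance, speed, gestureSourceType: "touch" };
}

// Intended Puppeteer usage (assumes a live `page`):
//   const client = await page.createCDPSession();
//   await client.send("Input.synthesizeScrollGesture",
//                     touchScrollParams({ x: 200, y: 400, distance: 300 }));
console.log(touchScrollParams({ x: 200, y: 400, distance: 300 }));
```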
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
latest
### Node version
latest
### Package manager
npm
### Package manager version
latest
### Operating system
Windows | feature,P3 | low | Major |
2,658,908,426 | node | test_runner: `t.after` should respect `the first-in-last-out` principle like Golang's defer | ### Version
v22.10.0
### Platform
_No response_
### Subsystem
_No response_
### What steps will reproduce the bug?
### 1. Create a test file
```js
import fs from 'node:fs'
import path from 'node:path'
import test from 'node:test'
test('basic', async (t) => {
const testDir = path.join(import.meta.dirname, 'logs')
fs.mkdirSync(testDir, { recursive: true })
t.after(() => {
console.log('remove test dir')
fs.rmdirSync(testDir, { recursive: true })
})
fs.writeFileSync(path.join(testDir, 'test.log'), 'hello world!')
t.after(() => {
console.log('remove test file')
fs.unlinkSync(path.join(testDir, 'test.log'))
})
  // do stuff...
})
```
### How often does it reproduce? Is there a required condition?
None
### What is the expected behavior? Why is that the expected behavior?
`t.after` should follow the first-in, last-out principle.
According to the code above, the file should be deleted first, then the directory.
### What do you see instead?
```
✖ basic (7.4021ms)
Error: ENOENT: no such file or directory, unlink 'project\folder\logs\test.log'
at Object.unlinkSync (node:fs:1871:11)
at TestContext.<anonymous> (file:///path/to/test.test.mjs:19:12)
at TestHook.runInAsyncScope (node:async_hooks:211:14)
at TestHook.run (node:internal/test_runner/test:934:25)
at TestHook.run (node:internal/test_runner/test:1225:18)
at TestHook.run (node:internal/util:543:20)
at node:internal/test_runner/test:853:20
at async Test.runHook (node:internal/test_runner/test:851:7)
at async after (node:internal/test_runner/test:893:9)
at async Test.run (node:internal/test_runner/test:942:7) {
```
### Additional information
The `first-in-last-out` principle is more reasonable and practical. It is useful in many scenarios.
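The ordering asked for is just a stack of cleanups, as in Go's `defer`. A minimal sketch of the semantics (a hypothetical helper, not `node:test` internals):

```javascript
// Cleanups registered first run last, so "remove file" precedes "remove dir".
function makeCleanupStack() {
  const fns = [];
  return {
    after(fn) { fns.push(fn); },
    runAll() {
      while (fns.length > 0) fns.pop()(); // pop: last registered runs first
    },
  };
}

const order = [];
const t = makeCleanupStack();
t.after(() => order.push("remove test dir"));  // registered first, runs last
t.after(() => order.push("remove test file")); // registered last, runs first
t.runAll();
console.log(order); // [ 'remove test file', 'remove test dir' ]
```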
I'm not sure why it was designed in the form of a queue. Is there anything special about it? | feature request,test_runner | low | Critical |
2,658,924,495 | next.js | querystring is not working to alternates canonical and languages in generateMetadata | ### Link to the code that reproduces this issue
https://github.com/richg0ld/generate-metadata-bug
### To Reproduce
In Next.js 15, I am trying to use the query string in generateMetadata's alternates.canonical and alternates.languages, but it is not being applied.
How can I ensure that the query string is not removed?
The sample code is as follows.
```
export async function generateMetadata(): Promise<Metadata> {
return {
metadataBase: new URL('http://localhost:3000'),
title: t('title'),
alternates: {
canonical: '/',
languages: {
'x-default': '/',
en: '/?hl=en_US',
ko: '/?hl=ko_KR',
},
},
};
}
```
### Current vs. Expected behavior
The expected behavior is as follows,
```
<link rel="alternate" hreflang="x-default" href="http://localhost:3000">
<link rel="alternate" hreflang="en" href="http://localhost:3000/?hl=en_US">
<link rel="alternate" hreflang="ko" href="http://localhost:3000/?hl=ko_KR">
```
However, the result is as follows,
```
<link rel="alternate" hreflang="x-default" href="http://localhost:3000">
<link rel="alternate" hreflang="en" href="http://localhost:3000">
<link rel="alternate" hreflang="ko" href="http://localhost:3000">
```
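Note that the stripping is not coming from WHATWG URL resolution itself: resolving a relative URL with a query against `metadataBase` preserves the query string, so the drop appears to happen inside Next's metadata resolver (an observation, not a confirmed root cause):

```javascript
const metadataBase = new URL("http://localhost:3000");

// Standard WHATWG URL resolution keeps the query string intact.
const en = new URL("/?hl=en_US", metadataBase).toString();
const ko = new URL("/?hl=ko_KR", metadataBase).toString();
console.log(en); // http://localhost:3000/?hl=en_US
console.log(ko); // http://localhost:3000/?hl=ko_KR
```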
### Provide environment information
```bash
nextjs:15.0.3
reactjs:19.0.0-rc
react-dom:19.0.0-rc
```
### Which area(s) are affected? (Select all that apply)
Metadata
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Other (Deployed)
### Additional context
It would be helpful to have an 'other' option for link tags, similar to the 'other' option for handling meta tags. For example, having a method like this would be useful.
```
...,
other: [
  {
    hreflang: "en",
    rel: "alternate",
    href: "/?hl=en_US",
  },
...
]
```
| bug,Metadata | low | Critical |
2,658,939,006 | pytorch | mutating a buffer to change its requires_grad-ness can cause RuntimeError | repro (from @fmassa):
```
from itertools import chain
from typing import Any, Dict, Optional, Union
import torch
from torch import nn
from torch.nn import functional as F
import os
os.environ["FI_EFA_SET_CUDA_SYNC_MEMOPS"] = "0"
os.environ["TORCHINDUCTOR_COMPILE_THREADS"] = "10"
def _parse_slurm_node_list(s: str):
import re
nodes = []
# Extract "hostname", "hostname[1-2,3,4-5]," substrings
p = re.compile(r"(([^\[]+)(?:\[([^\]]+)\])?),?")
for m in p.finditer(s):
prefix, suffixes = s[m.start(2) : m.end(2)], s[m.start(3) : m.end(3)]
for suffix in suffixes.split(","):
span = suffix.split("-")
if len(span) == 1:
nodes.append(prefix + suffix)
else:
width = len(span[0])
start, end = int(span[0]), int(span[1]) + 1
nodes.extend([prefix + f"{i:0{width}}" for i in range(start, end)])
return nodes
IS_DIST = True
if os.environ.get("RANK", ""):
rank = int(os.environ.get("RANK"))
local_rank = rank
world_size = int(os.environ['WORLD_SIZE'])
elif 'SLURM_PROCID' in os.environ:
rank = int(os.environ['SLURM_PROCID'])
local_rank = int(os.environ["SLURM_LOCALID"])
world_size = int(os.environ.get("SLURM_NTASKS"))
os.environ["MASTER_ADDR"] = _parse_slurm_node_list(os.environ["SLURM_JOB_NODELIST"])[0]
os.environ["MASTER_PORT"] = "24189"
else:
IS_DIST = False
local_rank = 0
if IS_DIST:
pg_options = torch.distributed.ProcessGroupNCCL.Options(is_high_priority_stream=True)
torch.distributed.init_process_group(backend="nccl", init_method="env://",
world_size=world_size, rank=rank, pg_options=pg_options)
torch.distributed.barrier()
torch.cuda.set_device(local_rank)
def get_tensor_model_parallel_group():
return torch.distributed.distributed_c10d._get_default_group()
def get_tensor_model_parallel_world_size():
return torch.distributed.get_world_size(group=get_tensor_model_parallel_group())
def get_tensor_model_parallel_group_name():
return get_tensor_model_parallel_group().group_name
def reduce_gradient_across_mp_group(grad):
torch.distributed.all_reduce(grad, group=get_tensor_model_parallel_group())
return grad
def init_with_mp_rank_seed(weight, init_method):
init_method(weight)
def all_to_all(x: torch.Tensor, split_dim: int, concat_dim: int):
mp_size = get_tensor_model_parallel_world_size()
if mp_size == 1:
return x
chunks = torch.cat(torch.tensor_split(x, mp_size, dim=split_dim), dim=0)
output_split_sizes = [chunks.shape[0] // mp_size] * mp_size
input_split_sizes = output_split_sizes
group_name = get_tensor_model_parallel_group_name()
output = torch.ops._c10d_functional_autograd.all_to_all_single(
chunks,
output_split_sizes,
input_split_sizes,
group_name,
)
output = torch.ops._c10d_functional.wait_tensor(output)
output = torch.cat(torch.tensor_split(output, mp_size, dim=0), dim=concat_dim)
return output
class AllReduceInBackward(torch.autograd.Function):
@staticmethod
def forward(ctx, x, group_name):
ctx.group_name = group_name
return x
@staticmethod
def backward(ctx, grad):
grad = torch.ops._c10d_functional.all_reduce(grad, "sum", ctx.group_name)
grad = torch.ops._c10d_functional.wait_tensor(grad)
return grad, None, None
def all_reduce_in_backward(x):
group_name = get_tensor_model_parallel_group_name()
return AllReduceInBackward.apply(x, group_name)
class ExpertsChoiceMOE(torch.nn.Module):
def __init__(
self,
dim: int,
hidden_dim: int,
ffn_dim_multiplier: float,
multiple_of: int,
non_linearity: str,
init_depth: Optional[int],
layer_id: int,
# MOE specific parameters
number_of_experts: int = 8, # more experts = more sparse params = better quality (at expense of memory)
capacity_factor: float = 1.0, # capacity factor determines how many tokens each expert can choose
        auto_scale_F: bool = True, # if true, rescales hidden_dim such that number of activated params is same as equivalent dense layer
        use_shared_expert: bool = True, # if true, creates a deterministic shared expert to be activated alongside the routed experts
pregate_moe: bool = False, # if true, applies the gate multiplier to the input into routed experts instead of output. this can save memory on backward pass
shared_expert_gate: bool = True, # if true, learns a parameter that determines how to mix shared and routed outputs, but requires more memory in the backward pass
eval_with_saved_stats: bool = False, # if true, uses saved gate stats as thresholds for eval
eval_threshold_std_mult: float = 0.0, # adjust threshold by multiple of standard deviation (for elastic compute)
):
super().__init__()
self.layer_id = layer_id
self.capacity_factor = capacity_factor
self.use_shared_expert = use_shared_expert
self.pregate_moe = pregate_moe
self.shared_expert_gate = shared_expert_gate
self.eval_with_saved_stats = eval_with_saved_stats
self.eval_threshold_std_mult = eval_threshold_std_mult
self.E = number_of_experts
self.swiglu = non_linearity == "swiglu"
# swiglu hidden dim factor multiplier (same #params as relu / gelu)
if self.swiglu:
hidden_dim = int(2 * hidden_dim / 3)
# custom dim factor multiplier
hidden_dim = int(ffn_dim_multiplier * hidden_dim)
if auto_scale_F:
hidden_dim = int(hidden_dim / (capacity_factor + int(use_shared_expert)))
# round hidden dimension to `multiple_of`
hidden_dim += -hidden_dim % multiple_of
mp_size = get_tensor_model_parallel_world_size()
assert (
number_of_experts % mp_size == 0
), f"number_of_experts ({number_of_experts}) must be divisible by mp_size ({mp_size})"
e = number_of_experts // mp_size
init_in_fn = lambda x: torch.nn.init.trunc_normal_(x)
init_out_fn = lambda x: torch.nn.init.trunc_normal_(x)
# moe layers
self.moe_w_in_eDF = nn.Parameter(torch.empty(e, dim, hidden_dim))
init_with_mp_rank_seed(self.moe_w_in_eDF, init_in_fn)
self.moe_w_out_eFD = nn.Parameter(torch.empty(e, hidden_dim, dim))
init_with_mp_rank_seed(self.moe_w_out_eFD, init_out_fn)
self.router_DE = nn.Parameter(torch.empty(dim, number_of_experts))
nn.init.normal_(self.router_DE, mean=0.0, std=0.8 * dim**-0.5)
if use_shared_expert:
self.w_in_shared_DF = nn.Parameter(torch.empty(dim, hidden_dim))
init_in_fn(self.w_in_shared_DF)
self.w_out_shared_FD = nn.Parameter(torch.empty(hidden_dim, dim))
init_out_fn(self.w_out_shared_FD)
if self.shared_expert_gate:
self.shared_gate_1 = nn.Parameter(torch.full((1,), fill_value=0.0))
if self.swiglu:
self.moe_w_swiglu_eDF = nn.Parameter(torch.empty(e, dim, hidden_dim))
init_with_mp_rank_seed(self.moe_w_swiglu_eDF, init_in_fn)
if use_shared_expert:
self.w_swiglu_DF = nn.Parameter(torch.empty(dim, hidden_dim))
init_in_fn(self.w_swiglu_DF)
# non-linearity
self.non_linearity = {
"relu": F.relu,
"gelu": F.gelu,
"swiglu": None,
"srelu": lambda x: F.relu(x) ** 2,
# "silu": F.silu,
# "mish": F.mish,
# "swish": swish,
}[non_linearity]
# sum, squared sum, count
self.register_buffer("running_gate_stats_3E", torch.zeros(3, number_of_experts))
self.running_gate_ema = 0.99
# have to do it here before wrapping so we can actually access params
self.repr_str = ""
for n, p in chain(self.named_parameters(), self.named_buffers()):
self.repr_str += f"{n}: {p.shape}\n"
def _add_grad_reduce_hooks(self):
if get_tensor_model_parallel_world_size() <= 1:
return
self.router_DE.register_hook(reduce_gradient_across_mp_group)
if self.use_shared_expert:
self.w_in_shared_DF.register_hook(reduce_gradient_across_mp_group)
self.w_out_shared_FD.register_hook(reduce_gradient_across_mp_group)
if self.shared_expert_gate:
self.shared_gate_1.register_hook(reduce_gradient_across_mp_group)
if self.swiglu:
self.w_swiglu_DF.register_hook(reduce_gradient_across_mp_group)
def forward(self, x_aD):
a, D = x_aD.shape
E = self.E
# FIXME(fmassa): I replaced those backward hooks with all_reduce_in_backward. Could also have used DTensor
# self._add_grad_reduce_hooks()
# get router scores
router_DE = all_reduce_in_backward(self.router_DE)
router_scores_Ea = torch.einsum("aD,DE->Ea", x_aD, router_DE)
router_scores_Ea = torch.sigmoid(router_scores_Ea)
tokens_per_expert = int(a * self.capacity_factor / E)
tokens_per_expert += -tokens_per_expert % 8 # round to multiple of 8
stats_handle = None
if self.training or not self.eval_with_saved_stats:
router_scores_Eg, router_indices_Eg = torch.topk(
router_scores_Ea, tokens_per_expert, dim=1
)
# update running stats
self.running_gate_stats_3E.mul_(self.running_gate_ema)
# FIXME(fmassa): I had to add the .detach() here otherwise the bwd would complain with aot_eager
#min_thresh_E = router_scores_Eg.min(dim=1, keepdim=False).values.detach()
min_thresh_E = router_scores_Eg.min(dim=1, keepdim=False).values
self.running_gate_stats_3E[0] += min_thresh_E
self.running_gate_stats_3E[1] += min_thresh_E**2
self.running_gate_stats_3E[2] += 1
if torch.distributed.is_initialized():
stats_handle = torch.distributed.all_reduce(
self.running_gate_stats_3E,
op=torch.distributed.ReduceOp.AVG,
# async_op=True,
async_op=False,
)
else:
count = self.running_gate_stats_3E[2]
assert count > 0
mean_E = self.running_gate_stats_3E[0] / count
std_E = torch.sqrt(self.running_gate_stats_3E[1] / count - mean_E**2)
threshold_E = mean_E + std_E * self.eval_threshold_std_mult
router_scores_Eg = torch.where(
router_scores_Ea >= threshold_E.unsqueeze(1),
router_scores_Ea,
torch.zeros_like(router_scores_Ea),
)
router_indices_Eg = (
torch.arange(a, device=x_aD.device).view(1, -1).expand(E, -1)
)
routed_in_Eg_D = x_aD.gather(
dim=0, index=router_indices_Eg.view(-1, 1).expand(-1, D)
)
if self.pregate_moe:
routed_in_Eg_D = routed_in_Eg_D * router_scores_Eg.view(-1, 1)
routed_in_eGD = all_to_all(
routed_in_Eg_D.view(E, tokens_per_expert, D), split_dim=0, concat_dim=1
)
routed_hidden_eGF = torch.einsum(
"eGD,eDF->eGF", routed_in_eGD, self.moe_w_in_eDF
)
if self.swiglu:
swiglu_hidden_eGF = torch.einsum(
"eGD,eDF->eGF", routed_in_eGD, self.moe_w_swiglu_eDF
)
routed_middle_eGF = F.silu(routed_hidden_eGF) * swiglu_hidden_eGF
else:
routed_middle_eGF = self.non_linearity(routed_hidden_eGF)
routed_out_eGD = torch.einsum(
"eGF,eFD->eGD", routed_middle_eGF, self.moe_w_out_eFD
)
routed_out_EgD = all_to_all(routed_out_eGD, split_dim=1, concat_dim=0)
if not self.pregate_moe:
routed_out_EgD = routed_out_EgD * router_scores_Eg.view(
E, tokens_per_expert, 1
)
routed_out_aD = torch.zeros_like(x_aD)
router_indices_Eg_D = router_indices_Eg.view(-1, 1).expand(-1, D)
routed_out_aD.scatter_add_(
dim=0, index=router_indices_Eg_D, src=routed_out_EgD.view(-1, D)
)
out_aD = routed_out_aD
# try to overlap this computation with comms from above
if self.use_shared_expert:
w_in_shared_DF = all_reduce_in_backward(self.w_in_shared_DF)
shared_middle_aF = x_aD @ w_in_shared_DF
if self.swiglu:
w_swiglu_DF = all_reduce_in_backward(self.w_swiglu_DF)
swiglu_hidden_aF = x_aD @ w_swiglu_DF
shared_middle_aF = F.silu(shared_middle_aF) * swiglu_hidden_aF
else:
shared_middle_aF = self.non_linearity(shared_middle_aF)
w_out_shared_FD = all_reduce_in_backward(self.w_out_shared_FD)
shared_out_aD = shared_middle_aF @ w_out_shared_FD
if self.shared_expert_gate:
shared_gate_1 = all_reduce_in_backward(self.shared_gate_1)
shared_gate_1 = torch.sigmoid(shared_gate_1)
out_aD = shared_out_aD * shared_gate_1 + routed_out_aD * (
1 - shared_gate_1
)
else:
out_aD = routed_out_aD + shared_out_aD
# FIXME(fmassa): I made the all_reduce above be sync, so this is not needed anymore
# if stats_handle:
# stats_handle.wait()
return out_aD
def extra_repr(self) -> str:
return self.repr_str
dim = 512
ffn_exp = 4.0
ffn_dim_multiplier = 1.0
multiple_of = 256
model = ExpertsChoiceMOE(
dim=dim, hidden_dim=int(ffn_exp * dim),
ffn_dim_multiplier=ffn_dim_multiplier,
multiple_of=multiple_of, non_linearity="swiglu",
init_depth=None, layer_id=0)
dtype = torch.bfloat16
device = torch.device("cuda")
model.to(device, dtype)
#print(model)
x = torch.rand(4096, 512, dtype=dtype, device=device)
#out = model(x)
#out.sum().backward()
print("pass 0")
#model = torch.compile(model, backend="aot_eager")
model = torch.compile(model)
out = model(x)
out.sum().backward()
out = model(x)
out.sum().backward()
print("pass")
```
Note that the error only happens when invoking the fw/bw twice. It fails with:
```
[rank0]: File "/home/hirsheybar/local/b/pytorch/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: File "/home/hirsheybar/local/b/pytorch/torch/autograd/function.py", line 307, in apply
[rank0]: return user_fn(self, *args)
[rank0]: File "/home/hirsheybar/local/b/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1692, in backward
[rank0]: all_args = CompiledFunction._backward_prologue(ctx, *flat_args)
[rank0]: File "/home/hirsheybar/local/b/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1838, in _backward_prologue
[rank0]: ctx_saved_tensors = ctx.saved_tensors
[rank0]: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
In the repro code, we have a buffer (which does not require grad) that we mutate with a tensor that requires grad. Tweaking the repro to use:
```
min_thresh_E = router_scores_Eg.min(dim=1, keepdim=False).values.detach()
```
fixes the error under compile, even though this change wasn't necessary in eager mode.
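The failing pattern can be distilled to a much smaller sketch (hypothetical module, not the original code): a buffer that does not require grad is mutated in `forward` with a value derived from grad-requiring tensors, the module is compiled, and fw/bw is run twice. With the `.detach()` workaround applied, the loop below runs cleanly; dropping the `.detach()` is what reproduces the "Trying to backward through the graph a second time" failure under compile.

```python
import torch

class BufferMutation(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(4, 4))
        # Running-stats buffer: registered buffers do not require grad.
        self.register_buffer("stats", torch.zeros(4))

    def forward(self, x):
        h = x @ self.w
        # Buffer mutation with a value computed from grad-requiring tensors.
        # The .detach() is the workaround from the issue; removing it triggers
        # the double-backward error under torch.compile but not in eager.
        self.stats += h.min(dim=0).values.detach()
        return h.sum()

model = torch.compile(BufferMutation(), backend="eager")
for _ in range(2):  # the failure only shows up on the second fw/bw
    loss = model(torch.randn(2, 4))
    loss.backward()
print("pass")
```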
cc @ezyang @chauhang @penguinwu @zou3519 @yf225 | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,658,959,541 | godot | Inconsistent sorting order with numbered and unnumbered files | ### Tested versions
4.4 dev4
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description

Why is icon.svg listed last?
Windows Explorer for comparison

GIMP

### Steps to reproduce
1. Duplicate `icon.svg` into `icon2.svg` and `icon3.svg` etc.
2. 👀
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability | low | Minor |
2,658,964,940 | langchain | Issue with Ollama Function – Only Agent Response, No Tool Calls | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
from langchain_core.messages import AIMessage
from langgraph.graph import END, StateGraph
from typing import TypedDict
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.prompts import PromptTemplate
from termcolor import colored
import json
# Import tools from tools.py
from tools import get_current_weather, get_system_time
# using OllamaFunctions from experimental because it supports function binding with llms
model = OllamaFunctions(
base_url="http://localhost:11436",
model="llama3.1",
format="json"
)
model_with_tools = model.bind_tools(
tools=[get_current_weather, get_system_time],
)
tool_mapping = {
'get_current_weather': get_current_weather,
'get_system_time': get_system_time,
}
# Define Agent Prompt template for llama3
agent_request_generator_prompt = PromptTemplate(
template=
"""
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
You are a Smart Agent.
You are a master at understanding what a customer wants.
You evaluate every request and utilize available tools if you have to.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Conduct a comprehensive analysis of the request provided\
USER REQUEST:\n\n {initial_request} \n\n
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
""",
input_variables=["initial_request"],
)
agent_request_generator = agent_request_generator_prompt | model_with_tools
# result = agent_request_generator.invoke({"initial_request": "What is the weather in woodbury in MN?"})
# print(result)
# input("...")
# Pydantic Schema for structured response
class Evaluation(BaseModel):
result: bool = Field(description="True or False", required=True)
# Prompt template llama3
category_generator_prompt = PromptTemplate(
template=
"""
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
You are a Smart Router Agent. You are a master at reviewing whether the original question that customer asked was answered in the tool response.
You understand the context and question below and return your answer in JSON.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
CONTEXT: Conduct a comprehensive analysis of the Initial Request from user and Tool Response and route the request into boolean true or false:
True - used when INITIAL REQUEST appears to be answered by TOOL RESPONSE. \
False - used when INITIAL REQUEST is not answered by TOOL RESPONSE or when TOOL RESPONSE is empty \
Output either True or False \
eg:
'True' \n\n
INITIAL REQUEST:\n\n {research_question} \n\n
TOOL RESPONSE:\n\n {tool_response} \n\n
JSON:
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
""",
input_variables=["research_question", "tool_response"],
)
structured_llm = model.with_structured_output(Evaluation)
category_generator = category_generator_prompt | structured_llm
# result = category_generator.invoke({"research_question": "What is the weather in woodbury in MN?", "tool_response":"65C, Sunny"})
# print(result)
# input("...")
class AgentState(TypedDict):
research_question: str
tool_response: str
agent_response: AIMessage
agent_call_count: int = 0
tool_call_count: int = 0
def agent(state: AgentState):
print(colored("STATE at agent start:", "magenta"), colored(state, "cyan"))
input("Paused ... Hit Enter to Execute Agent Logic...")
last_ai_message = agent_request_generator.invoke({"initial_request": state["research_question"]})
state["agent_call_count"] += 1
#append the response to the agent_response list in the state
if last_ai_message is not None:
state["agent_response"] = last_ai_message
if last_ai_message.content is not None and last_ai_message.content != "" :
state["tool_response"]=last_ai_message.content
print(colored("STATE at agent end:", "magenta"), colored(state, "cyan"))
input("Paused Hit Enter to go to Should Continue Logic...")
return state
def should_continue(state: AgentState):
print(colored("STATE at should_continue start:", "magenta"), colored(state, "cyan"))
input("Paused at Should Continue Start")
print(colored("Evaluating whether the Question is Answered by the tool response or not... Please wait...", "red"))
result = category_generator.invoke({"research_question": state["research_question"],
"tool_response":state["tool_response"]
})
if isinstance(result, Evaluation):
# Access the 'result' attribute from Evaluation
print(colored("Is tool response good and should the flow go to END node? ", "cyan"), colored(result.result, "yellow"))
input("Paused at Should Continue Mid")
if result.result: # If result is True
print(colored("Return end", "red"))
return "end"
else: # If result is False
print(colored("Return continue", "green"))
return "continue"
else:
print("Result is not an Evaluation instance, returning 'end' as default.")
return "end"
def call_tool(state: AgentState):
print(colored("STATE at call_tool start:", "magenta"), colored(state, "cyan"))
input("Paused at call_tool Start")
agent_response = state["agent_response"]
if hasattr(agent_response, 'tool_calls') and len(agent_response.tool_calls) > 0:
tool_call = agent_response.tool_calls[0]
tool = tool_mapping[tool_call["name"].lower()]
try:
tool_output = tool.invoke(tool_call["args"])
state["tool_call_count"] += 1
print(colored("Tool output:", "magenta"), colored(tool_output, "green"))
if tool_output is not None:
state["tool_response"] = tool_output
except Exception as e:
print(f"Error invoking tool: {e}")
# Handle the error or log it as needed
else:
print("No tool calls found in agent response.")
print(colored("STATE at call_tool end:", "magenta"), colored(state, "cyan"))
input("Paused at call_tool End")
return state
workflow = StateGraph(AgentState)
workflow.add_node("agent", agent)
workflow.add_node("action", call_tool)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
"agent",
should_continue,
{
"continue": "action",
"end": END,
},
)
workflow.add_edge("action", "agent")
app = workflow.compile()
#helper method to visualize graph
def save_graph_to_file(runnable_graph, output_file_path):
png_bytes = runnable_graph.get_graph().draw_mermaid_png()
with open(output_file_path, 'wb') as file:
file.write(png_bytes)
save_graph_to_file(app, "output-05.png")
# research_question = "What's the current system time?"
research_question = "Tell me a joke?"
# research_question = "How is the weather in Woodbury MN today?"
# research_question = "What is the cause of earthquakes?"
state : AgentState = {"research_question": research_question,
"tool_response": [] ,
"agent_response": [],
"agent_call_count": 0,
"tool_call_count": 0
}
result = app.invoke(state)
print("\n")
print(colored("FINAL STATE at end:", "magenta"), colored(result, "cyan"))
print(colored("FINAL RESPONSE at end:", "magenta"), colored(result["tool_response"], "cyan"))
```
### Error Message and Stack Trace (if applicable)
```shell
STATE at agent start: {'research_question': 'Tell me a joke?', 'tool_response': [], 'agent_response': [], 'agent_call_count': 0, 'tool_call_count': 0}
Paused ... Hit Enter to Execute Agent Logic...
STATE at agent end: {'research_question': 'Tell me a joke?', 'tool_response': "Here's one: Why couldn't the bicycle stand up by itself? Because it was two-tired!", 'agent_response': AIMessage(content="Here's one: Why couldn't the bicycle stand up by itself? Because it was two-tired!", additional_kwargs={}, response_metadata={}, id='run-409504c1-afb2-49f7-8ace-bc08c89fdd33-0'), 'agent_call_count': 1, 'tool_call_count': 0}
Paused Hit Enter to go to Should Continue Logic...
STATE at should_continue start: {'research_question': 'Tell me a joke?', 'tool_response': "Here's one: Why couldn't the bicycle stand up by itself? Because it was two-tired!", 'agent_response': AIMessage(content="Here's one: Why couldn't the bicycle stand up by itself? Because it was two-tired!", additional_kwargs={}, response_metadata={}, id='run-409504c1-afb2-49f7-8ace-bc08c89fdd33-0'), 'agent_call_count': 1, 'tool_call_count': 0}
Paused at Should Continue Start
Evaluating whether the Question is Answered by the tool response or not... Please wait...
Traceback (most recent call last):
File "/home/poc/agentic-workflow/langgraph-learning/tutorials/example_1.py", line 203, in <module>
result = app.invoke(state)
^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1749, in invoke
for chunk in self.stream(
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1477, in stream
for _ in runner.tick(
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 58, in tick
run_with_retry(t, retry_policy)
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 29, in run_with_retry
task.proc.invoke(task.input, config)
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 412, in invoke
input = context.run(step.invoke, input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 184, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/graph/graph.py", line 95, in _route
result = self.path.invoke(value, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 176, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/poc/agentic-workflow/langgraph-learning/tutorials/example_1.py", line 121, in should_continue
result = category_generator.invoke({"research_question": state["research_question"],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/home/miniconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py", line 305, in _generate
functions = [convert_to_ollama_tool(fn) for fn in functions]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py", line 305, in <listcomp>
functions = [convert_to_ollama_tool(fn) for fn in functions]
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py", line 86, in convert_to_ollama_tool
schema = tool.model_construct().model_json_schema()
^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'Evaluation' has no attribute 'model_construct'
```
### Description
I have encountered an issue where the OllamaFunctions model returns only the agent's plain response and never invokes any of the bound tools as expected. I have ensured that all required libraries and dependencies are properly installed, but this behavior persists.
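The traceback points at `convert_to_ollama_tool` calling `model_construct()` / `model_json_schema()`, which are pydantic v2 methods, while `Evaluation` is declared with the `langchain_core.pydantic_v1` shim. One workaround (a sketch, not an official fix) is to declare the structured-output schema on plain pydantic v2 instead:

```python
from pydantic import BaseModel, Field  # plain pydantic v2, not the v1 shim

class Evaluation(BaseModel):
    result: bool = Field(description="True or False")

# v2 models expose the methods the experimental OllamaFunctions wrapper calls:
schema = Evaluation.model_construct().model_json_schema()
print(schema["title"])
```

Passing this `Evaluation` to `model.with_structured_output(...)` should then get past the `model_construct` AttributeError, though whether the model emits tool calls is a separate question.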
### System Info
"pip install langgraph httpx ipython graphviz langchain langchain_openai langchain_experimental langchain-ollama termcolor rich ollama openai docker"
platform (linux)
python version 3.11
langgraph==0.2.46
langchain==0.2.17
langchain-experimental==0.0.65
langchain-core==0.2.43 | 🤖:bug | low | Critical |
2,658,980,440 | kubernetes | Deployment controller: Inconsistency of deletePod pod update handler and oldPodsRunning condition | ### What happened?
We observed that, in a cluster with phase=Failed pods present, updates of a deployment using the Recreate strategy were stalling for ~10 minutes (the ProgressDeadlineSeconds period).
After analyzing the logs and the code, I think this is caused by the fact that the [deletePod handler](https://github.com/kubernetes/kubernetes/blob/74e84a90c725047b1328ff3d589fedb1cb7a120e/pkg/controller/deployment/deployment_controller.go#L385-L391) (the handler attached to the pod informer) enqueues the deployment, in the Recreate case, only if the number of pods equals zero. On the other hand, when the sync loop does run, a different condition is checked: [oldPodsRunning](https://github.com/kubernetes/kubernetes/blob/74e84a90c725047b1328ff3d589fedb1cb7a120e/pkg/controller/deployment/recreate.go#L48-L51) ignores pods in a terminal state such as Failed/Succeeded.
In effect, the deployment may not be enqueued at the moment oldPodsRunning becomes false, e.g. when a Succeeded pod is still present.
### What did you expect to happen?
That the deployment is enqueued as soon as oldPodsRunning becomes false, i.e. when the last running pod is deleted.
### How can we reproduce it (as minimally and precisely as possible)?
* Create a deployment with a pod in Failed state (we used a deployment with an image that OOMs quite often)
* Try to update the deployment a few times
* You will observe that the updates start to be delayed by ~10m at some point.
### Anything else we need to know?
A proposed fix is to change deletePod to exclude pods in a terminal (Failed/Succeeded) state, or better yet, to share the logic with the oldPodsRunning condition (maybe even call it from there).
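A sketch of what the shared check could look like, using simplified stand-in types rather than the real `k8s.io/api` ones (the function and type names here are hypothetical):

```go
package main

import "fmt"

// Simplified stand-ins for the real k8s.io/api pod types.
type PodPhase string

const (
	PodRunning   PodPhase = "Running"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

type Pod struct{ Phase PodPhase }

// anyNonTerminalPods is the condition deletePod could share with
// oldPodsRunning: pods in a terminal phase (Succeeded/Failed) do not count
// as "still running", so a leftover Failed pod no longer blocks enqueueing.
func anyNonTerminalPods(pods []Pod) bool {
	for _, p := range pods {
		if p.Phase != PodSucceeded && p.Phase != PodFailed {
			return true
		}
	}
	return false
}

func main() {
	leftover := []Pod{{Phase: PodFailed}, {Phase: PodSucceeded}}
	// All remaining pods are terminal: the deployment should be enqueued now.
	fmt.Println(!anyNonTerminalPods(leftover))
}
```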
### Kubernetes version
<details>
1.30.5, but the code looks like this also in master branch.
</details>
### Cloud provider
<details>
gke
</details>
| kind/bug,sig/apps,needs-triage | low | Critical |
2,659,045,146 | vscode | Tab With Problems - Improve Accessibility for Colorblind People |
After several years of using VSCode as a colorblind guy, I realised that tabs also change colors when the file has errors 😅
Tabs that have uncommitted changes are signaled with this text color:

Tabs with problems are signaled with this text color:

Those text colors are very similar for colorblind people.
### Proposed solutions
- My favorite solution: Displaying a custom icon "⚠️" on the tab, instead of showing the file type icon (VueJS in our example).
Here is an example of the Problems tab icon when placed in the right sidebar (alongside copilot):

So this icon could be used as the file icon in the tab, when a file has problems:

- With bold text when there is an error.
- With colors that have a higher luminance gap between them. (That you could differentiate easily even if you applied a grayscale filter on your screen.)
- With a red bar instead of a blue bar on the top of the tab (this solution will probably not be enough on its own).

### Short term mitigations
I moved the Problems tab away from the bottom bar to the left sidebar.
The Problems display on the bottom bar was very discreet (grey color for the badge, and away from the eyes at the bottom of the screen):

It is more obvious in the sidebar:

| feature-request,accessibility,workbench-tabs | low | Critical |
2,659,089,462 | deno | rootDirs and SvelteKit | deno 2.0.6 (stable, release, aarch64-apple-darwin)
v8 12.9.202.13-rusty
typescript 5.6.2
Hi,
I am using SvelteKit and I am trying to reproduce [this example](https://svelte.dev/docs/kit/load#Making-fetch-requests):
```ts
import type { PageLoad } from './$types';
export const load: PageLoad = async ({ fetch, params }) => {
const res = await fetch(`/api/items/${params.id}`);
const item = await res.json();
return { item };
};
```
But I have this error:
<img width="1126" alt="Screenshot 2024-11-14 at 9 53 54 AM" src="https://github.com/user-attachments/assets/b1fb5624-bd27-43f2-b204-cb4771829c47">
From what I understand, it may be because the default SvelteKit `tsconfig.json` has the `rootDirs` option set to `["..", "./types"]`, which is not supported by Deno and/or the Deno VS Code extension.
```ts
{
"extends": "./.svelte-kit/tsconfig.json",
"compilerOptions": {
"allowJs": true,
"checkJs": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"skipLibCheck": true,
"sourceMap": true,
"strict": true,
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"rootDirs": ["..", "./types"]
}
// Path aliases are handled by https://svelte.dev/docs/kit/configuration#alias
// except $lib which is handled by https://svelte.dev/docs/kit/configuration#files
//
// If you want to overwrite includes/excludes, make sure to copy over the relevant includes/excludes
// from the referenced tsconfig.json - TypeScript does not merge them in
}
```
Thank you! | feat,tsc | low | Critical |
2,659,094,592 | vscode | Bash shellIntegration: error messages when running with `set -u` |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.1
- OS Version: Debian GNU/Linux trixie (testing), in ChromeOS Crostini container
Steps to Reproduce:
1. Add `set -u` to `~/.bashrc`
2. Launch a new Bash terminal in VS-Code
3. Observe error messages produced by the shellIntegration code, e.g. "bash: VSCODE_INJECTION: unbound variable".
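For reference, the standard way to make such scripts `set -u`-safe is to probe possibly-unset variables with default-value parameter expansions; a small illustration (not the actual shellIntegration code):

```shell
# Guard patterns that survive `set -u` (nounset). Standard POSIX expansions.
unset -v VSCODE_INJECTION   # make the demo deterministic
set -u

# echo "$VSCODE_INJECTION"  # would abort: "VSCODE_INJECTION: unbound variable"

echo "flag='${VSCODE_INJECTION:-}'"    # ':-' expands to empty when unset/empty

if [ -n "${VSCODE_INJECTION-}" ]; then # '-' expands to empty only when unset
  echo "injection flag is set"
else
  echo "injection flag is unset"
fi
```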
| bug,terminal-shell-bash | low | Critical |
2,659,110,117 | go | x/net/icmp: checksums are not checked when parsing a message | ### Go version
go version go1.23.2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
GOCACHE='/root/.cache/go-build'
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/root/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='0'
GOMOD='/workspaces/gluetun/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3015057780=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I'm working with the `icmp` package, and I noticed the [`ParseMessage`](https://cs.opensource.google/go/x/net/+/refs/tags/v0.31.0:icmp/message.go;l=139) function only stores the checksum in the returned message. There is no check using that checksum, nor any function available to run this checksum check.
On the other hand, in the [`Marshal` method](https://cs.opensource.google/go/x/net/+/refs/tags/v0.31.0:icmp/message.go;l=78), the checksum is calculated.
### What did you see happen?
No checksum check is done when decoding an ICMP message.
### What did you expect to see?
A checksum check to be performed when decoding an ICMP message.
Since this check is rather cheap to run on a tiny amount of data (the ICMP header), I think we could just have it as part of the parsing function, and NOT export the checksum function. | NeedsInvestigation | low | Critical |
2,659,113,081 | pytorch | torch.where(condition) throws an error during export | ### 🐛 Describe the bug
Using `torch.export` on a module that calls the single-argument form `torch.where(condition)` throws an error, while the equivalent `torch.nonzero(condition, as_tuple=True)` exports fine. Use the following code snippet to reproduce the error
```python
import torch
class ModuleWithWhere(torch.nn.Module):
def forward(self, x):
return torch.where(x > 0)
class ModuleWithNonzero(torch.nn.Module):
def forward(self, x):
return torch.nonzero(x > 0, as_tuple=True)
x = torch.rand(2, 10)
mod_nonzero = ModuleWithNonzero()
mod_where = ModuleWithWhere()
dynamic_shapes = ({0: torch.export.Dim("batch_size", max=100)},)
with torch.inference_mode():
torch.export.export(mod_nonzero, (x,), strict=False, dynamic_shapes=dynamic_shapes) # passes
torch.export.export(mod_where, (x,), strict=False, dynamic_shapes=dynamic_shapes) # fails
```
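For what it's worth, the single-argument form is documented as equivalent to `torch.nonzero(condition, as_tuple=True)`, which is why the first export in the snippet passes; the equivalence can be checked in eager mode:

```python
import torch

x = torch.rand(2, 10)
cond = x > 0.5

# torch.where with one argument is documented as shorthand for
# torch.nonzero(condition, as_tuple=True), so the nonzero spelling can serve
# as a drop-in replacement inside the module until export handles where().
a = torch.where(cond)
b = torch.nonzero(cond, as_tuple=True)
assert all(torch.equal(ai, bi) for ai, bi in zip(a, b))
```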
Error Logs with `TORCH_LOGS="+dynamo"` and `TORCHDYNAMO_VERBOSE=1`
```bash
V1114 06:59:08.253000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:2498] create_env
I1114 06:59:08.427000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3557] create_symbol s0 = 2 for L['args'][0][0].size()[0] [2, 100] (_export/non_strict_utils.py:109 in fakify), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
V1114 06:59:08.428000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
V1114 06:59:08.429000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval False == False [statically known]
I1114 06:59:08.431000 2197826 site-packages/torch/_dynamo/utils.py:859] ChromiumEventLogger initialized with id 3870d3ae-2cd4-4175-b365-d460e4974935
V1114 06:59:08.469000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval False == False [statically known]
V1114 06:59:08.470000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval True == True [statically known]
V1114 06:59:08.472000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval Ne(s0, 1) == True [statically known]
V1114 06:59:08.473000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
V1114 06:59:08.474000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
I1114 06:59:08.476000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3317] create_unbacked_symint u0 [-int_oo, int_oo] (_subclasses/fake_impls.py:426 in nonzero)
V1114 06:59:08.477000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:4734] _update_var_to_range u0 = VR[0, 9223372036854775806] (update)
I1114 06:59:08.477000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5481] constrain_symbol_range u0 [0, 9223372036854775806]
V1114 06:59:08.479000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u0 >= 0 == True [statically known]
V1114 06:59:08.481000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u0 >= 0 == True [statically known]
V1114 06:59:08.482000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(u0, 0) == False [statically known]
V1114 06:59:08.496000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
I1114 06:59:08.501000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3317] create_unbacked_symint u1 [-int_oo, int_oo] (_subclasses/fake_impls.py:426 in nonzero)
V1114 06:59:08.502000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:4734] _update_var_to_range u1 = VR[0, 9223372036854775806] (update)
I1114 06:59:08.502000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5481] constrain_symbol_range u1 [0, 9223372036854775806]
V1114 06:59:08.503000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u1 >= 0 == True [statically known]
V1114 06:59:08.505000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u1 >= 0 == True [statically known]
V1114 06:59:08.506000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(u1, 0) == False [statically known]
I1114 06:59:08.509000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:604] compute_unbacked_bindings [u1]
V1114 06:59:08.511000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval Ne(u1, 1) == True [statically known]
I1114 06:59:08.524000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3646] produce_guards
V1114 06:59:08.525000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3830] track_symint L['args'][0][0].size()[0] s0 StrictMinMaxConstraint(warn_only=False, vr=VR[0, 100])
V1114 06:59:08.525000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3830] track_symint L['args'][0][0].size()[1] 10 None
V1114 06:59:08.525000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3830] track_symint L['args'][0][0].stride()[0] 10 None
V1114 06:59:08.526000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3830] track_symint L['args'][0][0].stride()[1] 1 None
V1114 06:59:08.526000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3830] track_symint L['args'][0][0].storage_offset() 0 None
V1114 06:59:08.535000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u1 >= 0 == True [statically known]
V1114 06:59:08.540000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:2498] create_env
I1114 06:59:08.541000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3557] create_symbol s0 = 2 for L['args'][0][0].size()[0] [2, 100] (_export/non_strict_utils.py:109 in fakify), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
V1114 06:59:08.542000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
V1114 06:59:08.543000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval False == False [statically known]
V1114 06:59:08.548000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval False == False [statically known]
V1114 06:59:08.549000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval True == True [statically known]
V1114 06:59:08.550000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval Ne(s0, 1) == True [statically known]
V1114 06:59:08.550000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
V1114 06:59:08.551000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
I1114 06:59:08.553000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:3317] create_unbacked_symint u0 [-int_oo, int_oo] (_subclasses/fake_impls.py:426 in nonzero)
V1114 06:59:08.554000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:4734] _update_var_to_range u0 = VR[0, 9223372036854775806] (update)
I1114 06:59:08.554000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5481] constrain_symbol_range u0 [0, 9223372036854775806]
V1114 06:59:08.555000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u0 >= 0 == True [statically known]
V1114 06:59:08.556000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert u0 >= 0 == True [statically known]
V1114 06:59:08.557000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(u0, 0) == False [statically known]
V1114 06:59:08.566000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:5358] runtime_assert True == True [statically known]
Traceback (most recent call last):
File "/home/lliebenwein/dev/auto-deploy/where_test.py", line 21, in <module>
torch.export.export(mod_where, (x,), strict=False, dynamic_shapes=dynamic_shapes) # fails
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/__init__.py", line 270, in export
return _export(
^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 1683, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 637, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 1611, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1246, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1480, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 106, in aot_dispatch_export
graph, _, _ = aot_dispatch_base_graph(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 154, in aot_dispatch_base_graph
fw_module = _create_graph(
^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 54, in _create_graph
fx_g = make_fx(
^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 2110, in wrapped
return make_fx_tracer.trace(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 2048, in trace
return self._trace_inner(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 2034, in _trace_inner
t = dispatch_trace(
^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1127, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1631, in trace
res = super().trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1182, in wrapped
out = f(*tensors)
^^^^^^^^^^^
File "<string>", line 1, in <lambda>
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 413, in _functionalized_f_helper
f_outs = fn(*f_args)
^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 78, in inner_fn
outs = fn(*args)
^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 182, in flat_fn
tree_out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 863, in functional_call
out = mod(*args[params_len:], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 801, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1701, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 519, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 794, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/export/_trace.py", line 1598, in forward
tree_out = self._export_root(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 801, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1701, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 519, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/_symbolic_trace.py", line 794, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/dev/auto-deploy/where_test.py", line 6, in forward
return torch.where(x > 0)
^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1230, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1258, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_export/non_strict_utils.py", line 520, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_ops.py", line 833, in handler
return torch._library.utils.handle_dispatch_mode(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_library/utils.py", line 282, in handle_dispatch_mode
return curr_mode.__torch_dispatch__(op_overload, overload_types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py", line 534, in __torch_dispatch__
outs_unwrapped = func._op_dk(
^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_ops.py", line 833, in handler
return torch._library.utils.handle_dispatch_mode(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_library/utils.py", line 282, in handle_dispatch_mode
return curr_mode.__torch_dispatch__(op_overload, overload_types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 1308, in __torch_dispatch__
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/fx/experimental/proxy_tensor.py", line 906, in proxy_call
out = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1348, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1943, in _dispatch_impl
return decomposition_table[func](*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_prims_common/wrappers.py", line 141, in _fn
result = fn(**bound.arguments)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lliebenwein/miniconda3/envs/auto/lib/python3.12/site-packages/torch/_refs/__init__.py", line 1926, in where
raise NotImplementedError
NotImplementedError
I1114 06:59:08.591000 2197826 site-packages/torch/_dynamo/utils.py:399] TorchDynamo compilation metrics:
I1114 06:59:08.591000 2197826 site-packages/torch/_dynamo/utils.py:399] Function Runtimes (s)
I1114 06:59:08.591000 2197826 site-packages/torch/_dynamo/utils.py:399] ------------------------------ --------------
I1114 06:59:08.591000 2197826 site-packages/torch/_dynamo/utils.py:399] create_aot_dispatcher_function 0.0919
V1114 06:59:08.591000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats constrain_symbol_range: CacheInfo(hits=3, misses=6, maxsize=None, currsize=6)
V1114 06:59:08.591000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats evaluate_expr: CacheInfo(hits=53, misses=12, maxsize=256, currsize=12)
V1114 06:59:08.592000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1114 06:59:08.592000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1114 06:59:08.592000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _find: CacheInfo(hits=37, misses=5, maxsize=None, currsize=5)
V1114 06:59:08.592000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1114 06:59:08.593000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1114 06:59:08.593000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats simplify: CacheInfo(hits=6, misses=13, maxsize=None, currsize=13)
V1114 06:59:08.593000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1114 06:59:08.593000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats replace: CacheInfo(hits=1674, misses=60, maxsize=None, currsize=60)
V1114 06:59:08.593000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=16, misses=19, maxsize=None, currsize=19)
V1114 06:59:08.594000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1114 06:59:08.594000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats get_axioms: CacheInfo(hits=10, misses=9, maxsize=None, currsize=9)
V1114 06:59:08.594000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats safe_expand: CacheInfo(hits=313, misses=35, maxsize=256, currsize=35)
V1114 06:59:08.594000 2197826 site-packages/torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats uninteresting_files: CacheInfo(hits=38, misses=1, maxsize=None, currsize=1)
```
Note that only the combination of `torch.inference_mode` and a `dynamic_shapes` specification triggers the error; leaving out either one makes the export pass. Writing the same op as `torch.nonzero(..., as_tuple=True)` also passes.
This appears to be a known gap: there is a `TODO` in the source code, at the point where the error is raised, to implement single-argument `torch.where(condition)`: https://github.com/pytorch/pytorch/blob/main/torch/_refs/__init__.py#L1926
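For reference, the single-argument `torch.where(condition)` is documented to be equivalent to `torch.nonzero(condition, as_tuple=True)` in eager mode, which is why the rewrite passes without changing results. A minimal sketch (the tensor values here are illustrative):

```python
import torch

x = torch.tensor([[1.0, -2.0], [0.0, 3.0]])

# Single-argument torch.where(cond) returns the indices of True elements,
# one index tensor per dimension -- the same tuple torch.nonzero produces
# with as_tuple=True.
where_out = torch.where(x > 0)
nonzero_out = torch.nonzero(x > 0, as_tuple=True)

assert all(torch.equal(a, b) for a, b in zip(where_out, nonzero_out))
print(where_out[0].tolist(), where_out[1].tolist())  # [0, 1] [0, 1]
```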
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.12.5 | packaged by conda-forge | (main, Aug 8 2024, 18:36:51) [GCC 12.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz
Stepping: 7
CPU MHz: 3402.048
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6999.82
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 19.3 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,659,114,748 | pytorch | After around 13 iterations, tensor img_syn contains many NaN values when trained on CIFAR-10, but it works fine on the MNIST dataset | ### 🐛 Describe the bug
```python
''' update synthetic data '''
if 'BN' not in args.dlmodel:  # for ConvNet
    loss = torch.tensor(0.0).to(args.dldevice)
    for c in range(num_classes):
        img_real = get_images(c, args.dlbatch_real)
        img_syn = image_syn[c*args.dlipc:(c+1)*args.dlipc].reshape((args.dlipc, channel, im_size[0], im_size[1]))
        if args.dldsa:
            seed = int(time.time() * 1000) % 100000
            img_real = DiffAugment(img_real, args.dldsa_strategy, seed=seed, param=args.dldsa_param)
            img_syn = DiffAugment(img_syn, args.dldsa_strategy, seed=seed, param=args.dldsa_param)
        output_real = embed(img_real).detach()
        output_syn = embed(img_syn)
        output_real = output_real.to(args.dldevice)
        output_syn = output_syn.to(args.dldevice)
        loss += torch.sum((torch.mean(output_real, dim=0) - torch.mean(output_syn, dim=0)).to(args.dldevice)**2)
```
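To localize where the NaNs first enter `image_syn`, a common approach is to assert finiteness after each update step. The helper below is a hypothetical debugging sketch (the `image_syn` name and shape are illustrative, not taken from the reporter's full script); once the first offending iteration is found, remedies like a lower learning rate or gradient clipping can be tried:

```python
import torch

def assert_finite(name, t):
    # Raise as soon as a tensor picks up NaN/Inf values, so the first
    # offending iteration can be identified.
    if not torch.isfinite(t).all():
        bad = int((~torch.isfinite(t)).sum())
        raise RuntimeError(f"{name} has {bad} non-finite entries")

image_syn = torch.randn(10, 3, 32, 32)   # finite synthetic batch: passes
assert_finite("image_syn", image_syn)

image_syn[0, 0, 0, 0] = float("nan")     # inject one NaN: now it raises
try:
    assert_finite("image_syn", image_syn)
except RuntimeError as e:
    print(e)  # image_syn has 1 non-finite entries
```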
### Versions
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:46:43) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 D
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9754 128-Core Processor
CPU family: 25
Model: 160
Thread(s) per core: 2
Core(s) per socket: 128
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 3100.3411
CPU min MHz: 1500.0000
BogoMIPS: 4499.90
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.40
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.3.0+cu121
[pip3] torchvision==0.18.0+cu121
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.5.40 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.3.0+cu121 pypi_0 pypi
[conda] torchvision 0.18.0+cu121 pypi_0 pypi | triaged | low | Critical |
2,659,120,502 | godot | Ellipsis sometimes not being rendered for RTL text | ### Tested versions
- Reproducible in 4.4.dev (76fa7b291455a8ba24c50005072ebdb58f8a5984)
### System information
Godot v4.4.dev (342e6b286) - Arch Linux #1 ZEN SMP PREEMPT_DYNAMIC Fri, 08 Nov 2024 17:57:58 +0000 on Wayland - X11 display driver, Multi-window, 2 monitors - Vulkan (Forward+) - dedicated Intel(R) Arc(tm) A770 Graphics (DG2) - Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (16 threads)
### Issue description
### LTR
| | H Align Left | H Align Center | H Align Right |
|-----------------------|--------------|----------------|---------------|
| Overrun No Trimming |  |  |  |
| Overrun Trim Char |  |  |  |
| Overrun Trim Ellipsis |  |  |  |
### RTL
| | H Align Left | H Align Center | H Align Right |
|-----------------------|--------------|----------------|---------------|
| Overrun No Trimming |  |  |  |
| Overrun Trim Char |  |  |  |
| Overrun Trim Ellipsis |  |  |  |
https://github.com/user-attachments/assets/eb82ea90-999c-4967-b2f2-a8e6c61a21a3
### Steps to reproduce
Create a Label:
- Set some text
- Enable `clip_text`
- Set `text_overrun_behavior` to Ellipsis
- Set `text_direction` to Right-to-Left
- Resize the Label
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,659,124,494 | pytorch | No fake impl or Meta kernel for Communication Operator | ### 🐛 Describe the bug
There is no fake implementation or meta kernel for the Communication Operator. If I want to contribute to this feature, what can I do? Are there any examples that I can reference?
```python
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from datautils import MyTrainDataset
import torch.multiprocessing as mp
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed import init_process_group, destroy_process_group
import os
import logging
def ddp_setup():
init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
class Trainer:
def __init__(
self,
model: torch.nn.Module,
train_data: DataLoader,
optimizer: torch.optim.Optimizer,
device:torch.device,
) -> None:
self.device = device
self.gpu_g = int(os.environ["RANK"])
self.gpu_id = int(os.environ["LOCAL_RANK"])
self.model = model.to(self.device)
self.train_data = train_data
self.optimizer = optimizer
self.epochs_run = 0
# self.model = DDP(self.model, device_ids=[self.gpu_id])
self.model = DDP(self.model)
def _run_batch(self, source, targets):
self.optimizer.zero_grad()
output = self.model(source)
loss = F.cross_entropy(output, targets)
loss.backward()
self.optimizer.step()
def _run_epoch(self, epoch):
b_sz = len(next(iter(self.train_data))[0])
print(f"[GPU{self.gpu_g}] Epoch {epoch} | Batchsize: {b_sz} | Steps: {len(self.train_data)}")
self.train_data.sampler.set_epoch(epoch)
for source, targets in self.train_data:
source = source.to(self.device)
targets = targets.to(self.device)
self._run_batch(source, targets)
def train(self, max_epochs: int):
for epoch in range(self.epochs_run, max_epochs):
self._run_epoch(epoch)
def load_train_objs(device:torch.device):
train_set = MyTrainDataset(2048) # load your dataset
model = torch.nn.Linear(20, 1,device=device) # load your model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
return train_set, model, optimizer
def prepare_dataloader(dataset: Dataset, batch_size: int):
return DataLoader(
dataset,
batch_size=batch_size,
pin_memory=False,
shuffle=False,
sampler=DistributedSampler(dataset)
)
def main(device, total_epochs: int, batch_size: int):
ddp_setup()
dataset, model, optimizer = load_train_objs(device)
train_data = prepare_dataloader(dataset, batch_size)
trainer = Trainer(model, train_data, optimizer, device)
trainer.train(total_epochs)
destroy_process_group()
if __name__ == "__main__":
import argparse
torch._logging.set_logs(all=logging.DEBUG)
parser = argparse.ArgumentParser(description='simple distributed training job')
parser.add_argument('total_epochs', type=int, help='Total epochs to train the model')
parser.add_argument('--batch_size', default=32, type=int, help='Input batch size on each device (default: 32)')
args = parser.parse_args()
device = torch.device('meta')
main(device, args.total_epochs, args.batch_size)
#torchrun --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr=192.167.1.1 --master_port=10002 ddp_meta.py 10
```
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/pc06/wyl/Torch-sim/ddp_tutorial/ddp_meta.py", line 89, in <module>
[rank1]: main(device, args.total_epochs, args.batch_size)
[rank1]: File "/home/pc06/wyl/Torch-sim/ddp_tutorial/ddp_meta.py", line 77, in main
[rank1]: trainer = Trainer(model, train_data, optimizer, device)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/pc06/wyl/Torch-sim/ddp_tutorial/ddp_meta.py", line 33, in __init__
[rank1]: self.model = DDP(self.model)
[rank1]: ^^^^^^^^^^^^^^^
[rank1]: File "/home/pc06/anaconda3/envs/torch25/lib/python3.12/site-packages/torch/nn/parallel/distributed.py", line 825, in __init__
[rank1]: _verify_param_shape_across_processes(self.process_group, parameters)
[rank1]: File "/home/pc06/anaconda3/envs/torch25/lib/python3.12/site-packages/torch/distributed/utils.py", line 288, in _verify_param_shape_across_processes
[rank1]: return dist._verify_params_across_processes(process_group, tensors, logger)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: NotImplementedError: c10d::allgather_: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
E1114 23:05:38.298000 195576 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 195676) of binary: /home/pc06/anaconda3/envs/torch25/bin/python
```
### Versions
PyTorch version: 2.5.0a0
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.78
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 4300.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] bert_pytorch==0.0.1a4
[pip3] bert_pytorch==0.0.1a4
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.5.0a0+gitunknown
[conda] blas 1.0 mkl defaults
[conda] mkl 2023.1.0 h213fc3f_46344 defaults
[conda] mkl-include 2025.0.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py312h5eee18b_1 defaults
[conda] mkl-static 2025.0.0 pypi_0 pypi
[conda] mkl_fft 1.3.10 py312h5eee18b_0 defaults
[conda] mkl_random 1.2.7 py312h526ad5a_0 defaults
[conda] numpy 2.1.2 pypi_0 pypi
[conda] numpy-base 1.26.4 py312h0da6c21_0 defaults
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.5.0a0+gitunknown pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,659,157,274 | PowerToys | Key remapping issues | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Keyboard Manager conflicts with many applications when remapping keys, and after the computer comes back from sleep mode the remappings sometimes stop working until PowerToys is opened again. Anyway, it needs to be reviewed. I've gone back to using SharpKeys; everything works perfectly fine with it.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,659,234,692 | svelte | HTML comments break CSS :empty pseudo selector | ### Describe the bug
When using conditionals in Svelte the result leaves empty comments inside the node. Since comments are considered [childNodes](https://developer.mozilla.org/en-US/docs/Web/API/Node/childNodes) this breaks the CSS selector `:empty`.
We could use `:not(:has(*))` as a [workaround](https://jsfiddle.net/jwerre/pcqt215b/10/) but the css output looks like this:
```css
ul.svelte-vewu36:not(:has(/* (unused) **/)) { ... }
```
That said, I'd prefer not to use `:not(:has(*))` since it's very inefficient.
I've also tried setting `preserveComments=false` in my compiler options but it doesn't seem to do anything (in development anyway). Perhaps this has re-emerged as an issue: https://github.com/sveltejs/svelte/issues/4730 ?
### Reproduction
https://svelte.dev/playground/d1989f455149482ca0d9202e915b743f?version=5.1.16
https://jsfiddle.net/jwerre/pcqt215b/10/
### System Info
```shell
System:
OS: macOS 15.0.1
CPU: (10) arm64 Apple M1 Max
Memory: 1.21 GB / 64.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.18.0 - /opt/homebrew/bin/node
Yarn: 1.22.22 - ~/.npm-global/bin/yarn
npm: 10.8.2 - /opt/homebrew/bin/npm
Browsers:
Chrome: 130.0.6723.117
Edge: 117.0.2045.40
Safari: 18.0.1
```
### Severity
annoyance | css,needs discussion | low | Critical |
2,659,270,978 | godot | `IsInstanceValid` called before the end of the frame returns true on an object that called `QueueFree` | ### Tested versions
tested on v4.3.stable.mono.official
### System information
Windows 10 - Godot v4.3.stable.mono.official
### Issue description
If we call `node.QueueFree()`, `IsInstanceValid(node)` should return `false` even before the end of the frame.
I can't think of a reason why anyone would need `IsInstanceValid(node)` to return true after a `node.QueueFree()` call.
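To make the semantics under discussion concrete, here is a Python-flavored sketch (an assumption about engine internals, for illustration only — `Node`, `queue_free`, and `is_instance_valid` are simplified stand-ins, not engine code):

```python
# Sketch of the reported behavior: queue_free only flags the node for
# deletion at end of frame; is_instance_valid ignores that flag.
class Node:
    def __init__(self):
        self.queued_for_deletion = False
        self.freed = False

    def queue_free(self):
        self.queued_for_deletion = True  # actual free happens at end of frame

def is_instance_valid(node):
    return not node.freed  # does not consult queued_for_deletion

n = Node()
n.queue_free()
print(is_instance_valid(n))  # True — the behavior this issue reports
```

The proposal amounts to making `is_instance_valid` also check the deletion-queued flag.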
### Steps to reproduce
1. Create a node,
2. Print out `IsInstanceValid`
3. QueueFree the node
4. Print out `IsInstanceValid`
It prints out
```
true
true
```
and I think it must print out
```
true
false
```
### Minimal reproduction project (MRP)
```
ColorRect n = new ColorRect();
AddChild(n);
GD.Print(IsInstanceValid(n));
n.QueueFree();
GD.Print(IsInstanceValid(n));
``` | discussion,topic:core,breaks compat | low | Minor |
2,659,310,242 | PowerToys | Unable to restore settings | ### Microsoft PowerToys version
0.86
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
After reinstalling Windows I copied the backup file back to Documents\PowerToys\Backup, but PowerToys does not see it and says there is nothing to restore.
### ✔️ Expected Behavior
Restore all my hard work :(
### ❌ Actual Behavior
Says nothing to restore.
### Other Software
nope | Issue-Bug,Needs-Triage | low | Minor |
2,659,331,617 | pytorch | 'lobpcg' gives an output on CPU but fails on GPU ('mps') | ### 🐛 Describe the bug
`torch.lobpcg()` produces an output with tensors when run on the CPU; however, it raises a `NotImplementedError` when run on the 'mps' GPU.
CPU version code:
```python
import torch
A = torch.tensor([[0.0100, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0100, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0100, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0100, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0100]])
for i in range(1000):
print(i)
(eigvals, eigvecs) = torch.lobpcg(A)
print(eigvals), print(eigvecs)
# prints 0 - 999 , tensor([0.0100])
# tensor([[ 0.5611],
# [-0.7640],
# [-0.3041],
# [-0.0861],
```
GPU version code:
```python
import torch
A = torch.tensor([[0.0100, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0100, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0100, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0100, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0100]]).to('mps')
for i in range(1000):
print(i)
(eigvals, eigvecs) = torch.lobpcg(A)
print(eigvals), print(eigvecs)
```
output:
```
NotImplementedError: The operator 'aten::linalg_cholesky_ex.L' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```
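Until an MPS kernel exists, a temporary wrapper with a CPU fallback might look like the sketch below (hedged: this assumes moving the matrix to CPU and the results back to 'mps' is acceptable for the workload; the callables are injected so the pattern is clear without importing torch here — with torch you would pass `torch.lobpcg`, `lambda t: t.cpu()`, and `lambda t: t.to('mps')`):

```python
# Generic try/except CPU-fallback pattern for an op that is not yet
# implemented on the current device.
def lobpcg_with_cpu_fallback(A, lobpcg, to_cpu, to_device):
    try:
        return lobpcg(A)
    except NotImplementedError:
        # Fall back: compute on CPU, then move results to the target device.
        eigvals, eigvecs = lobpcg(to_cpu(A))
        return to_device(eigvals), to_device(eigvecs)
```

Alternatively, `PYTORCH_ENABLE_MPS_FALLBACK=1` (mentioned in the error message) achieves a similar effect globally.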
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Aug 9 2024, 14:24:13) [Clang 16.0.0 (clang-1600.0.26.3)] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] numpy 2.0.2 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch 2.6.0.dev20241109 py3.9_0 pytorch-nightly
[conda] torchaudio 2.5.0.dev20241109 py39_cpu pytorch-nightly
[conda] torchvision 0.20.0.dev20241109 py39_cpu pytorch-nightly
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,enhancement,module: linear algebra,module: mps | low | Critical |
2,659,358,396 | langchain | Upgraded LangChain to v0.3.3; prompts folder missing. Need path update for PromptTemplate. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```javascript
const { PromptTemplate } = require('langchain/prompts');
```
### Error Message and Stack Trace (if applicable)
```
"Package subpath './prompts' is not defined by \"exports\" in /opt/nodejs/node_modules/langchain/package.json"
```
### Description
We're using LangChain version 0.0.209 in our AWS-deployed chatbot, but it's now showing a vulnerability issue. The recommended fix is to upgrade to version 0.3.3, which we did using npm install langchain@0.3.3. However, we noticed that the prompts folder is missing in LangChain version 0.3.3.
Here’s our code:
const { PromptTemplate } = require('langchain/prompts');
Could someone please help with updating the path to access PromptTemplate in version 0.3.3?
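For context, the error comes from Node's `"exports"` field resolution: a subpath can only be `require`d if the package's `package.json` lists it. A simplified stand-in model (an illustration only — not the actual resolution algorithm, and not the real langchain export map):

```javascript
// Simplified model: only subpaths listed in the "exports" map resolve.
const exportsMap = {
  ".": "./index.js",
  "./tools": "./dist/tools.js",
};

function resolveSubpath(subpath) {
  if (!(subpath in exportsMap)) {
    throw new Error(
      `Package subpath '${subpath}' is not defined by "exports" in package.json`
    );
  }
  return exportsMap[subpath];
}

console.log(resolveSubpath("."));   // ./index.js
try {
  resolveSubpath("./prompts");      // mirrors the reported error
} catch (e) {
  console.log(e.message);
}
```

So the fix is to import from whichever exported subpath the prompt classes moved to in 0.3.x (e.g. `@langchain/core/prompts`, if that is where they now live — worth confirming against the 0.3 docs).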
### System Info
Not applicable | Ɑ: core | low | Critical |
2,659,361,400 | TypeScript | Computing buildInfoTime even when !isIncremental | > @sheetalkamat @johnnyreilly I'm out of my depth, but I think there's maybe a bug introduced here.
>
> I'm getting `TypeError: Cannot read properties of undefined (reading 'includes')` in **fork-ts-checker-webpack-plugin** when my project is NOT `incremental: true`.
>
> Prior to this revision `buildInfoTime` was not computed if `buildInfoPath` was `undefined`, but after this change, even though `buildInfoPath` is undefined, it is still attempting to compute `ts_getModifiedTime(host, buildInfoPath)`.
>
> Eventually, this calls `isArtifact(undefined)` in [fork-ts-checker-webpack-plugin\lib\typescript\worker\lib\system.js](https://github.com/TypeStrong/fork-ts-checker-webpack-plugin/blob/0fab463b21c6edc4d94834568a3f440241d57887/src/typescript/worker/lib/system.ts#L284C1-L290C2):
>
> ```js
> function isArtifact(path) {
>     return ((artifacts.dirs.some((dir) => path.includes(dir)) ||
>         artifacts.files.some((file) => path === file)) &&
>         artifacts.extensions.some((extension) => path.endsWith(extension)));
> }
> ```
>
> Perhaps this needs to be handled in fork-ts-checker-webpack-plugin? But it seems to me that perhaps `buildInfoTime` could be skipped when `buildInfoPath` is unavailable.
_Originally posted by @JasonKleban in [dca9182](https://github.com/microsoft/TypeScript/commit/dca9182ca8f059ecd3840b386f6c5c70a0c2b54a#r149088955)_ | Help Wanted,Possible Improvement | low | Critical |
2,659,383,634 | langchain | BaseTool.run does not pass kwargs to the actual function call | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.tools import StructuredTool
def print_input_and_kwargs(input: str, **kwargs) -> None:
print(f"Input: {input}")
print(f"Kwargs: {kwargs}")
tool = StructuredTool.from_function(
func=print_input_and_kwargs,
name="print_input_and_kwargs",
description="Print the input and kwargs.",
)
tool.run(
"Hello, world!", kwarg1="First keyword argument.", kwarg2="Second keyword argument."
)
```
Actual
```python
# Input: Hello, world!
# Kwargs: {}
```
Expected
```python
# Input: Hello, world!
# Kwargs: {"kwarg1": "First keyword argument.", "kwarg2": "Second keyword argument."}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using langchain to create custom tools.
The tools are using `kwargs` to pass some data.
`BaseTool.run` accepts `**kwargs` as an argument and the docs clearly state:
`kwargs: Additional arguments to pass to the tool`
I expect that any kwargs I pass to `tool.run` will be available for use inside the function, but they are NOT.
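To illustrate the observed behavior, here is a simplified stand-in model (an assumption, not the actual LangChain source) of how `run` appears to treat `**kwargs` as run configuration rather than tool arguments:

```python
# Simplified model: only tool_input is parsed into tool arguments;
# **kwargs are consumed as run config (callbacks, tags, ...) and never
# forwarded to the wrapped function.
def run(tool_input, **kwargs):
    tool_args = {"input": tool_input}  # forwarded to the wrapped function
    run_config = kwargs                # stays at the run layer
    return tool_args, run_config

tool_args, run_config = run("Hello, world!", kwarg1="First keyword argument.")
print(tool_args)  # {'input': 'Hello, world!'}
```

A workaround consistent with this model is to declare the extra parameters in the function signature / args schema and pass them as part of the tool input dict, rather than relying on `**kwargs`.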
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.19
> langchain_google_community: 1.0.6
> langchain_google_genai: 1.0.7
> langchain_groq: 0.1.6
> langchain_mistralai: 0.1.10
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| 🤖:bug,investigate,Ɑ: core | low | Critical |
2,659,450,362 | tauri | [feat] Add Set Cookie API to Webview | ### Describe the problem
I need an API for setting cookies, similar to Electron's cookies API or the Chrome extension cookies API.
### Describe the solution you'd like
```rust
let webview = tauri::webview::WebviewBuilder::new(
"tauri_main",
WebviewUrl::External("mywebsite.com".parse().unwrap())
);
let res = data;
webview.set_cookie({
name: res.name,
value: res.value,
domain: res.domain,
path: res.path,
expirationDate: res.expirationDate,
httpOnly: res.httpOnly,
sameSite: res.sameSite,
secure: res.secure,
storeId: res.storeId,
url: res.url
});
```
### Alternatives considered
_No response_
### Additional context
[https://www.electronjs.org/docs/latest/api/cookies#cookiessetdetails](https://www.electronjs.org/docs/latest/api/cookies#cookiessetdetails)
[https://developer.chrome.com/docs/extensions/reference/api/cookies?hl=id#method-set](https://developer.chrome.com/docs/extensions/reference/api/cookies?hl=id#method-set) | type: feature request | low | Minor |
2,659,485,616 | flutter | [go_router] Issue with recreating parent route when switches to/from the 1st tab | ### Steps to reproduce
1. Launch the sample
2. Open Flutter Inspector Tree View
3. Click on `Tab page 1` label (it will open another screen)
4. Click on the `SubPage 1` label
5. Everything seems ok. You have the following tree
- TabsPage
- SamplePage
- AnotherTabsPage
<img width="220" alt="Screenshot 2024-11-14 at 20 50 27" src="https://github.com/user-attachments/assets/72f51529-33aa-4d5b-94b8-2965f09d28bf">
6. Click on `Item 1` on the bottom bar (change the index) and you will see that the first screen `TabsPage` disappeared and you have the following stack:
- SamplePage
- AnotherTabsPage
<img width="220" alt="Screenshot 2024-11-14 at 20 50 38" src="https://github.com/user-attachments/assets/a7ef53de-e42f-4517-b269-ee9ee370dd30">
7. Click on `Item 0` and the navigation stack is restored and again you will see:
- TabsPage
- SamplePage
- AnotherTabsPage
### Expected results
The parent screen `SamplePage` should not be rebuilt and `TabsPage` should not disappear depending on the selected tab in the inner navigation bar.
### Actual results
The parent screen is recreated. The first screen in the stack disappears when you switch to any index except 0. Then it appears again when you go back to 0 index in the inner tab bar.
In my real application, this triggers logic in `initState`.
### Code sample
<details open><summary>Code sample</summary>
I created a sample repository: https://github.com/w3ggy/go_router_issue
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/537647c4-b84e-4bcf-afab-525167625a49
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale
en-RU)
• Flutter version 3.24.3 on channel stable at /Users/mac/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (9 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/mac/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code 2.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (6 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 •
android-arm64 • Android 15 (API 35) (emulator)
• iPhone 16 (mobile) • 2C1D3EBC-5674-437B-8257-0747CC02175C •
ios • com.apple.CoreSimulator.SimRuntime.iOS-18-0 (simulator)
• macOS (desktop) • macos •
darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad •
darwin • macOS 15.0.1 24A348 darwin-arm64
• Chrome (web) • chrome •
web-javascript • Google Chrome 130.0.6723.117
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Major |
2,659,503,572 | langchain | text_embedding/azureopenai: Link is broken on azureopenai text embeddings page. | ### URL
https://python.langchain.com/docs/integrations/text_embedding/azureopenai/azureopenai/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
After clicking the "AzureOpenAI" link in the provider box, it took me to https://python.langchain.com/docs/integrations/text_embedding/azureopenai/azureopenai/, which resulted in a "page not found" error. The link is shown in the screenshot below.

After following the https://python.langchain.com/docs/integrations/text_embedding/azureopenai/azureopenai/ link it took me to a dead page. Not sure if the doc was moved or something was renamed.

### Idea or request for content:
Can this please be looked into? I'm not sure if it's meant to be an actual link going somewhere; it could be that someone forgot to add the link for the relevant doc. | 🤖:docs | low | Critical |
2,659,584,170 | tensorflow | tf.range still miss some dtypes support | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
v2.18.0-rc2-4-g6550e4bd802 2.18.0
### Custom code
Yes
### OS platform and distribution
Google Colab
### Mobile device
No
### Python version
Google Colab default
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Same issue as in https://github.com/tensorflow/tensorflow/issues/72365 but now with unsigned dtypes
### Standalone code to reproduce the issue
```python
import tensorflow as tf
tf.range(10, delta=1, dtype='uint8')
```
### Relevant log output
```shell
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-1-7b6ccd0e0a16> in <cell line: 3>()
1 import tensorflow as tf
2
----> 3 tf.range(10, delta=1, dtype='uint8')
1 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6000 def raise_from_not_ok_status(e, name) -> NoReturn:
6001 e.message += (" name: " + str(name if name is not None else ""))
-> 6002 raise core._status_to_exception(e) from None # pylint: disable=protected-access
6003
6004
InvalidArgumentError: Value for attr 'Tidx' of uint8 is not in the list of allowed values: bfloat16, half, float, double, int8, int16, int32, int64, uint16, uint32
; NodeDef: {{node Range}}; Op<name=Range; signature=start:Tidx, limit:Tidx, delta:Tidx -> output:Tidx; attr=Tidx:type,default=DT_INT32,allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32, DT_INT64, DT_UINT16, DT_UINT32]> [Op:Range] name:
```
| stat:awaiting tensorflower,type:bug,comp:ops,TF 2.18 | medium | Critical |
2,659,593,225 | PowerToys | FancyZones specify zone size directly | ### Description of the new feature / enhancement
I have a single very large 4K screen. I would like to use FancyZones to configure regions of my screen to record with OBS Studio for demos etc. Ideally, I could specify this directly without having to use the drag bars which are not accurate for trying to get an exact 1920x1080 region for example.
I looked at custom-layouts.json but was very surprised by the rows-percentage and columns-percentage approach of specifying things there. I have no idea how to translate that back to specific sizes like 1080p.
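For what it's worth, the percentages can be computed from pixel sizes. Assuming the grid values in custom-layouts.json are fractions of the monitor expressed out of 10000 (an assumption worth verifying against a layout exported from the editor), a sketch of the conversion:

```python
# Convert a target zone size in pixels into FancyZones grid layout units
# (assumed to be 1/10000ths of the monitor dimension).
def to_layout_units(pixels: int, monitor_pixels: int) -> int:
    return round(pixels / monitor_pixels * 10000)

# A 1920x1080 zone on a 3840x2160 (4K) monitor:
print(to_layout_units(1920, 3840))  # 5000 -> half the width
print(to_layout_units(1080, 2160))  # 5000 -> half the height
```

So under that assumption, a 1920x1080 zone on a 4K monitor would be one cell of a 2x2 grid with rows-percentage and columns-percentage of [5000, 5000] — but a direct pixel-size input in the editor would still be far more convenient.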
### Scenario when this would be used?
Recording demos.
### Supporting information
I think the typical way for this scenario is multi monitor, but I only have 1 monitor and no room on my desk for another. | Needs-Triage | low | Minor |
2,659,599,837 | go | cmd/go: TestScript/mod_gonoproxy failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/mod_gonoproxy"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731281848736199873)):
=== RUN TestScript/mod_gonoproxy
=== PAUSE TestScript/mod_gonoproxy
=== CONT TestScript/mod_gonoproxy
script_test.go:139: 2024-11-14T16:10:31Z
script_test.go:141: $WORK=/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-2133929882/tmpdir1993393696/mod_gonoproxy112035207
go proxy: no archive example.com/cmd/a v1.0.0: file does not exist
script_test.go:163:
PATH=/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-2133929882/tmpdir1993393696/testbin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/cache/tools/bin:/Users/swarming/.swarming/w/ir/bbagent_utility_packages:/Users/swarming/.swarming/w/ir/bbagent_utility_packages/bin:/Users/swarming/.swarming/w/ir/cipd_bin_packages:/Users/swarming/.swarming/w/ir/cipd_bin_packages/bin:/Users/swarming/.swarming/w/ir/cipd_bin_packages/cpython3:/Users/swarming/.swarming/w/ir/cipd_bin_packages/cpython3/bin:/Users/swarming/.swarming/w/ir/cache/cipd_client:/Users/swarming/.swarming/w/ir/cache/cipd_client/bin:/Users/swarming/.swarming/cipd_cache/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
HOME=/no-home
CCACHE_DISABLE=1
...
> env GOPRIVATE='*/x'
> go get golang.org/x/text
[stderr]
go: downloading golang.org/x/text v0.20.0
go: upgraded golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c => v0.20.0
> go list -m all
[stderr]
go: unrecognized import path "golang.org/x/sync": reading https://golang.org/x/sync?go-get=1: 500 Internal Server Error
script_test.go:163: FAIL: testdata/script/mod_gonoproxy.txt:51: go list -m all: exit status 1
--- FAIL: TestScript/mod_gonoproxy (83.90s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,659,601,336 | svelte | The problem with `enumerated` and `Booleanish` | ### Describe the bug
There are several attributes whose values can be enumerated.
However, if the values are string representations of booleans, then the value type becomes `Booleanish`.
https://github.com/sveltejs/svelte/blob/320ebd24d8857570b0c180752765fb1580590367/packages/svelte/elements.d.ts#L730
This can lead to errors like the one in the reproduction:
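As a self-contained illustration of why the widened union is a problem (a sketch with simplified stand-in types, not Svelte's actual definitions):

```typescript
// Simplified stand-ins (assumption: Svelte's real definitions are more elaborate).
type Booleanish = boolean | "true" | "false";

// An enumerated attribute whose legal values happen to include "true"/"false"
// plus other keywords. Folding the string literals into Booleanish also
// admits real booleans, which is not what the HTML spec enumerates:
type AriaHasPopup = Booleanish | "menu" | "listbox" | "tree";

const fromString: AriaHasPopup = "menu"; // fine
const fromBool: AriaHasPopup = true;     // also accepted by the type checker

console.log(typeof fromString, typeof fromBool); // string boolean
```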
### Reproduction
https://svelte.dev/playground/98dbee7e667d49028d0d7a662fd58f4e?version=5.1.16
### Logs
_No response_
### System Info
```shell
-
```
### Severity
annoyance | needs discussion | low | Critical |
2,659,606,650 | PowerToys | Shortcut not working | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
I just tried the Alt+Space shortcut on my personal laptop, but it's not working. On my office laptop it works fine.
### ✔️ Expected Behavior
I expect the shortcut to open a search bar where we can run commands, search for apps, etc.
### ❌ Actual Behavior
Actually, nothing happens when I do this, or it opens a dialog box for the window I'm in, asking to close it.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,659,609,739 | PowerToys | Screen Ruler stay active while resizing windows | ### Description of the new feature / enhancement
I'd like the option to leave Screen Ruler active while resizing a window. This would be especially helpful when trying to get the content area of a web browser set to standard sizes like 1080p. As it is, I can see how close I am, deactivate Screen Ruler, adjust, and measure again. This is frustratingly hard to use when you are off by a pixel or two.
### Scenario when this would be used?
For demo recordings.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,659,612,952 | PowerToys | Screen ruler select window mode | ### Description of the new feature / enhancement
Similar to the Snipping Tool, it would be useful to have a mode that measures the overall size of an application window, rather than using the bounding box where you can be off by a few pixels.
### Scenario when this would be used?
Demo prep, design work, etc.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,659,623,355 | deno | LSP does not provide import completions for node modules packages with multiple export paths | Deno Version : 2.0.6
The LSP does not complete imports for non-index export paths.



| needs investigation,lsp | low | Minor |
2,659,641,561 | yt-dlp | [GloboArticle] AttributeError: 'NoneType' object has no attribute 'strip' | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Brazil
### Provide a description that is worded well enough to be understood
yt-dlp fails to download videos from https://g1.globo.com
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--verbose', 'https://g1.globo.com/politica/noticia/2024/11/14/moraes-cita-gabinete-do-odio-e-diz-que-explosoes-no-centro-de-brasilia-nao-sao-um-fato-isolado-do-contexto.ghtml']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.11.04 from yt-dlp/yt-dlp [197d0b03b] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-essentials_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[GloboArticle] Extracting URL: https://g1.globo.com/politica/noticia/2024/11/14/moraes-cita-gabinete-do-odio-e-diz-que-explosoes-no-centro-de-brasilia-nao-sao-um-fato-isolado-do-contexto.ghtml
[GloboArticle] moraes-cita-gabinete-do-odio-e-diz-que-explosoes-no-centro-de-brasilia-nao-sao-um-fato-isolado-do-contexto: Downloading webpage
WARNING: [GloboArticle] unable to extract OpenGraph title; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: 'NoneType' object has no attribute 'strip'
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1625, in wrapper
File "yt_dlp\YoutubeDL.py", line 1760, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\globo.py", line 241, in _real_extract
AttributeError: 'NoneType' object has no attribute 'strip'
```
| site-bug,patch-available | low | Critical |
2,659,662,518 | tauri | [bug] CSP ignored when running `cargo tauri dev` | ### Describe the bug
The CSP policies specified in the tauri config are not applied when running with `cargo tauri dev`. Running it with `cargo run` on the other hand does correctly apply this.
I checked this by setting `frame-src` and using an `iframe`.
### Reproduction
- Set CSP in config (e.g. `frame-src 'none'`)
- Run with `cargo tauri dev`
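For reference, the CSP under test sits in `tauri.conf.json` roughly like this (a sketch; the exact nesting follows my reading of the v2 config schema):

```json
{
  "app": {
    "security": {
      "csp": "default-src 'self'; frame-src 'none'"
    }
  }
}
```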
### Expected behavior
CSP is applied (e.g. iframe loading is prevented)
### Full `tauri info` output
```text
[✔] Environment
- OS: Fedora 41.0.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.3
✔ rsvg2: 2.59.1
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN)
- node: 22.11.0
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.1.0
[-] Plugins
- tauri-plugin-log 🦀: 2.0.2
[-] App
- build-type: bundle
- CSP: frame-src epub:
- frontendDist: ../src
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,659,683,828 | rust | candidate selection for normalization and trait goals disagree | ```rust
#![feature(discriminant_kind)]
use std::marker::DiscriminantKind;
fn trait_bound<T: DiscriminantKind>() {}
fn normalize<T: DiscriminantKind<Discriminant = u8>>() {}
fn foo<'a, 'b>()
where
&'b (): DiscriminantKind<Discriminant = u8>,
{
trait_bound::<&'a ()>();
}
fn bar<'a, 'b>()
where
&'b (): DiscriminantKind<Discriminant = u8>,
{
normalize::<&'a ()>();
}
```
`foo` compiles, `bar` does not:
```
error: lifetime may not live long enough
--> src/lib.rs:18:5
|
14 | fn bar<'a, 'b>()
| -- -- lifetime `'b` defined here
| |
| lifetime `'a` defined here
...
18 | normalize::<&'a ()>();
| ^^^^^^^^^^^^^^^^^^^ requires that `'b` must outlive `'a`
|
= help: consider adding the following bound: `'b: 'a`
error: lifetime may not live long enough
--> src/lib.rs:18:5
|
14 | fn bar<'a, 'b>()
| -- -- lifetime `'b` defined here
| |
| lifetime `'a` defined here
...
18 | normalize::<&'a ()>();
| ^^^^^^^^^^^^^^^^^^^ requires that `'a` must outlive `'b`
|
= help: consider adding the following bound: `'a: 'b`
help: `'b` and `'a` must be the same: replace one with the other
```
Candidate selection for the trait goal prefers the trivial builtin impl. Normalization instead prefers the where-bound. This is inconsistent and means that whether we use the associated items impacts whether a trait bound holds.
It impacts all trivial builtin traits with associated types. I don't think this affects stable right now, as either the trait is unstable or the builtin impls only exist for unnameable types. Nominating for a t-types vibe check.
| A-type-system,P-low,I-types-nominated,T-types | low | Critical |
2,659,686,669 | godot | AMD iGPU gets preferred over Intel dGPU (Linux `detect_prime`) | ### Tested versions
Found using `Godot_v4.3-stable_linux.x86_64` but I still see the issue in master
### System information
Debian trixie (6.11.4-1), Godot 4.3
### Issue description
On my system with 2 GPUs (Intel dGPU and AMD iGPU), Godot will prefer the AMD GPU over the Intel GPU based on the priority from the `vendor_map` here [1]. My iGPU is not connected to anything, so Godot fails to create a surface to render to and crashes.
There's currently no way to override this behaviour.
[1]: https://github.com/godotengine/godot/blob/master/platform/linuxbsd/wayland/detect_prime_egl.h#L68
### Steps to reproduce
- HW: Intel dGPU (Arc A750), AMD iGPU (AMD Ryzen 7 5700G with Radeon Graphics)
- Launch a project with `-v` argument
- Observe the following error
```
Found renderers:
Renderer 0: Mesa Intel(R) Arc(tm) A750 Graphics (DG2) with priority: 20
Renderer 1: AMD Radeon Graphics (radeonsi, renoir, LLVM 19.1.3, DRM 3.59, 6.11.4-amd64) with priority: 30
Using renderer: AMD Radeon Graphics (radeonsi, renoir, LLVM 19.1.3, DRM 3.59, 6.11.4-amd64)
Found discrete GPU, setting DRI_PRIME=1 to use it.
Note: Set DRI_PRIME=0 in the environment to disable Godot from using the discrete GPU.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
ERROR: Condition "ctxErrorOccurred || !gl_display.context->glx_context" is true. Returning: ERR_UNCONFIGURED
at: _create_context (platform/linuxbsd/x11/gl_manager_x11.cpp:183)
WARNING: Your video card drivers seem not to support the required OpenGL version, switching to OpenGLES.
at: DisplayServerX11 (platform/linuxbsd/x11/display_server_x11.cpp:6232)
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
Loaded EGL 1.5
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
ERROR: Can't create an EGL context. Error code: 12291
at: _gldisplay_create_context (drivers/egl/egl_manager.cpp:196)
ERROR: Method/function failed. Returning: -1
at: _get_gldisplay_id (drivers/egl/egl_manager.cpp:104)
ERROR: Condition "gldisplay_id < 0" is true. Returning: ERR_CANT_CREATE
at: display_get_native_visual_id (drivers/egl/egl_manager.cpp:212)
ERROR: Condition "number_of_visuals <= 0" is true. Returning: INVALID_WINDOW_ID
at: _create_window (platform/linuxbsd/x11/display_server_x11.cpp:5475)
Output 7f0f280035d0 done.
Output 7f0f280035d0 scale 1
Output 7f0f280037e0 scale 1
Output 7f0f280037e0 done.
Loading cursor theme "Adwaita" size 24.
Failed loading cursor: crossed_circle
libspeechd.so.2: cannot open shared object file: No such file or directory
Text-to-Speech: Cannot load Speech Dispatcher library!
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
Loaded EGL 1.5
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
ERROR: Can't create an EGL context. Error code: 12291
at: _gldisplay_create_context (drivers/egl/egl_manager.cpp:196)
ERROR: Method/function failed. Returning: -1
at: _get_gldisplay_id (drivers/egl/egl_manager.cpp:104)
WARNING: Your video card drivers seem not to support the required OpenGL version, switching to OpenGLES.
at: DisplayServerWayland (platform/linuxbsd/wayland/display_server_wayland.cpp:1430)
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
Loaded EGL 1.5
Showing window.
Window has no output associated, returning buffer scale of 1.
libdecor frame on configure rect [P: (0, 0), S: (1280, 720)]
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
radeonsi: can't create eop_bug_scratch
radeonsi: Failed to create a context.
ERROR: Can't create an EGL context. Error code: 12291
at: _gldisplay_create_context (drivers/egl/egl_manager.cpp:196)
ERROR: Method/function failed. Returning: -1
at: _get_gldisplay_id (drivers/egl/egl_manager.cpp:104)
ERROR: Condition "gldisplay_id < 0" is true. Returning: ERR_CANT_CREATE
at: window_create (drivers/egl/egl_manager.cpp:227)
ERROR: Can't show a GLES3 window.
at: _show_window (platform/linuxbsd/wayland/display_server_wayland.cpp:178)
PortalDesktop: DBus 1.14.10 detected.
ScreenSaver: DBus 1.14.10 detected.
Using "default" pen tablet driver...
ERROR: Error initializing GLAD.
at: RasterizerGLES3 (drivers/gles3/rasterizer_gles3.cpp:270)
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /lib/x86_64-linux-gnu/libc.so.6(+0x3fd20) [0x7f101ac95d20] (??:0)
[2] ../Godot_v4.3-stable_linux.x86_64() [0x384d6db] (??:0)
[3] ../Godot_v4.3-stable_linux.x86_64() [0x118bd95] (??:0)
[4] ../Godot_v4.3-stable_linux.x86_64() [0x3888855] (??:0)
[5] ../Godot_v4.3-stable_linux.x86_64() [0x47bc47b] (??:0)
[6] ../Godot_v4.3-stable_linux.x86_64() [0x4718aa2] (??:0)
[7] ../Godot_v4.3-stable_linux.x86_64() [0x4200b5] (??:0)
[8] /lib/x86_64-linux-gnu/libc.so.6(+0x29d68) [0x7f101ac7fd68] (??:0)
[9] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7f101ac7fe25] (??:0)
[10] ../Godot_v4.3-stable_linux.x86_64() [0x43d44a] (??:0)
-- END OF BACKTRACE --
================================================================
```
### Minimal reproduction project (MRP)
I've only tested with a single project but I'm sure this would occur on any project. | bug,platform:linuxbsd,topic:rendering | low | Critical |
2,659,687,640 | PowerToys | After I changed my keybinds I wasn't able to change them back, so now I can't use the keys U, H, J, K and I need help to fix it | ### Microsoft PowerToys version
v0.86.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
After I changed my keybinds I wasn't able to change them back, so now I can't use the keys U, H, J, K, and I need help to fix it.
### ✔️ Expected Behavior
For the remapping to be reverted after I turned it off, but it wasn't.
### ❌ Actual Behavior
I can no longer use U, H, J, or K on my keyboard, permanently.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,659,797,282 | godot | CharacterBody3D with `floor_constant_speed` still slightly slower when going uphill | ### Tested versions
Reproducible in:
- v4.3.stable.official [77dcf97d8]
- v4.4.dev4.official [36e6207bb]
### System information
Godot v4.3.stable - Debian GNU/Linux trixie/sid trixie - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7600 (RADV NAVI33) - AMD Ryzen 5 7600 6-Core Processor (12 Threads)
### Issue description
`CharacterBody3D`s which have `floor_constant_speed` switched on don't quite maintain their ground speed when going uphill.
This causes bodies which are (in theory) moving in a circle to gradually slide down the hill. The magnitude of the discrepancy seems to vary depending on the collision shape, thus characters which should follow the same path also gradually separate.

This is the repro scene, with four constant-speed `CharacterBody3D`s with different collision shapes. The numbers in the top-left are the bodies' speeds (difference in position frame to frame). The `SeparationRay3D` with `slide_on_slope=false` is the only one which loops back to the start position (but follows a different path because it's not affected by the slope).
https://github.com/user-attachments/assets/840cfff4-84f7-4a76-bbd7-f651bc4b120b
You can see that the speeds are kept at around 4.0 when going downhill, but uphill they're consistently 3.99-something, and even dip to 3.97-something.
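To put the discrepancy in perspective, a back-of-envelope estimate of the accumulated drift per lap (the time figure is an assumption, not measured from the engine):

```python
# Rough estimate of how far a body falls short of its nominal path per lap
# when the uphill portion runs slightly slow (figures are illustrative).
nominal_speed = 4.0   # units/s requested via floor_constant_speed
uphill_speed = 3.97   # worst uphill speed observed in the repro
uphill_time = 5.0     # assumed seconds per lap spent going uphill

shortfall_per_lap = (nominal_speed - uphill_speed) * uphill_time
print(f"{shortfall_per_lap:.2f} units short per lap")  # 0.15 units short per lap
```

Even a few hundredths of a unit per second compounds into the visible downhill slide over many laps.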
### Steps to reproduce
1. Have a `CharacterBody3D` with `floor_constant_speed = true`
2. Watch it move in a circle on a slope
3. :eyes:
### Minimal reproduction project (MRP)
Self-contained scene:
[floor_constant_speed_isnt.zip](https://github.com/user-attachments/files/17756566/floor_constant_speed_isnt.zip)
| bug,topic:physics,needs testing,topic:3d | low | Major |
2,659,801,175 | react-native | LayoutAnimation.configureNext() broken in 0.76 | ### Description
Initially opened as an Expo bug, but it seems to be a React Native 0.76 bug.
Link to initial issue for context: https://github.com/expo/expo/issues/32868
When showing or hiding content, `LayoutAnimation.configureNext(LayoutAnimation.Presets.easeInEaseOut);` usually applies the animation automatically to the next layout change (like opening and closing an accordion).
With React Native 0.76 and the New Architecture, this no longer works.
### Steps to reproduce
1- Start a new react native project (with or without expo)
2- Create an Accordion or Collapsible component like https://snack.expo.dev/@baltagih/b638ce
3- LayoutAnimation.configureNext() does not apply the animations
### React Native Version
0.76.1
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
In expo managed project so some info might not display, like newArchEnabled.
info Fetching system and libraries information...
System:
OS: macOS 15.1
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 3.46 GB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.18.0
path: ~/.nvm/versions/node/v18.18.0/bin/node
Yarn:
version: 1.22.15
path: ~/.yarn/bin/yarn
npm:
version: 9.8.1
path: ~/.nvm/versions/node/v18.18.0/bin/npm
Watchman:
version: 2024.03.25.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/mafiamalaria/.rvm/gems/ruby-3.0.5/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK:
API Levels:
- "26"
- "27"
- "28"
- "29"
- "30"
- "31"
Build Tools:
- 27.0.3
- 28.0.3
- 29.0.2
- 29.0.3
- 30.0.2
- 30.0.3
- 31.0.0
- 35.0.0
System Images:
- android-30 | Google APIs Intel x86 Atom
- android-30 | Google Play Intel x86 Atom
- android-31 | Google APIs Intel x86 Atom_64
- android-31 | Google Play Intel x86 Atom_64
- android-32 | Google APIs Intel x86 Atom_64
- android-Tiramisu | Google APIs Intel x86 Atom_64
Android NDK: Not Found
IDEs:
Android Studio: EAP AI-242.21829.142.2422.12329062 AI-242.21829.142.2422.12329062
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 1.8.0_292
path: /usr/bin/javac
Ruby:
version: 3.0.5
path: /Users/mafiamalaria/.rvm/rubies/ruby-3.0.5/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.2
wanted: latest
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
No stacktrace, animation never happens, no error thrown.
```
### Reproducer
https://snack.expo.dev/@baltagih/b638ce
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,API: LayoutAnimation,Type: New Architecture | low | Critical |
2,659,825,165 | flutter | [ios][platform_view][admob] Recycle admob banners | ### Use case
A while back we compared the performance between Flutter and Native ad banners in a scrollable list. On native, it is very easy to recycle the ad banners like this:
```
override func viewDidLoad() {
  super.viewDidLoad()
  cachedBanners = (0..<10).map { _ in
    let banner = GADBannerView()
    banner.load(GADRequest())
    return banner
  }
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
  let banner = cachedBanners[indexPath.row]
  ...
}
```
Side note: in this native implementation, I load 10 requests upfront in `viewDidLoad`, which isn't ideal. It'd be great if we could improve it by loading them lazily.
### Proposal
Recycle ad banners in Flutter.
Potential work can be either (pending investigation):
1. No change (maybe we already support it?). Though we will need to document how to do it.
2. Change to AdMob plugin
3. Change to Flutter's platform view.
| platform-ios,a: platform-views,P2,team-ios,triaged-ios | low | Major |
2,659,894,443 | go | net/http: TestServerKeepAliveAfterWriteError/h1 failures | ```
#!watchflakes
default <- pkg == "net/http" && test == "TestServerKeepAliveAfterWriteError/h1"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731270664442725713)):
=== RUN TestServerKeepAliveAfterWriteError/h1
serve_test.go:4658: saw 2 unique client addresses; want 3
--- FAIL: TestServerKeepAliveAfterWriteError/h1 (1.51s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,659,895,139 | ollama | Only CPU is used after rebooting | [I found someone wrote a thread describing only cpu is used after rebooting in windows ](https://github.com/ollama/ollama/issues/4984#issue-2347076913)
I also hit a similar problem on Ubuntu.
I'm using the latest version (0.4.1). I suspect this bug occurs because the ollama service starts before the GPUs finish initializing. So I made an **ad-hoc** workaround: instead of the service, I use a script that delays `ollama serve`.
```bash
# ollama_run
echo "Delayed Ollama Runner Start, it delays 10 sec."
sleep 10
ollama serve
```
Then I have this called from *Ubuntu Startup Applications Preferences*. The delay itself may not even be needed, since the script is only invoked after GPU initialization has finished anyway.
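An alternative to a login-session script would be a systemd drop-in that delays the stock service itself (a sketch, assuming the package installs a unit named `ollama.service`; created via `sudo systemctl edit ollama`, then `sudo systemctl daemon-reload`):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
ExecStartPre=/bin/sleep 10
```

This would also avoid the model-directory change described below, since the service would keep running as the `ollama` user.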

**WARNING**: After starting ollama by calling `ollama serve` directly, the model storage directory changes to `~/.ollama/models` (I don't know why). So previously downloaded models are not loaded. In that case, you can copy or move the whole models folder from `/usr/share/ollama/.ollama` to `~/.ollama`.
_Originally posted by @3DAlgoLab in https://github.com/ollama/ollama/issues/4984#issuecomment-2477251430_
| linux,nvidia | low | Critical |