| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,511,299,582 | godot | Memory leak in test runner when using [Editor] test tag | ### Tested versions
4.4 custom build, triggered by online checks
### System information
N/A
### Issue description
When running the PR checks for the unit tests in https://github.com/godotengine/godot/pull/96640, `Linux / Editor with clang sanitizers (target=editor, tests=yes, dev_build=yes, use_asan=yes, use_ubsan=yes, use_llvm=yes, linker=lld)` fails with a memory leak, while most other checks succeed:
```
...
WARNING: ObjectDB instances leaked at exit (run with --verbose for details).
at: cleanup (core/object/object.cpp:2327)
=================================================================
==5545==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 464 byte(s) in 1 object(s) allocated from:
#0 0xc17cf9d in malloc (/home/runner/work/godot/godot/bin/godot.linuxbsd.editor.dev.x86_64.llvm.san+0xc17cf9d)
#1 0x2d44c34e in Memory::alloc_static(unsigned long, bool) /home/runner/work/godot/godot/core/os/memory.cpp:108:14
#2 0x2d44c1a4 in operator new(unsigned long, char const*) /home/runner/work/godot/godot/core/os/memory.cpp:41:9
#3 0x189fefd3 in EditorPaths::create() /home/runner/work/godot/godot/editor/editor_paths.cpp:102:2
#4 0xe94ec48 in GodotTestCaseListener::test_case_start(doctest::TestCaseData const&) /home/runner/work/godot/godot/tests/test_main.cpp:318:5
#5 0xe986525 in doctest::Context::run() /home/runner/work/godot/godot/./thirdparty/doctest/doctest.h:6982:13
#6 0xddafea4 in test_main(int, char**) /home/runner/work/godot/godot/tests/test_main.cpp:244:22
#7 0xc5ee079 in Main::test_entrypoint(int, char**, bool&) /home/runner/work/godot/godot/main/main.cpp:870:17
#8 0xc1af7cc in main /home/runner/work/godot/godot/platform/linuxbsd/godot_linuxbsd.cpp:68:2
#9 0x7fab04fed082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
...
```
In `test_main.cpp`, an `EditorPaths` and `EditorSettings` are created:
https://github.com/godotengine/godot/blob/5675c76461e197d3929a1142cfb84ab1a76ac9dd/tests/test_main.cpp#L315-L319
But only the `EditorSettings` is destroyed:
https://github.com/godotengine/godot/blob/5675c76461e197d3929a1142cfb84ab1a76ac9dd/tests/test_main.cpp#L347-L349
Prior to my PR, it seems like the only place this `[Editor]` tag was used was here. I'm not sure why checks weren't failing for the GDScript test, though.
https://github.com/godotengine/godot/blob/5675c76461e197d3929a1142cfb84ab1a76ac9dd/modules/gdscript/tests/test_completion.h#L223
Adding an `EditorPaths::free();` should fix this.
### Steps to reproduce
Write a unit test that uses the `[Editor]` tag, then commit and open a pull request with the changes. The CI check `Linux / Editor with clang sanitizers` should fail with a memory leak.
### Minimal reproduction project (MRP)
See https://github.com/godotengine/godot/pull/96672. | bug,topic:editor,needs testing,topic:tests | low | Critical |
2,511,314,601 | flutter | Feature Request: Switch to pub workspace | We had an engine roll failure this morning, followed by a gnarly amount of work to get it started again. As you can see in this commit, there are several (41?!) `pubspec.yaml` files that had to be touched to roll packages forward:
https://github.com/flutter/flutter/commit/d7a658d70579b55c235096ed4303505e3c77805c
The engine has already switched over to using pub workspaces, and it's been pretty smooth sailing.
| team,P3,c: tech-debt,team-tool,triaged-tool | low | Critical |
2,511,315,036 | flutter | [go_router] `context.push` --> `context.pop` --> `context.push` causes pages to be presented twice OR pop is not removing them | ### Steps to reproduce
This occurs with:
go_router version 14.2.7
Flutter 3.24.2 or 3.25.x (any version of 3.25)
If you programmatically `.push` --> `.pop` --> `.push` (a second time), then the second time you do a push of a screen, **_two_** screens are pushed onto the stack / presented to the user at the same time.
Searching through other issues, I've seen people discuss the builder running twice (https://github.com/flutter/flutter/issues/153498). However, I'm not sure that is the issue here. I am wondering if `.pop()` isn't removing the screen from the stack even though it is popped out of view; in that situation, when you ask it to push a new one, it rebuilds the original page as well to 'restore' the state of the page that was removed from view.
I'm not sure about this idea, but nevertheless you can run the code below to see the problem first-hand and confirm that it's real and occurs systematically.
I've put in a console `debugPrint` to show which screens are going through the builder, so you can get a sense of that as well.
To be clear, this behavior **ONLY** occurs if you programmatically `push`, then `pop`, then `push` again. If you close a screen using the back button on the top left of the screen, or if you drag to close a screen (drag left to right), then the screen is properly popped and this bug won't occur.
I've attached code below as a demonstrator, but to reproduce the problem, here are the steps:
## **WORKING/PROPER OPERATION:**
1. Tap `purple` container to open `Screen B`
2. Either swipe the screen closed or tap the text "Press Here to Close me out!" to close the screen
3. Tap `blue` container to open `Screen C`
4. Either swipe the screen closed or tap the text "Press Here to Close me out!" to close the screen
## **BUGGED OPERATION**
1. Tap `purple` container to open `Screen B`
2. **Tap the house icon on the bottom navigation bar to programmatically close the screen with `context.pop()`** (importantly, DO NOT tap on "Press here to close me"; tap the house tab-bar icon, which I've set up to run `context.pop`). The screen should now close (but I theorize the screen still lives in the stack somehow)
3. Tap `blue` container to open `Screen C`
4. Either swipe the screen closed or tap the text "Press Here to Close me out!" to close the screen
5. You will notice that `Screen B` is underneath `Screen C` <--- here is where you can see the bug.
6. If at this point you try to close out the underlying second screen, the app will then hang/freeze
### **Notes**
In bugged operation, you can observe in the console that when Step 3 is performed, the builders for Screen B and Screen C both run back to back, as the debugPrints `***** I AM BUILDING SCREEN B *****` and `***** I AM BUILDING SCREEN C *****` both appear in the console
### **Extra notes:**
- If I were to use `context.go` then this bug does not occur. However, in my implementation I may have to get to `Screen B` or `Screen C` from any number of places, and I would have to expand my route list massively and almost endlessly to account for all the points at which those screens could be opened. Using `.push` is ideal to allow my users to enter those screens at any time they like
- I've read one suggestion about turning Screen B/C into `const` widgets to avoid 'rebuilding' them. Turning them into `const` is not viable since I need to feed a bunch of variables to them. In this case, for the demonstrator, I put a title variable in the builder to demonstrate the concept. Those vars come in as params or `extra` via `context.push`, or from deep-link URL patterns
### Expected results
Two screens should not be opened
### Actual results
Two screens are opened
### Code sample
<details open><summary>Code sample</summary>
**main.dart**
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

import 'main_router.dart';

final GlobalKey<NavigatorState> rootNavigatorKey =
    GlobalKey<NavigatorState>(debugLabel: 'root');

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp.router(
      title: 'Flutter Demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      routerConfig: router,
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Column(
              children: [
                const Text("Hi i'm screen A (home screen)"),
                Container(height: 20),
                GestureDetector(
                  onTap: () {
                    context.pushNamed('screenB');
                  },
                  child: Container(
                    color: Colors.purple,
                    child: const Text(
                      'Press Here to goto Screen B',
                    ),
                  ),
                ),
                Container(height: 20),
                GestureDetector(
                  onTap: () {
                    context.pushNamed('screenC');
                  },
                  child: Container(
                    color: Colors.blue,
                    child: const Text(
                      'Press Here to goto Screen C',
                    ),
                  ),
                ),
              ],
            ),
          ],
        ),
      ),
    );
  }
}

class ScreenB extends StatefulWidget {
  const ScreenB({super.key, required this.title});

  final String title;

  @override
  State<ScreenB> createState() => _ScreenBState();
}

class _ScreenBState extends State<ScreenB> {
  @override
  Widget build(BuildContext context) {
    return Material(
      child: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            const Text('im screen B!'),
            Text('my title is: ${widget.title}'),
            Container(height: 20),
            GestureDetector(
              onTap: () {
                context.pop();
              },
              child: const Text(
                'Press Here to Close me out!',
              ),
            ),
          ],
        ),
      ),
    );
  }
}

class ScreenC extends StatefulWidget {
  const ScreenC({super.key, required this.title});

  final String title;

  @override
  State<ScreenC> createState() => _ScreenCState();
}

class _ScreenCState extends State<ScreenC> {
  @override
  Widget build(BuildContext context) {
    return Material(
      child: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            const Text('im screen C!'),
            Text('my title is: ${widget.title}'),
            Container(height: 20),
            GestureDetector(
              onTap: () {
                context.pop();
              },
              child: const Text(
                'Press Here to Close me out!',
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```
**main_router.dart**
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

import 'main.dart';
import 'main_router_scaffold.dart';

final GoRouter router = GoRouter(
  debugLogDiagnostics: true,
  initialLocation: '/',
  navigatorKey: rootNavigatorKey,
  routes: <RouteBase>[
    StatefulShellRoute(
      builder: (context, state, navigationShell) {
        return navigationShell;
      },
      navigatorContainerBuilder: (BuildContext context,
          StatefulNavigationShell navigationShell,
          List<Widget> children) {
        return ScaffoldWithNavBarWithState(
            navigationShell: navigationShell, children: children);
      },
      branches: <StatefulShellBranch>[
        StatefulShellBranch(
          routes: [
            GoRoute(
              path: '/',
              builder: (BuildContext context, GoRouterState state) {
                var theTitle = "Hello friends";
                return MyHomePage(title: theTitle);
              },
              routes: <RouteBase>[
                GoRoute(
                  path: 'screenb.php',
                  name: 'screenB',
                  builder: (BuildContext context, GoRouterState state) {
                    debugPrint('***** I AM BUILDING SCREEN B *****');
                    var theTitle = "Hi this is Screen B";
                    return ScreenB(title: theTitle);
                  },
                ),
                GoRoute(
                  path: 'screenc.php',
                  name: 'screenC',
                  builder: (BuildContext context, GoRouterState state) {
                    debugPrint('***** I AM BUILDING SCREEN C *****');
                    var theTitle = "Hi this is Screen C";
                    return ScreenC(title: theTitle);
                  },
                ),
              ],
            ),
          ],
        ),
      ],
    ),
  ],
);
```
**main_router_scaffold.dart**
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

class ScaffoldWithNavBarWithState extends StatefulWidget {
  const ScaffoldWithNavBarWithState({
    Key? key,
    required this.children,
    required this.navigationShell,
  }) : super(key: key ?? const ValueKey<String>('ScaffoldWithNavBar'));

  /// Body, i.e. the index stack
  //final Widget body;
  final List<Widget> children;

  /// The navigation shell and container for the branch Navigators.
  final StatefulNavigationShell navigationShell;

  @override
  State<ScaffoldWithNavBarWithState> createState() =>
      _ScaffoldWithNavBarWithStateState();
}

class _ScaffoldWithNavBarWithStateState
    extends State<ScaffoldWithNavBarWithState> {
  @override
  void initState() {
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return PopScope(
      onPopInvoked: (popped) async {
        //debugPrint('what!');
      },
      child: CupertinoTabScaffold(
        resizeToAvoidBottomInset: true,
        //controller: bottomTabBarController,
        restorationId: 'cupertino_tab_scaffold',
        key: const ValueKey('tab_scaffold'),
        tabBar: CupertinoTabBar(
          items: const <BottomNavigationBarItem>[
            BottomNavigationBarItem(icon: Icon(Icons.house), label: 'First Screen'),
            BottomNavigationBarItem(icon: Icon(Icons.house), label: 'Second Screen'),
          ],
          currentIndex: widget.navigationShell.currentIndex,
          onTap: (int tappedIndex) {
            context.pop();
            return widget.navigationShell.goBranch(
              tappedIndex
              //navigatorKey: _bottomNavBranches[tappedIndex].navigatorKey
            );
          },
        ),
        tabBuilder: (BuildContext context, int index) {
          return widget.children[index];
        },
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Flutter 3.24.2 • channel stable • https://github.com/flutter/flutter.git
Framework • revision 4cf269e36d (3 days ago) • 2024-09-03 14:30:00 -0700
Engine • revision a6bd3f1de1
Tools • Dart 3.5.2 • DevTools 2.37.2
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.25 | low | Critical |
2,511,316,161 | pytorch | Models that have non-tensor elements in state_dict, are not ONNX-exportable (and not JIT-traceable) | ### 🐛 Describe the bug
I had problems exporting a Transformer-based model to ONNX that had some `_extra_state` keys with `None` values in the state_dict. Export would break on a `detach()` call. This simple fix remedied the situation:
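For reference, the guard the patch adds is just "skip `None`-valued entries". A minimal sketch of that filtering logic on a plain dict (no PyTorch required; the key names are illustrative, not from the real model):

```python
def strip_none_entries(state_dict):
    """Return a copy of state_dict without None values.

    Mirrors the guard added inside torch.jit._trace: entries like
    'layer._extra_state' that hold None would otherwise crash on
    .detach() during tracing/ONNX export.
    """
    return type(state_dict)(
        (k, v) for k, v in state_dict.items() if v is not None
    )

# Illustrative state_dict; a real one maps names to tensors.
sd = {
    "encoder.weight": [1.0, 2.0],
    "encoder._extra_state": None,  # the problematic entry
    "decoder.weight": [3.0],
}
clean = strip_none_entries(sd)
print(sorted(clean))  # -> ['decoder.weight', 'encoder.weight']
```

The same filter could also be useful for sanitizing a checkpoint before re-saving it, independent of the in-tree fix. The actual patch applied to `torch/jit/_trace.py` follows.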
```
--- /usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py 2024-09-05 23:35:21.651452023 +0000
+++ /usr/local/lib/python3.10/dist-packages/torch/jit/_trace.bak 2024-07-09 00:37:50.000000000 +0000
@@ -74,7 +74,7 @@
filtered_dict = type(state_dict)()
seen_ids: Set[int] = set()
for k, v in state_dict.items():
- if id(v) in seen_ids or v is None:
+ if id(v) in seen_ids:
continue
seen_ids.add(id(v))
if keep_vars:
```
### Versions
Pytorch nightly | module: onnx,triaged | low | Critical |
2,511,356,629 | yt-dlp | [Abema] When a video is downloaded, it displays the message "お使いのデバイスでは視聴できないコンテンツです" and the actual video cannot be downloaded. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Japan
### Provide a description that is worded well enough to be understood
When I try to download a video from the URL below, the video with the message "お使いのデバイスでは視聴できないコンテンツです" is downloaded and I cannot download the actual video.
https://abema.tv/video/episode/199-30_s3_p1
The message means "This content cannot be viewed on your device."
When I open the URL in my browser, the video will start playing normally.
However, there are some URLs that can be downloaded successfully.
This issue only occurs with certain videos, such as the URL in question.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://abema.tv/video/episode/199-30_s3_p1', '-o', '.\\tmp.mp4']
[debug] Encodings: locale cp932, fs utf-8, pref cp932, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[AbemaTV] Extracting URL: https://abema.tv/video/episode/199-30_s3_p1
[AbemaTV] Authorizing
[AbemaTV] 199-30_s3_p1: Downloading webpage
[AbemaTV] 199-30_s3_p1: Checking playability
[AbemaTV] 199-30_s3_p1: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: best/bestvideo+bestaudio
[info] 199-30_s3_p1: Downloading 1 format(s): 5300
[debug] Invoking hlsnative downloader on "https://vod-abematv.akamaized.net/program/199-30_s3_p1/1080/playlist.m3u8?aver=1"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 286
[download] Destination: tmp.mp4
[debug] File locking is not supported. Proceeding without locking
[download] 100% of 11.82MiB in 00:00:05 at 2.10MiB/s
WARNING: 199-30_s3_p1: Possible MPEG-TS in MP4 container or malformed AAC timestamps. Install ffmpeg to fix this automatically
```
| DRM,geo-blocked,site-bug,triage | low | Critical |
2,511,360,381 | pytorch | [Inductor][SDPA] `test_sdpa_rewriter_12` broken on A2/A16 GPU | ### 🐛 Describe the bug
One of the pattern-matching tests fails on A2/A16 and likely A10 (untested). Not particularly urgent but I would like to learn more about how to expose which part of Inductor is failing to pattern match on certain cases. Since I suspect this is using pre-compiled FA kernels, my guess is there is a hidden constraint that isn't met somewhere (not in the pattern definition itself) or there is a runtime failure that is caught and suppressed.
For working configs it seems to match to pattern 11 (not 12):
https://github.com/pytorch/pytorch/blob/a6b9d444fbd2ddcf8481ea6adbac08de5443f1fe/torch/_inductor/fx_passes/fuse_attention.py#L277
Is there a way to get the Inductor logs that would be relevant here?
CC @drisspg
### Versions
~week old main branch
cc @ptrblck @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang @drisspg @mikaylagawarecki | module: cuda,triaged,oncall: pt2,module: inductor,module: sdpa | low | Critical |
2,511,380,616 | pytorch | DataLoader + IterableDataset held up by slowest worker!? | ### 🐛 Describe the bug
Greetings!
I am using a PT DataLoader with an IterableDataset and num_workers > 1 to speed up the slow preparation of individual training samples. My expectation was that the DataLoader would assemble batches from any worker's samples as soon as they become available. Instead I discovered that the DataLoader waits until each worker in turn produces an entire batch of data. In a situation where samples take a variable amount of time to produce (e.g. load) the slowest worker process holds up training even if other workers are ready to deliver samples. This feels like a bug to me. Or is it a feature?!
```python
import time

import torch


class TestDset(torch.utils.data.IterableDataset):
    def __init__(self, Data):
        super().__init__()
        self.Data = Data

    def __iter__(self):
        self.data = self.Data.copy()
        self.worker = torch.utils.data.get_worker_info()
        if self.worker is not None:
            self.data = self.data[self.worker.id::self.worker.num_workers]
        for n in self.data:
            time.sleep(self.worker.id / 2)
            yield n


Dload = torch.utils.data.DataLoader(TestDset(list(range(32))), batch_size=8, shuffle=None, num_workers=2)
for n in Dload:
    print(n)
```
+++++ Output +++++
```
tensor([ 0, 2, 4, 6, 8, 10, 12, 14])
... delay ...
tensor([ 1, 3, 5, 7, 9, 11, 13, 15])
tensor([16, 18, 20, 22, 24, 26, 28, 30])
... delay ...
tensor([17, 19, 21, 23, 25, 27, 29, 31])
```
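The round-robin above appears to be how `DataLoader` consumes `IterableDataset` workers: each worker yields whole batches, and the main process takes batches from each worker in turn. The behavior I expected can be sketched in pure Python (threads standing in for worker processes, a shared queue instead of per-worker batch queues; no PyTorch needed):

```python
import queue
import threading
import time


def worker(samples, out_q, delay):
    # Each worker pushes individual samples into ONE shared queue
    # as soon as they are ready, instead of whole batches.
    for s in samples:
        time.sleep(delay)
        out_q.put(s)


def ready_first_batches(shards, delays, batch_size):
    """Assemble batches from whichever worker delivers first.

    Leftover samples that don't fill a batch are dropped
    (similar to drop_last=True).
    """
    out_q = queue.Queue()
    threads = [
        threading.Thread(target=worker, args=(shard, out_q, d))
        for shard, d in zip(shards, delays)
    ]
    for t in threads:
        t.start()
    total = sum(len(s) for s in shards)
    batch, batches = [], []
    for _ in range(total):
        batch.append(out_q.get())  # take from whichever worker is ready
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []
    for t in threads:
        t.join()
    return batches


# Fast worker (no delay) vs. slow worker: early batches are dominated
# by the fast worker's samples instead of waiting on the slow one.
shards = [list(range(0, 8, 2)), list(range(1, 8, 2))]
print(ready_first_batches(shards, delays=[0.0, 0.05], batch_size=4))
```

With this scheme, a slow worker only delays the samples it owns, not every batch in rotation.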
### Versions
AWS Sagemaker Linux
Python 3.11
PyTorch 2.3.0
cc @andrewkho @gokulavasan @SsnL @VitalyFedyunin @dzhulgakov | module: dataloader,triaged | low | Critical |
2,511,383,906 | pytorch | [Dynamo] Eager fallback casued by graph breaks in module hooks | ### 🐛 Describe the bug
In DeepSpeed workloads, ZeRO parameter offload is implemented via module hooks. We find that, under the torch.compile scenario, if any graph breaks happen in the pre/post hooks of a module, the whole module falls back to eager mode. Is this expected? It's currently hard to fix all the graph breaks in the hooks; is it possible to keep the module in the FX graph (and only fall back the hooks)?
### Error logs
check the dynamo/inductor logs.
### Minified repro
```python
import os

os.environ["TORCH_COMPILE_DEBUG"] = "1"
os.environ["TORCHDYNAMO_VERBOSE"] = "1"
os.environ["TORCH_LOGS"] = "+dynamo,guards,graph_code"

import torch


class MyModule(torch.nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.fc0 = torch.nn.Linear(256, 256, bias=False)
        self.fc1 = torch.nn.Linear(256, 256, bias=False)
        self.dropout = torch.nn.Dropout(0.5)

    def forward(self, data, residual):
        output = residual + self.fc1(self.fc0(self.dropout(data))) * 0.5
        return output


my_mod = MyModule()


@torch._dynamo.disable()
def fn(t):
    return t.sin()


def collect(m, i, o):
    tmp = fn(o)
    return tmp.add(1)


my_mod.register_forward_hook(collect)
for child in my_mod.children():
    child.register_forward_hook(collect)

my_mod = torch.compile(my_mod, fullgraph=False)
atensor = torch.rand(256, 256, dtype=torch.float)
residual = torch.rand(256, 256, dtype=torch.float)
out = my_mod(atensor, residual)
```
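One mitigation, until hooks can fall back independently of the module body, is to keep the hook itself trivial and defer the graph-breaking work until after the hooked call returns. A pure-Python sketch of that "record now, run later" pattern (hypothetical helper names; this is a generic design sketch, not actual Dynamo behavior):

```python
class DeferredHooks:
    """Collects hook callbacks during a forward pass and runs them
    afterwards, so the hooked call itself stays free of
    graph-breaking side effects."""

    def __init__(self):
        self.pending = []

    def hook(self, fn, *args):
        # Called from inside the hooked region: just record, don't execute.
        self.pending.append((fn, args))

    def flush(self):
        # Called outside the hooked region: run the deferred work.
        results = [fn(*args) for fn, args in self.pending]
        self.pending.clear()
        return results


hooks = DeferredHooks()


def forward(x):
    y = x * 2  # stand-in for the compiled module body
    hooks.hook(print, "post-hook ran for", y)  # deferred side effect
    return y


out = forward(3)
hooks.flush()  # side effects happen here, outside the "compiled" region
```

Whether this is practical for ZeRO offload depends on whether the hook's work (fetching/releasing parameters) can actually be deferred, which is an open question for the maintainers.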
### Versions
```
PyTorch version: 2.5.0.dev20240729
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.3.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.0
Libc version: N/A
Python version: 3.8.18 (default, Sep 11 2023, 08:17:16) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] onnx==1.16.1
[pip3] onnx2torch==1.5.14
[pip3] optree==0.11.0
[pip3] pytorch-lightning==1.7.7
[pip3] torch==2.5.0a0+git1f961ad
[pip3] torchaudio==2.4.0.dev20240729
[pip3] torchmetrics==0.10.0
[pip3] torchvision==0.12.0
[conda] numpy 1.22.3 pypi_0 pypi
[conda] numpy-base 1.24.3 py38h90707a3_0
[conda] onnx2torch 1.5.14 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] pytorch-lightning 1.7.7 pypi_0 pypi
[conda] torch 1.12.0.dev20220518 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240729 pypi_0 pypi
[conda] torchmetrics 0.10.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Critical |
2,511,397,223 | godot | Can't create AtlasTexture in shader | ### Tested versions
v4.4.dev1.mono.official [28a72fa43]
### System information
Godot v4.4.dev1.mono - Windows 10.0.17763 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 31.0.15.2849) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description
below
### Steps to reproduce
just make a shader like this:

the try make a AtlasTexture:

It can't be created.
### Minimal reproduction project (MRP)
none | bug,topic:editor | low | Minor |
2,511,447,117 | rust | doc alias set_current_dir as cd | as someone who has never called this function except from a shell, this was a bit difficult to find. | C-enhancement,T-libs-api | low | Minor |
2,511,481,900 | rustdesk | Ubuntu 20.04 Black Screen w/ Cursor Control | ### Bug Description
Hello,
We have two identically specced lab computers, and one of them functions flawlessly with RustDesk while the other shows a black screen in the client, even though cursor and keyboard control work perfectly fine and the host cursor is visible on the client. Verbose logging when running RustDesk in the terminal does not differ between the two hosts.
### How to Reproduce
Connect to one of our computers. I'd be happy to email you the id and code if needed to troubleshoot.
### Expected Behavior
Be able to see the screen when using RustDesk.
### Operating system(s) on local side and remote side
Host: Ubuntu, Clients: Mac, Windows, Ubuntu
### RustDesk Version(s) on local side and remote side
1.3.0 on both
### Screenshots
<img width="1335" alt="image" src="https://github.com/user-attachments/assets/e047f469-d745-440f-b6df-4df6de7cbfa8">
### Additional Context
_No response_ | bug | low | Critical |
2,511,517,165 | go | cmd/go: `go list -json -e` does not store location for invalid import path | ### Go version
go version go1.23.1 linux/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/home/ayke/.cache/go-build'
GOENV='/home/ayke/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/ayke/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/ayke'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go1.23.1'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go1.23.1/pkg/tool/linux_arm64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/ayke/.config/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3054841504=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I have the following invalid Go file
```go
package main
import _ "#"
func main() {
}
```
I ran `go list -json -e importpath.go` on it, to see the loader error.
### What did you see happen?
The JSON contains this part:
```json
"Error": {
"ImportStack": [],
"Pos": "",
"Err": "/home/ayke/tmp/importpath.go:3:8: invalid import path: #"
}
```
That's an error, of course, but the `Pos` field is not set even though `go list` clearly knows the source location (it's stored in the error message itself).
### What did you expect to see?
I expected an output like this:
```json
"Error": {
"ImportStack": [],
"Pos": "importpath.go:3:8",
"Err": "invalid import path: #"
}
```
That is, the position information should be in `Pos` and the path should be relative and not absolute. For example, when setting the import path to one that doesn't exist, the error looks like this:
```json
"Error": {
"ImportStack": [
"command-line-arguments"
],
"Pos": "importpath.go:3:8",
"Err": "no required module provides package foo.bar: go.mod file not found in current directory or any parent directory; see 'go help modules'"
}
``` | NeedsInvestigation | low | Critical |
2,511,522,437 | godot | Android Editor and project page unresponsive to touch but works with mouse | ### Tested versions
Reproducible and tested in : Godot 4.3+ android
### System information
Godot v4.4.dev1 - Android - Vulkan (Mobile) - integrated Mali-G57 - (8 Threads)
### Issue description
When opening and creating a new project, the touchscreen works perfectly fine with the editor for a couple of days; then it completely stops working and I need to use OTG with a mouse and keyboard. This doesn't happen on my computer.
### Steps to reproduce
From what I've seen: create a project, and when it happens, it happens... I've followed Coco Code's Godot tutorials.
### Minimal reproduction project (MRP)
N/A in all new projects created | platform:android,needs testing,topic:input | low | Minor |
2,511,560,879 | ui | [feat]: ability to create nextjs project with page/app router | ### Feature description
When creating an app using `npx shadcn@latest init`, please let us choose between the App Router and the Pages Router, because the App Router is still not production-ready for many use cases.
thanks a million
### Affected component/components
_No response_
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,511,572,092 | material-ui | Icons for well-known and widespread applications | ### Summary
I'm using Material UI for all of my panel icons. I wanted to create a feature called **Export to Excel**, but I did not find an icon representing Excel, or spreadsheets in general.
Please add icons for well-known apps. Or if that's not feasible, add icons for well-known app ideas at least.
1. Spreadsheet
2. Presentation
3. Document editor
4. PDF
5. ...
### Examples
_No response_
### Motivation
_No response_
**Search keywords**: excel | support: question,package: icons,enhancement | low | Minor |
2,511,589,738 | godot | 3D Camera jitter/blurry environment | ### Tested versions
- 4.3 C#
- 4.4 dev 1 C#
### System information
Windows 11 - v4.3, v4.4 dev 1
### Issue description
Hello, I had to halt my game project recently because I noticed a weird bug when I added a lot of grass in my game.
The environment (terrain, grass, houses) becomes a little bit blurry when moving horizontally, but looks perfect when moving vertically.
I tried EVERYTHING: camera inside the character, outside the character, 3D physics interpolation, no interpolation, high/low FPS, high/low tick rate. Nothing fixes this bug. At this point, honestly, I just suspect a bug in Godot / NavigationAgent3D.
You can see a video here (this is a sample project I started from scratch to reproduce it).
This is a simple scene without a lot of grass, but in a real environment it's just worse.
https://youtu.be/rHdB47K46ck
I'm using Godot 4.3; I tried Godot 4.4, same issue.
I can't believe this is normal behavior; the blurry effect when moving on the z axis is clearly visible in real time.
The scene:

Thanks.
### Steps to reproduce
- run the main scene
- move vertically, grass is OK
- move horizontally, grass is blurry/jittery
### Minimal reproduction project (MRP)
[movement-debug-4.4.zip](https://github.com/user-attachments/files/16917627/movement-debug-4.4.zip)
| bug,topic:rendering,topic:3d | low | Critical |
2,511,589,914 | rust | [BUG] `llvm-cov` warning `mismatched data` when double slash comment above `use` | # bug
`llvm-cov` `warning: 1 functions have mismatched data` caused by a comment line
## background
Found `llvm-cov` `warning: N functions have mismatched data`.
After debugging, I was very surprised to find that this appears to be caused entirely by a comment line.
## actual
`llvm-cov` `warning: 1 functions have mismatched data` caused by the comment line `// foo`
crate A
```rs
#![feature(str_from_raw_parts)]
// foo
use core::str::from_raw_parts;
/// # Safety
///
/// TODO
#[inline]
#[must_use]
pub const unsafe fn str_from_raw_parts<'a>(ptr: *const u8, len: usize) -> &'a str {
from_raw_parts(ptr, len)
}
```
crate B
```rs
use feature_str_from_raw_parts_util::str_from_raw_parts;
#[test]
fn should_ok() {
let x = unsafe { str_from_raw_parts("foobar".as_ptr(), 3) };
let _ = x;
}
```
```sh
$ yarn cleanup:everything && yarn test:coverage
...
+ /path/to/llvm-cov report ...
warning: 1 functions have mismatched data
...
```
## expected
comment line `// foo` should not lead to `llvm-cov` `warning: 1 functions have mismatched data`
crate A remove comment line
```rs
#![feature(str_from_raw_parts)]
use core::str::from_raw_parts;
/// # Safety
///
/// TODO
#[inline]
#[must_use]
pub const unsafe fn str_from_raw_parts<'a>(ptr: *const u8, len: usize) -> &'a str {
from_raw_parts(ptr, len)
}
```
```sh
$ yarn cleanup:everything && yarn test:coverage
...
(llvm-cov no warning)
...
```
## version
```log
rustc 1.82.0-nightly (1f12b9b0f 2024-08-27)
binary: rustc
commit-hash: 1f12b9b0fdbe735968ac002792a720f0ba4faca6
commit-date: 2024-08-27
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
``` | A-codegen,T-compiler,C-bug,requires-nightly,A-code-coverage,S-has-mcve | low | Critical |
2,511,602,896 | godot | Blender file NLA track import glitches, no blend shapes | ### Tested versions
- Tested in Godot v4.3.stable with Blender 4.2.1
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA RTX A2000 (NVIDIA; 31.0.15.5186) - Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz (8 Threads)
### Issue description
Created a mesh, rigged it with Rigify, created an animation, created blend shapes/shape keys, and added the blend file to a Godot project.
- The animation imports, shows up, and plays, but the mesh does not deform like it does in Blender.
- Blend shapes are not accessible.
- 
Did the same but exported to glb file with NLA tracks, added to godot project:
- Same problem with the animation: the mesh does not deform like it does in Blender.
- Blend shapes are accessible.
- 
Godot:

https://streamable.com/iao59w
Blender:

https://streamable.com/5rcb9i
### Steps to reproduce
Create a mesh in Blender, rig it with Rigify, create an animation, create blend shapes, and import the blend file into a Godot project.
### Minimal reproduction project (MRP)
[murger-43.zip](https://github.com/user-attachments/files/16917828/murger-43.zip)
| bug,topic:import,topic:3d | low | Major |
2,511,621,565 | transformers | Support Unified Multimodal Model | ### Feature request
Hi, I am wondering that can this repository supports the unified multimodal model like Show-o? [https://github.com/showlab/Show-o](https://github.com/showlab/Show-o)
### Motivation
The unified multimodal model may be a trend with multimodality
### Your contribution
trying for integration | New model,Feature request,Multimodal | low | Minor |
2,511,728,408 | neovim | option error message should mention the option name, etc | ### Problem
Setting an invalid filetype via Lua produces a nondescript E474 saying:
```
E5108: Error executing lua [string ":lua"]:1: E474: Invalid argument
stack traceback:
[C]: in function '__newindex'
[string ":lua"]:1: in main chunk
```
### Steps to reproduce
`nvim --clean +'lua vim.bo.filetype = "foo bar"'`
### Expected behavior
The error message should at least mention that it's the `filetype` option which was being set. Ideally it would also mention something about what was invalid, but that seems like asking for more than what's needed.
Compare what you get when setting this option from vimscript:
```
⊙ nvim --clean +'set filetype=foo\ bar'
```
which produces:
```
E474: Invalid argument: filetype=foo\ bar
```
I gave a quick look at modifying `optionstr.c` to:
```diff
diff --git a/src/nvim/optionstr.c b/src/nvim/optionstr.c
index 8e853b6ee..4917dd473 100644
--- a/src/nvim/optionstr.c
+++ b/src/nvim/optionstr.c
@@ -1308,7 +1308,8 @@ const char *did_set_filetype_or_syntax(optset_T *args)
char **varp = (char **)args->os_varp;
if (!valid_filetype(*varp)) {
- return e_invarg;
+ semsg(_(e_invarg2), *varp);
+ return NULL;
}
args->os_value_changed = strcmp(args->os_oldval.string.data, *varp) != 0;
```
but this isn't enough, it seems (partially because I'm not sure where in the struct the name of the option is, considering that the callback is called for multiple options).
But I'm also filing this because it seems like something *generic*: perhaps all of the `return e_invarg`s in that file should include the option name. So I'm filing this to hear feedback on that, and to hear whether this already exists somewhere I didn't see in my quick look.
### Neovim version (nvim -v)
v0.10.1, reproducible on main
### Vim (not Nvim) behaves the same?
no / n/a
### Operating system/version
macOS 14.6.1
### Terminal name/version
kitty 0.36.1
### $TERM environment variable
xterm-kitty
### Installation
homebrew | enhancement,ux,options,messages | low | Critical |
2,511,760,523 | PowerToys | Workspaces Capture Won't Pick Up All Apps | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update, Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Open up the following apps:
Visual Studio Code
DevToys
Terminal
Git GUI
With all windows displayed (i.e. not minimized), launch Workspaces, then Create Workspace. The Capture dialog is displayed, but Git GUI is not "captured".
### ✔️ Expected Behavior
All open Windows are captured.
### ❌ Actual Behavior
Git GUI is not captured. The other three are.
### Other Software
Git Gui | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,511,763,701 | godot | Moving file referenced in resources generates .depren files | ### Tested versions
Reproducible in:
- 4.2.1 (although it complains when moving the file, still makes the same depren file)
- 4.3-stable
- 4.4-dev1
The message from 4.2.1:

4.3 & 4.4 are silent.
### System information
Godot v4.4.dev1 - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Mon Aug 19 09:51:26 UTC 2024 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Laptop GPU - AMD Ryzen 9 5900HX with Radeon Graphics (16 Threads)
### Issue description
When moving files around that are part of a resource, Godot creates an extra `.depren` file.
Deleting this file doesn't seem to do or break anything. If it is part of some internal process for moving files, I think that process didn't clean up after itself.
It seems that [this bug](https://github.com/godotengine/godot/issues/60412) is back?
### Steps to reproduce
- Create a New Resource, select Sprites Frames
- Add frames using `icon.svg`
- Save the resource as `icon.res`
- Move `icon.svg` to a different directory
- Using the file explorer to check the root directory of the project, an extra file has appeared: `icon.res.depren`
Deleting this file seems to have no effect on the project.
### Minimal reproduction project (MRP)
Open this project in Godot: [depren.zip](https://github.com/user-attachments/files/16918684/depren.zip)
In the editor, select the `icon.svg` and drag it into the `asset` directory.
Open your file explorer and the root of the project now has an extra file: `icon.res.depren` | bug,topic:editor,needs testing | low | Critical |
2,511,806,595 | flutter | Popped route still exists in remove predicate when using `navigator.pushAndRemoveUntil` after awaiting `navigator.push` | ### Steps to reproduce
1. Run the sample code
2. Click the middle button
3. Check console output
### Expected results
flutter: route: page_two
### Actual results
flutter: route: page_three
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
void _onPressed(BuildContext context) async {
final navigatorState = Navigator.of(context);
navigatorState.push(
MaterialPageRoute(
builder: (_) => const PageOne(),
settings: const RouteSettings(name: 'page_one'),
),
);
navigatorState.push(
MaterialPageRoute(
builder: (_) => const PageTwo(),
settings: const RouteSettings(name: 'page_two'),
),
);
navigatorState.push(
MaterialPageRoute(
builder: (_) => const PageThree(),
settings: const RouteSettings(name: 'page_three'),
),
);
//Current route is [page_one, page_two, page_three]
await Future.delayed(const Duration(seconds: 1));
navigatorState.pop();
//Current route is [page_one, page_two]
navigatorState.pushAndRemoveUntil(
MaterialPageRoute(
builder: (_) => const PageOne(),
settings: const RouteSettings(name: 'page_one'),
),
(route) {
// Expected output is "route: page_two", but actual output is "route: page_three"
print('route: ${route.settings.name}');
return true;
},
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
),
body: Builder(builder: (context) {
return Center(
child: TextButton(
onPressed: () => _onPressed(context),
child: const Text("Click Me", style: TextStyle(fontSize: 30),),
),
);
}),
);
}
}
class PageOne extends StatelessWidget {
const PageOne({super.key});
@override
Widget build(BuildContext context) {
return const ColoredBox(color: Colors.blue);
}
}
class PageTwo extends StatelessWidget {
const PageTwo({super.key});
@override
Widget build(BuildContext context) {
return const ColoredBox(color: Colors.red);
}
}
class PageThree extends StatelessWidget {
const PageThree({super.key});
@override
Widget build(BuildContext context) {
return const ColoredBox(color: Colors.yellow);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.6.1 23G93 darwin-arm64, locale zh-Hans-CN)
• Flutter version 3.24.1 on channel stable at /Users/tangkailiang/fvm/versions/3.24.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (3 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/tangkailiang/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.20629
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
[✓] Connected device (5 available)
• iPhone (mobile) • 00008120-001451993C90A01E • ios •
iOS 17.5.1 21F90
• iPhone 15 (mobile) • 43BA2B9D-D219-4321-AD2D-EBAB3CC7B95E • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 •
macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin •
macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript •
Google Chrome 128.0.6613.120
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: routes,P2,team-framework,triaged-framework | low | Major |
2,511,843,365 | ui | [bug]: Dialog and Sheet have no background | ### Describe the bug
Dialog and Sheet have no background
### Affected component/components
Sheet, Dialog
### How to reproduce
1. Create new project `npx shadcn@latest init dialog`
2. Replace `page.tsx` with the following code
```tsx
import { Dialog, DialogContent, DialogDescription, DialogHeader, DialogTitle, DialogTrigger, } from "@/components/ui/dialog"
export default function Home() {
return (
<Dialog>
<DialogTrigger asChild>
<div>Open</div>
</DialogTrigger>
<DialogContent className="sm:max-w-md">
<DialogHeader>
<DialogTitle>Test title</DialogTitle>
<DialogDescription>
Test description
</DialogDescription>
</DialogHeader>
<div>Test content</div>
</DialogContent>
</Dialog>
)
}
```
3. Run application and open the dialog
4. See the following result:

Temporary fix:
I modified the background class in the dialog component from `bg-background` to `bg-[var(--background)]`, and this corrected the issue. It seems like the default background variable is not being applied correctly in some cases.
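For context, `bg-background` only resolves if the theme variable is defined in the global stylesheet and mapped in the Tailwind config — a minimal sketch of the expected wiring (the HSL values here are illustrative, not necessarily the real defaults generated by `shadcn init`):

```css
/* globals.css — illustrative excerpt */
:root {
  --background: 0 0% 100%;
}

/* tailwind.config must then map the variable to a color, e.g.
   theme.extend.colors.background = "hsl(var(--background))" —
   if that mapping is missing, `bg-background` produces no CSS rule,
   which would explain the transparent Dialog/Sheet. */
```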
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Chrome, Windows
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,511,852,909 | rust | Should public_private_dependencies consider unreachable visibility? | I tried this code:
```rust
mod foo {
pub fn example() -> regex::Regex {
regex::Regex::new("test").unwrap()
}
}
pub fn x() {
foo::example();
}
```
with cargo's [public-dependency](https://doc.rust-lang.org/cargo/reference/unstable.html#public-dependency) feature enabled, and `regex` is a private dependency.
I expected to see this happen: No warning
Instead, this happened: Generated a warning about the dependency in a public interface, but there is no exposure of the dependency in the public interface.
```
warning: type `regex::Regex` from private dependency 'regex' in public interface
--> src/lib.rs:2:5
|
2 | pub fn example() -> regex::Regex {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(exported_private_dependencies)]` on by default
```
In https://github.com/rust-lang/rust/issues/44663#issuecomment-1552721144, bjorn3 mentioned:
> We consider any `pub` item to be public, even if not actually reachable. This is also why for example `mod sealed { pub trait Sealed {} } pub trait MyTrait: Sealed {}` is allowed despite `Sealed` not being reachable.
However, I'm not sure I completely agree with that reasoning. In the example above, there is no exposure of the private dependency in any reachable types. This would make more sense if [`unreachable-pub`](https://doc.rust-lang.org/nightly/rustc/lints/listing/allowed-by-default.html#unreachable-pub) were on by default, but it's not. Although I can sympathize that unreachable `pub` is probably bad form, it is very common in Rust code, so these false positives would be a significant hurdle for `exported_private_dependencies`.
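For comparison, narrowing the declared visibility to `pub(crate)` (so it matches the item's actual reachability) should leave nothing for the lint to flag, since the item is no longer part of any public interface. A minimal sketch of that shape — with the `regex::Regex` return type swapped for a plain `String` so the example is self-contained:

```rust
mod foo {
    // pub(crate) matches the item's actual reachability, so a private
    // dependency in this signature would not be "in a public interface".
    pub(crate) fn example() -> String {
        String::from("test")
    }
}

pub fn x() -> String {
    foo::example()
}

fn main() {
    assert_eq!(x(), "test");
}
```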
### Meta
```
rustc 1.83.0-nightly (9c01301c5 2024-09-05)
``` | T-lang,T-compiler,C-bug,F-public_private_dependencies,L-public_private_dependencies | low | Minor |
2,511,880,698 | flutter | `DecoratedBox`: clip behavior + other improvements | **Container** is a somewhat "heavy" widget (see https://github.com/flutter/flutter/issues/147431), so it'd be great if **DecoratedBox** could fully decorate a box by itself (as its name would imply).
- [**PhysicalShape**](https://main-api.flutter.dev/flutter/widgets/PhysicalShape-class.html) allows setting a `clipBehavior`; **DecoratedBox** should as well.
- **Decoration** objects have intrinsic `padding`; maybe there should be an option to apply this padding.
- And perhaps the **BoxShadow** API has some room for improvement, though at this point I'm not sure if there's a clear path forward. | c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,511,944,807 | rust | diagnostic::on_unimplemented fails to trigger | ### Code
```Rust
#[diagnostic::on_unimplemented(
message = "message",
)]
pub trait ProviderLt {}
pub trait ProviderExt {
fn request<R>(&self) {
todo!()
}
}
impl<T: ?Sized + ProviderLt> ProviderExt for T {}
struct A<'a>(&'a ());
struct B; // works
fn main() {
A(&()).request();
B.request(); // works
}
```
### Current output
```Shell
error[E0599]: the method `request` exists for struct `A<'_>`, but its trait bounds were not satisfied
--> src/main.rs:19:12
|
14 | struct A<'a>(&'a ());
| ------------ method `request` not found for this struct because it doesn't satisfy `A<'_>: ProviderExt` or `A<'_>: ProviderLt`
...
19 | A(&()).request();
| ^^^^^^^ method cannot be called on `A<'_>` due to unsatisfied trait bounds
|
note: trait bound `A<'_>: ProviderLt` was not satisfied
--> src/main.rs:12:18
|
12 | impl<T: ?Sized + ProviderLt> ProviderExt for T {}
| ^^^^^^^^^^ ----------- -
| |
| unsatisfied trait bound introduced here
note: the trait `ProviderLt` must be implemented
--> src/main.rs:4:1
|
4 | pub trait ProviderLt {}
| ^^^^^^^^^^^^^^^^^^^^
= help: items from traits can only be used if the trait is implemented and in scope
note: `ProviderExt` defines an item `request`, perhaps you need to implement it
--> src/main.rs:6:1
|
6 | pub trait ProviderExt {
| ^^^^^^^^^^^^^^^^^^^^^
error[E0599]: message
--> src/main.rs:21:7
|
16 | struct B; // works
| -------- method `request` not found for this struct because it doesn't satisfy `B: ProviderExt` or `B: ProviderLt`
...
21 | B.request(); // works
| ^^^^^^^ method cannot be called on `B` due to unsatisfied trait bounds
|
note: trait bound `B: ProviderLt` was not satisfied
--> src/main.rs:12:18
|
12 | impl<T: ?Sized + ProviderLt> ProviderExt for T {}
| ^^^^^^^^^^ ----------- -
| |
| unsatisfied trait bound introduced here
note: the trait `ProviderLt` must be implemented
--> src/main.rs:4:1
|
4 | pub trait ProviderLt {}
| ^^^^^^^^^^^^^^^^^^^^
= help: items from traits can only be used if the trait is implemented and in scope
note: `ProviderExt` defines an item `request`, perhaps you need to implement it
--> src/main.rs:6:1
|
6 | pub trait ProviderExt {
| ^^^^^^^^^^^^^^^^^^^^^
For more information about this error, try `rustc --explain E0599`.
```
### Desired output
```Shell
The first error should also use the "message" message, like the second one does.
```
### Rationale and extra context
Playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=f9313422f1b0da95507832b37b234c30
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
### Anything else?
_No response_ | A-diagnostics,T-compiler,D-diagnostic-infra | low | Critical |
2,511,950,468 | godot | Testing XR game without a headset connected produces two errors and one warning | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6081) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads)
### Issue description
If you launch an XR project from the editor, but don't have a headset connected, the console will show two errors and a warning. The warning has been there for a long time and is expected, but the two errors are new, internal (native), and unhelpful.

### Steps to reproduce
Launch any XR project from the editor, without a headset connected.
### Minimal reproduction project (MRP)
N/A | bug,topic:xr | low | Critical |
2,511,979,519 | PowerToys | Add a "reset shortcuts" button in settings. | ### Description of the new feature / enhancement
Resets shortcuts to default.
### Scenario when this would be used?
When you changed the shortcuts too much 😩
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response | low | Minor |
2,511,997,331 | ollama | Is everything fine with `phi3` model? | ### What is the issue?
I downloaded the model 3 months ago and it worked fine, but now it doesn't work at all.
My query is `generate 20 non-existing random English-sounding nouns, less than 6 sylables`. Previously it just generated words without descriptions, as expected; now it adds them.
When I substitute "English" with "Polish", it goes into an infinite loop, and when I put "German", it starts to spill out UUIDs.
Example of Polish output:
```
1. Krzeszinski
2. Szmaragdowa
3. Złotyka
4. Pomocnicza
5. Wesołeńca
6. Jędrzejki
7. Kartwinka
8. Chrobotnica
9. Skrępijny
1 end. 20 nouns generated successfully! Now, let's shuffle them:
Shuffled List (Randomized):
4. Pomocnicza
6. Jędrzejki
7. Kartwinka
3. Złotyka
9. Skrępijny
1 end. 20 nouns generated successfully! Now, let's shuffle them:
Shuffled List (Randomized):
(and it repeats forever)
```
Example of German output:
```
1. Torgelichtweisenheit
... (8 another words correctly generated)
10. Sonnenfinsternistränenqualm
1de25af6-bb4a-3c17-bf8a-9d6e989e3ecc_GermanSoundingNouns=nonExistingWordsList=[Torgelichtweisenheit,Fuchsbärennachtfrost,Himmelspechvogelzunge,...,Sonnenfinsternistränenqualm1de25af6-bb4a-3c17-bf8a-9d6e989e3ecc_GermanSoundingNouns=nonExistingWordsList=[Torgelichtweisenheit,Fuchsbärennachtfrost,Himmelspechvogelzunge,...,de25af6-bb4a-3c17-bf8a-9d6e989e3ecc]
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.3.9
previous ollama version was 0.2.3
logs: [ollama.log](https://github.com/user-attachments/files/16919885/ollama.log) | bug | low | Major |
2,512,011,806 | animate.css | HowTo fadeIn animation | ### Describe The Bug
I have an element that should start with 0 opacity:
```css
#fldStatus {
    opacity: 0.0;
    animation: fadeIn;
    animation-delay: 3000ms;
    animation-duration: 2000ms;
    width: 100%;
    display: inline-grid;
    padding: 2px 0px 0px 0px;
    margin: 0px;
}
```
This works fine, but after the animation completes, the element reverts back to `opacity: 0`.
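For anyone hitting the same thing: by default, a CSS animation snaps back to the element's pre-animation styles once it finishes. If the goal is to keep the final keyframe's state (full opacity), `animation-fill-mode: forwards` should do it — a sketch based on the rules above:

```css
#fldStatus {
    opacity: 0;
    animation: fadeIn;
    animation-delay: 3000ms;
    animation-duration: 2000ms;
    /* keep the styles from the last keyframe (opacity: 1)
       after the animation ends, instead of reverting */
    animation-fill-mode: forwards;
}
```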
### Steps To Reproduce
_No response_
### Expected Behavior
Please explain the fadeIn animations for dopes like me :)
### Screenshots
_No response_
### Desktop
_No response_
### Smartphone
_No response_
### Additional Context
_No response_ | bug | low | Critical |
2,512,013,583 | rust | Apple arm64e targets fail to link with newer Xcode 15 | See tracking issue for these targets in https://github.com/rust-lang/rust/issues/73628.
Building a project using the `arm64e-apple-ios` target fails to link when using Xcode 15.4. Using Xcode 14.3.1 works.
This might also be the case for `arm64e-apple-darwin`, but I can't test that due to https://github.com/rust-lang/cc-rs/issues/1205.
```console
$ cargo new foo && cd foo && cargo +nightly build --target=arm64e-apple-ios -Zbuild-std
// Or
$ ./x test --target=arm64e-apple-ios
```
The exact error is:
```
= note: ld: warning: search path '$HOME/.rustup/toolchains/nightly-2024-01-15-aarch64-apple-darwin/lib/rustlib/arm64e-apple-ios/lib' not found
ld: warning: search path '$HOME/.rustup/toolchains/nightly-2024-01-15-aarch64-apple-darwin/lib/rustlib/arm64e-apple-ios/lib' not found
ld: warning: ignoring file '/private/var/folders/0j/tk3sfgz540712zgqd1hrry0m0000gn/T/rustcMbh3OI/symbols.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.26sqj2knsb351po8.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.3mx9ar89kmsx5j1l.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.45hdsivk0rjaalqm.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.44zwmgr66v8ihk0u.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libpanic_unwind-f79c716b594cd87d.rlib[3](panic_unwind-f79c716b594cd87d.panic_unwind.b0d5fa77f79efb86-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libpanic_unwind-f79c716b594cd87d.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.5dkwtrczm5qjphu6.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.42cln6of8os0f9gy.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.3d02ymmiyeu3vpo7.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libaddr2line-fb1df4cdd978991d.rlib[3](addr2line-fb1df4cdd978991d.addr2line.b32b3d1578f40f68-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libaddr2line-fb1df4cdd978991d.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_demangle-070595bb373ffce1.rlib[4](rustc_demangle-070595bb373ffce1.rustc_demangle.319f1cf397a9ecca-cgu.1.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_demangle-070595bb373ffce1.rlib[3](rustc_demangle-070595bb373ffce1.rustc_demangle.319f1cf397a9ecca-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd_detect-72eccc55aeb6dc6e.rlib[3](std_detect-72eccc55aeb6dc6e.std_detect.b917a1edde698b77-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/foo-fd00ad1dff52af6b.529zhzjdbavai3fn.rcgu.o': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libadler-2503ee2f726d439b.rlib[3](adler-2503ee2f726d439b.adler.9aa407738f930ee2-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libminiz_oxide-d07b686f8d54a430.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libmemchr-075960409c25bbb9.rlib[3](memchr-075960409c25bbb9.memchr.adf4ce8aa7930ef2-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_demangle-070595bb373ffce1.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_std_workspace_alloc-d4a4490e1effb483.rlib[3](rustc_std_workspace_alloc-d4a4490e1effb483.rustc_std_workspace_alloc.2e3ab25734ed68d6-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libadler-2503ee2f726d439b.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libhashbrown-e6b966447a347358.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libminiz_oxide-d07b686f8d54a430.rlib[3](miniz_oxide-d07b686f8d54a430.miniz_oxide.c3e4f4e376ceb04b-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_std_workspace_alloc-d4a4490e1effb483.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libunwind-8017996f774c6593.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libunwind-8017996f774c6593.rlib[3](unwind-8017996f774c6593.unwind.e4507e56c8db69a9-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libhashbrown-e6b966447a347358.rlib[3](hashbrown-e6b966447a347358.hashbrown.2979a5dc0df80385-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libmemchr-075960409c25bbb9.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libgimli-9049beec931fd184.rlib[7](gimli-9049beec931fd184.gimli.291f80e1c06160d3-cgu.4.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libgimli-9049beec931fd184.rlib[6](gimli-9049beec931fd184.gimli.291f80e1c06160d3-cgu.3.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd_detect-72eccc55aeb6dc6e.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_std_workspace_core-57c9b9461bb0a02d.rlib[3](rustc_std_workspace_core-57c9b9461bb0a02d.rustc_std_workspace_core.2cc70b459bc5d55b-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcfg_if-ab04255e62dd4d18.rlib[3](cfg_if-ab04255e62dd4d18.cfg_if.9512f2fb5bcdd678-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libobject-8770e67c8539abe4.rlib[5](object-8770e67c8539abe4.object.58425892dede8931-cgu.2.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liblibc-e3f8a8bc867e7d2f.rlib[3](libc-e3f8a8bc867e7d2f.libc.565c562bf8166800-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liblibc-e3f8a8bc867e7d2f.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libgimli-9049beec931fd184.rlib[5](gimli-9049beec931fd184.gimli.291f80e1c06160d3-cgu.2.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcompiler_builtins-ce1cb3c3fce0a16d.rlib[5](compiler_builtins-ce1cb3c3fce0a16d.compiler_builtins.6948197a790d2fa7-cgu.2.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcfg_if-ab04255e62dd4d18.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libgimli-9049beec931fd184.rlib[4](gimli-9049beec931fd184.gimli.291f80e1c06160d3-cgu.1.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libgimli-9049beec931fd184.rlib[3](gimli-9049beec931fd184.gimli.291f80e1c06160d3-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liballoc-85c3b697c47ff844.rlib[6](alloc-85c3b697c47ff844.alloc.2bb4af1538dd142e-cgu.3.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[18](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.15.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libobject-8770e67c8539abe4.rlib[4](object-8770e67c8539abe4.object.58425892dede8931-cgu.1.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[17](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.14.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcompiler_builtins-ce1cb3c3fce0a16d.rlib[4](compiler_builtins-ce1cb3c3fce0a16d.compiler_builtins.6948197a790d2fa7-cgu.1.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcompiler_builtins-ce1cb3c3fce0a16d.rlib[3](compiler_builtins-ce1cb3c3fce0a16d.compiler_builtins.6948197a790d2fa7-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libobject-8770e67c8539abe4.rlib[3](object-8770e67c8539abe4.object.58425892dede8931-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[16](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.13.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcompiler_builtins-ce1cb3c3fce0a16d.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libobject-8770e67c8539abe4.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libgimli-9049beec931fd184.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/librustc_std_workspace_core-57c9b9461bb0a02d.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[15](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.12.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[14](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.11.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[13](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.10.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[12](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.09.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[11](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.08.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[10](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.07.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[9](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.06.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[8](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.05.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[7](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.04.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[6](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.03.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[5](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.02.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[4](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.01.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[3](std-e730827ac25e8eb8.std.390e242b97fa85e2-cgu.00.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libstd-e730827ac25e8eb8.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liballoc-85c3b697c47ff844.rlib[5](alloc-85c3b697c47ff844.alloc.2bb4af1538dd142e-cgu.2.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liballoc-85c3b697c47ff844.rlib[4](alloc-85c3b697c47ff844.alloc.2bb4af1538dd142e-cgu.1.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liballoc-85c3b697c47ff844.rlib[3](alloc-85c3b697c47ff844.alloc.2bb4af1538dd142e-cgu.0.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/liballoc-85c3b697c47ff844.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[18](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.15.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[17](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.14.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[16](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.13.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[15](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.12.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[14](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.11.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[13](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.10.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[12](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.09.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[11](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.08.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[10](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.07.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[9](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.06.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[8](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.05.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[7](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.04.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[6](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.03.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[5](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.02.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[4](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.01.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[3](core-8349edf93c8d5e6a.core.b68d8c8fce132d46-cgu.00.rcgu.o)': found architecture 'arm64e.old', required architecture 'arm64e'
ld: warning: ignoring file '$PROJECT/target/arm64e-apple-ios/debug/deps/libcore-8349edf93c8d5e6a.rlib[2](lib.rmeta)': found architecture 'arm64e.old', required architecture 'arm64e'
Undefined symbols for architecture arm64e:
"_main", referenced from:
<initial-undefines>
ld: symbol(s) not found for architecture arm64e
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
I feel fairly confident that we're passing the right arguments to the linker nowadays (this happens even with https://github.com/rust-lang/rust/pull/129369), so I suspect the object files we generate are the problem somehow, but I may be mistaken?
### Meta
`rustc +nightly --version --verbose`:
```
rustc 1.83.0-nightly (26b5599e4 2024-09-06)
binary: rustc
commit-hash: 26b5599e4d6ed2b45152c60493c1788c0a27533d
commit-date: 2024-09-06
host: aarch64-apple-darwin
release: 1.83.0-nightly
LLVM version: 19.1.0
```
Happens as far back as `+nightly-2023-11-22`, the day after [the PR introducing these](https://github.com/rust-lang/rust/pull/115526) merged, so it's definitely due to changes in Xcode, not because of a regression in `rustc`.
@rustbot label O-ios O-macos O-AArch64
CC target maintainer @arttet. | O-macos,O-ios,T-compiler,C-bug,O-AArch64 | low | Critical |
2,512,033,900 | terminal | Trying new `wt x-save foo` for snippets with Canary but `wt` is pointing towards wt-stable | ### Windows Terminal version
1.23.2501.0
### Windows build number
10.0.22635.0
### Other Software
_No response_
### Steps to reproduce
- Open up WT Canary
- Try to save a snippet with `wt x-save git status`
- You get an error because `wt` is pointing to stable version which does not have this feature yet
- I ran `wt --version` and saw that it was pointing to an older version (stable)
I eventually figured out I had to set my `wt` alias to Canary by stumbling upon this comment: https://github.com/microsoft/terminal/issues/17463#issuecomment-2183486775
Here is what my alias settings were without me ever touching them. I had WT Stable installed obviously with windows, then installed WT Preview at one point and then recently installed WT Canary.

### Expected Behavior
I am not sure. But at minimum this should be documented in the release notes and the original PR (since I looked there seeing if there was anything similar).
But ideally whatever terminal version you are in, I would expect / hope that `wt` would be "smart enough" to know to point to that specific terminal version (stable vs preview vs canary).
### Actual Behavior
You get an error and unless you know about the wt alias needing to be set you will think it is just a bug with the new snippet x-save feature (or at least I did for a long while until I figured it out). | Issue-Bug,Product-Terminal,Area-Remoting | low | Critical |
2,512,053,013 | godot | 2D Jiggle physics working in wrong direction | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
https://github.com/user-attachments/assets/2dd2c6f7-9804-47b4-90c1-a9b18903a5b1
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 Laptop GPU (NVIDIA; 32.0.15.6081) - 11th Gen Intel(R) Core(TM) i5-11300H @ 3.10GHz (8 Threads)
### Issue description
Jiggle simulation for Skeleton2D PhysicalBones seems to work in the opposite direction: instead of jiggly bones dragging behind, they fall forward in the direction of motion for an unknown reason, which renders jiggle physics practically useless in their current state. It's not immediately noticeable if you use the effect in a very subtle manner, but the moment anything starts moving fast it's immediately clear that the behavior is broken. Rotational drag for PhysicalBones seems to work properly, but when you move the object containing jiggle bones, those bones fly in the direction of movement instead of being dragged behind.
### Steps to reproduce
- Create a 2d scene
- Add a Skeleton2D
- add a few bones and parent them together in a hierarchy
- add a few physical bones and assign them to the previously created Skeleton2D bones parenting them together
- in the Skeleton2D, override the rest pose, then add a new SkeletonModificationStack2D to the Skeleton2D node and enable it
- add 3 modifications to the SkeletonModificationStack2D of "SkeletonModification2DJiggle" type and configure them in a way that will make their jiggling clearly visible and stable as you try to move the bones around
- you should notice that moving the parent Skeleton2D around results in a "reverse pull", which is incorrect: instead of being dragged behind, the jiggle bones move ahead of the object. Rotating the Skeleton2D node, however, gives the expected, correct result, where the jiggle bones actually appear to be dragged behind instead of moving in the opposite direction as they do when dragged around the scene
### Minimal reproduction project (MRP)
[2d-jiggle-error.zip](https://github.com/user-attachments/files/16919836/2d-jiggle-error.zip)
| bug,topic:physics | low | Critical |
2,512,053,076 | pytorch | Bug with "make latexpdf" | ### 📚 The doc issue
# Bug Description and Console Output
The following shell output shows the bug encountered when running "make latexpdf".
(PyTorchV2-docs) ➜ docs git:(main) pwd
/home/bulky/PyTorchContributing/pytorch/docs
(PyTorchV2-docs) ➜ docs git:(main) ls
build cpp libtorch.rst make.bat Makefile README.md requirements.txt source src
(PyTorchV2-docs) ➜ docs git:(main) make latexpdf
Traceback (most recent call last):
File "/home/bulky/PyTorchContributing/pytorch/docs/source/scripts/exportdb/generate_example_rst.py", line 190, in <module>
generate_rst()
File "/home/bulky/PyTorchContributing/pytorch/docs/source/scripts/exportdb/generate_example_rst.py", line 176, in generate_rst
doc_contents = generate_example_rst(example_case)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bulky/PyTorchContributing/pytorch/docs/source/scripts/exportdb/generate_example_rst.py", line 44, in generate_example_rst
if example_case.example_kwargs:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ExportCase' object has no attribute 'example_kwargs'. Did you mean: 'example_inputs'?
make: *** [Makefile:28: exportdb] Error 1
# Steps to Reproduce:
1. conda create --name PyTorchV2-docs
2. conda activate PyTorchV2-docs
3. conda install pip
4. conda install main::numpy
5. pip install -r requirements.txt
6. conda install pytorch
7. make latexpdf
### Suggest a potential alternative/fix
This issue is related to a documentation build error, which puts it in a grey area between a bug and documentation issue. First, it would be helpful to fix the build error in the documentation. In addition, the following minor changes would be helpful:
1. The documentation build currently requires numpy 1.x. Since most people likely have numpy 2.x installed, a short snippet suggesting how to work around this issue might be helpful for some users. For example, one could add the following snippet to the documentation build instructions:
Building the documentation requires numpy 1.x. For this reason, it is advised to create a separate anaconda3 environment for the documentation build process. For example, one could use the following process if the user is using anaconda3 as advised in the PyTorch developers documentation:
1. conda create --name PyTorchV2-docs
2. conda activate PyTorchV2-docs
3. conda install pip
4. conda install main::numpy
5. pip install -r requirements.txt
6. conda install pytorch
7. make latexpdf
Please note that step 4 installs numpy 1.26.4 and that step 6 installs pytorch 2.3.0 for a CPU only. Since this environment is only for the purpose of building documentation, installing pytorch with CUDA, ROCM, or other device code is not necessary.
2. If the bug in this issue description is not a bug, it would be helpful to know how to configure one's system to ensure the documentation builds properly.
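A minimal defensive sketch of the failing check in `generate_example_rst.py`. Note this is only an illustration: the stand-in `ExportCase` class below is hypothetical, modeled on the traceback, and does not reflect the real class definition in `torch/_export/db`.

```python
# Hypothetical stand-in for the ExportCase object from the traceback,
# which exposes example_inputs but (in this environment) no example_kwargs.
class ExportCase:
    def __init__(self, example_inputs):
        self.example_inputs = example_inputs

def generate_example_rst(example_case):
    # Use getattr with a default so ExportCase definitions that lack
    # example_kwargs do not raise AttributeError during the docs build.
    example_kwargs = getattr(example_case, "example_kwargs", None)
    if example_kwargs:
        return "has kwargs"
    return "no kwargs"

print(generate_example_rst(ExportCase(example_inputs=(1, 2))))  # no kwargs
```

This only papers over the version mismatch between the docs scripts and the installed `pytorch` package; aligning the two versions is the real fix.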
cc @svekars @brycebortree @tstatler | needs reproduction,module: docs,triaged | low | Critical |
2,512,055,801 | PowerToys | Modifier Keys messed up by MWB | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update, WinGet
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Mouse Without Borders process is running
hold any modifier key (Shift, Control, Alt, Windows)
Problems are fixed by either disabling MWB or killing the subprocess.
### ✔️ Expected Behavior
Modifier key will stay enabled
Ex. Open textbox, hold Shift+A, expect a repeating line of Capital A's
Ex. drag a window and hold shift, fancy zones appear to lock window
### ❌ Actual Behavior
Modifier keys stutter or disable.
Ex. Open a textbox and hold Shift+A, expecting a repeating line of capital A's: only the first few A's are capital, the rest are lowercase (Shift gets disabled).
Ex. Drag a window and hold Shift so FancyZones appears to lock the window: FancyZones stutters as if the Shift key is being released.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,512,090,648 | ollama | [Feature request] compatibility with vm balloon ram | Hi, it looks like ollama is not compatibel with ballon ram inside of an VM, i wanted to run ollama inside of an balloon ram, but i realized that when i have balloon enabled ollama thinks that there is just as example 5GB Ram available out of the 15GB it could get, because they are not provisoned at the start time when ollama checks how mutch ram is available.
i think it would be aweseome to have an envionment variable to explicitly specify how mutch ram its allowed to take regardless how many there is at the time of starting. | feature request,linux,windows | low | Minor |
2,512,101,909 | TypeScript | Overload order affects assignability of abstract constructors overloaded with regular constructors in interfaces | ### 🔎 Search Terms
abstract constructors, constructor overloading, overload order, intersection, type aliases, interfaces, assignability
### 🕗 Version & Regression Information
- This seems like a bug
- This is the behavior in every version I tried since abstract constructors were added (4.2)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?noUncheckedIndexedAccess=true&ts=5.7.0-dev.20240904#code/C4TwDgpgBAwgjFAvFAhgIwM7AE4oMbBQB2EA7lABQoBcUcAlEgHx0BQoksATEsWZTShdGiFl1asAlkWARsAM3zQAglAgAPWUQAmGWHAA03KAG8AvlAD0lqAFFs2APbZaMFESKPCKDBkkBzIlQg9CxcAig8RyIwgFcCZygOaGBHVGJogFpQnHxCKJiceNTsJPAIADopGTlFPGgAITVNCB09GC4jeFMLaygAeQBrCW0IPAAbFGxocYhvWmVWUYmpmbmoNFoGiT6AFQALCAxoVY2vfbV5eTHgSQA3CHGQVExcglYSchQKBiNP1Aowj+-DQP3owPIoOEEhQvDQVhsQ1Y8OQsL69icLlg7k83l8ARCIVe4Xy0TiCVKySSaVhniI2WJeUiZKKFLKkCq7HKUGUjIIADFJNgsLxugAybhczi8sJ5AAyPkIyA6UAl8BGY0m0ygs28fOAguFwAW+sNWCWmtOupesoICqwJttwHtwB2NgORxO2p8fkC6Fm1LU+AujkO2FYORJZqVNrezsVEf1Lt4kby0YkKNjUaFIvRDmcrhxXlQ+MCwSzTIK5JK7JSNIy9NTESrrJrySqTYNOZj8L6SORKaTioRA2GneTyF7iOGQA
### 💻 Code
```ts
type C1 = abstract new (a: 1) => 1
type C2 = new (a: 2) => 2
interface A extends C1, C2 {} // Error: Cannot assign an abstract constructor type to a non-abstract constructor type.
interface B extends C2, C1 {} // Ok
declare let a: A
declare let b: B
// These are both effectively abstract
new a(1), new a(2), new b(1), new b(2)
a = b // Ok
b = a // Error: Cannot assign an abstract constructor type to a non-abstract constructor type.
type AbstractFirst = C1 & C2
type AbstractLast = C2 & C1
declare let abstractFirst: AbstractFirst
declare let abstractLast: AbstractLast
// These are assignable to each other
abstractFirst = abstractLast
abstractLast = abstractFirst
b = abstractFirst // Error: Cannot assign an abstract constructor type to a non-abstract constructor type.
abstractFirst = b // Ok
b = abstractLast // Ok
abstractLast = b // Ok
```
### 🙁 Actual behavior
It seems that interfaces with both abstract and regular constructor overloads are only assignable from types with non-abstract first constructor overload.
### 🙂 Expected behavior
I don't expect any difference here.
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,512,107,845 | ui | [bug]: Something went wrong creating a new Next.js project. Please try again. | ### Describe the bug
I've copied this initiation command line from the documentation:
`bunx --bun shadcn@latest init`
I got this error after the project name question:
`Something went wrong creating a new Next.js project. Please try again.`
### Affected component/components
Project Setup
### How to reproduce
I interrupted the init command the first time, and then I continuously got the error.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Macbook pro 2021 M1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,512,130,179 | pytorch | No batching rule for aten::repeat_interleave.Tensor | ### 🐛 Describe the bug
Hi, there's no batching rule for `torch.repeat_interleave` and I'm filing an issue as requested.
I've attached a minimal reproducible script below:
```
import torch
from torch import Tensor
from torch.func import vmap
indices = torch.tensor([[4, 0, 8],
[2, 2, 8],
[2, 2, 8],
[4, 2, 6],
[2, 4, 6],
[2, 4, 6],
[2, 2, 8],
[0, 4, 8],
[2, 4, 6],
[4, 2, 6]])
def func(indices):
values = torch.arange(1, indices.shape[0] + 1)
expanded = torch.repeat_interleave(values, indices)
return expanded
output = vmap(func, in_dims=(0))(indices)
print('out: ',output)
```
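Until a batching rule exists, the per-row semantics that `vmap` would need can be expressed with an explicit loop. Below is a torch-free sketch (plain Python, just to illustrate what each row of the batched output should contain; in practice you would loop over tensor rows, or pad/mask if the per-row repeat counts summed to different lengths):

```python
def repeat_interleave(values, counts):
    # Mirrors torch.repeat_interleave(values, counts) for 1-D inputs:
    # each values[i] is repeated counts[i] times.
    out = []
    for v, c in zip(values, counts):
        out.extend([v] * c)
    return out

indices = [[4, 0, 8],
           [2, 2, 8]]

# What a batched repeat_interleave over dim 0 should produce per row.
batched = [repeat_interleave(range(1, len(row) + 1), row) for row in indices]
print(batched[0])  # [1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3]
```

In the example above every row of `indices` sums to the same total, so the per-row results happen to stack into a rectangular batch; with unequal sums the output would be ragged, which is presumably part of why a general batching rule is nontrivial.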
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1070 with Max-Q Design
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 9 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0+cu124
[pip3] torchaudio==2.4.0+cu124
[pip3] torchvision==0.19.0+cu124
[pip3] triton==3.0.0
[conda] Could not collect
```
```[tasklist]
### Tasks
```
cc @zou3519 @Chillee @samdow @kshitij12345 | triaged,actionable,module: vmap,module: functorch | low | Critical |
2,512,166,948 | vscode | Increased typing lag with many word separators | Steps to Reproduce:
1. Load VSCode version 1.86.0 or higher (1.93.0)
2. Fill a `Plain Text` file with 500,000+ word separators `editor.wordSeparators`
(any combination of these characters `` `~!@#$%^&*()-=+[{]}\|;:'",.<>/? ``)
`::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::`
3. Go to the end of the file and type any single `wordPattern` character (`a-zA-Z_` by default)
OR any of the language `triggerCharacters` (none for `Plain Text`)
OR just press ctrl+space to manually bring up intellisense
4. Watch as VSCode hangs for considerable time (and/or crashes)
5. Load up VSCode version 1.85.1 or lower
6. Repeat steps 2 & 3
7. Notice only a marginal 500-700ms lag spike

I found this to mainly cause problems with large JSON files as they contain a large amount of word separator characters `{}[]":,`
also notice that filling the file with non-word separators, doesn't cause any crashing `qwertyuiopasdfghjklzxcvbnm0123456789_ `
obviously 500,000 characters is a lot, but I would have never made this bug report if I never encounter it
and of course there's still large multi-second long lag spikes at only 10k chars etc
and noticeable stuttering at 1k chars
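For anyone trying to reproduce, a quick script to generate the test files (the counts and the `:` separator are just the ones used above):

```python
# Generate plain-text files filled with a word-separator character so the
# typing lag can be compared across VS Code versions and file sizes.
SEPARATOR = ":"  # any character from editor.wordSeparators reproduces it
for count in (1_000, 10_000, 500_000):
    path = f"separators_{count}.txt"
    with open(path, "w") as f:
        f.write(SEPARATOR * count)
    print(f"wrote {path}")
```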
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.93.0
- OS Version: Windows 11
| performance | low | Critical |
2,512,179,148 | material-ui | [material-ui][pigment-css] How to toggle between color modes | ### Steps to reproduce
Link to live example: (required) [codesandbox](https://codesandbox.io/p/devbox/elegant-andras-frnk5m?workspaceId=257d2a8f-3202-4ba8-a303-97d6c62a9cf6)
Steps:
1. Deploy https://github.com/mui/material-ui/blob/master/examples/material-ui-pigment-css-nextjs-ts/src/app/page.tsx
2. Add ``getSelector: (colorScheme) => colorScheme ? `.theme-${colorScheme}` : ":root"`` to the theme file
3. Add a toggle button with logic like `document.documentElement.classList.toggle("theme-light");` from the Pigment CSS documentation
### Current behavior
`<html>` adds `class="theme-light"` or `class="theme-dark"`
Color scheme does not change after button click
### Expected behavior
`<html>` adds `class="theme-light"` or `class="theme-dark"`
Color scheme changes on button click
### Context
I'm trying to use MUI v6 with Pigment CSS with the ability to select a particular color scheme, overriding the system defaults.
You'll also notice I had to comment out a Grid component in the codesandbox example. The Next.js Pigment CSS example does not currently render with nested `<Grid>` components without changing to a client component.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Replicated in Chrome
System:
OS: Linux 6.1 Ubuntu 20.04.6 LTS (Focal Fossa)
Binaries:
Node: 20.12.1 - /home/codespace/nvm/current/bin/node
npm: 10.5.0 - /home/codespace/nvm/current/bin/npm
pnpm: 8.15.6 - /home/codespace/nvm/current/bin/pnpm
Browsers:
Chrome: Not Found
npmPackages:
@emotion/react: 11.13.3
@emotion/styled: 11.13.0
@mui/core-downloads-tracker: 6.0.2
@mui/material: 6.0.2 => 6.0.2
@mui/material-pigment-css: 6.0.2 => 6.0.2
@mui/private-theming: 6.0.2
@mui/styled-engine: 6.0.2
@mui/system: 6.0.2
@mui/types: 7.2.16
@mui/utils: 6.0.2
@pigment-css/nextjs-plugin: latest => 0.0.22
@pigment-css/react: 0.0.21
@pigment-css/unplugin: 0.0.22
@types/react: latest => 18.3.4
react: latest => 18.3.1
react-dom: latest => 18.3.1
typescript: latest => 5.5.4
```
</details>
**Search keywords**: pigment-css v6 | new feature,package: pigment-css | low | Major |
2,512,183,879 | PowerToys | PowerToys Run ran into an issue | ### Microsoft PowerToys version
0.81.1.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Always on Top
### Steps to reproduce
```
Version: 0.81.1.0
OS Version: Microsoft Windows NT 10.0.22631.0
IntPtr Length: 8
x64: True
Date: 9/8/2024 9:24:04 AM
Exception:
System.TypeInitializationException: The type initializer for 'Microsoft.PowerToys.Run.Plugin.Calculator.Main' threw an exception.
---> System.TypeInitializationException: The type initializer for 'Mages.Core.Runtime.Global' threw an exception.
---> System.IO.FileNotFoundException: Could not load file or assembly 'System.Runtime.Numerics, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.
File name: 'System.Runtime.Numerics, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at Mages.Core.Runtime.Global..cctor()
--- End of inner exception stack trace ---
at Mages.Core.Runtime.GlobalScope..ctor(IDictionary`2 scope)
at Mages.Core.Engine..ctor(Configuration configuration)
at Microsoft.PowerToys.Run.Plugin.Calculator.CalculateEngine..ctor()
at Microsoft.PowerToys.Run.Plugin.Calculator.Main..cctor()
--- End of inner exception stack trace ---
at Microsoft.PowerToys.Run.Plugin.Calculator.Main.get_AdditionalOptions()
at PowerLauncher.SettingsReader.<>c.<GetDefaultPluginsSettings>b__14_0(PluginPair x)
at System.Linq.Enumerable.SelectListIterator`2.MoveNext()
at System.Linq.Enumerable.ToDictionary[TSource,TKey](IEnumerable`1 source, Func`2 keySelector, IEqualityComparer`1 comparer)
at PowerLauncher.SettingsReader.UpdateSettings(PowerLauncherSettings settings)
at PowerLauncher.SettingsReader..ctor(PowerToysRunSettings settings, ThemeManager themeManager)
at PowerLauncher.App.<>c__DisplayClass19_0.<OnStartup>b__0()
at Wox.Infrastructure.Stopwatch.Normal(String message, Action action)
at PowerLauncher.App.OnStartup(Object sender, StartupEventArgs e)
at System.Windows.Application.<.ctor>b__1_0(Object unused)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
```
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,512,191,135 | pytorch | Compiler is 2x Faster for Input Size 1 Compared to Sizes 2 and Above, Where Forward Pass Times Remain Consistent | ### 🐛 Describe the bug
In the model available at https://github.com/alita-moore/img-to-text, increasing the length of the input IDs beyond 1 causes the compiled forward pass to slow down by approximately 2x compared to when the length is 1. However, for input sizes of 2, 3, and higher, the processing speed remains constant regardless of further increases in input size. If the forward pass is not compiled, the speed remains consistent across all input sizes.
This behavior is demonstrated by the following code, which you can find in `dev.py`:
```python
import torch._dynamo
import time
from torch.nn.attention import SDPBackend
torch._dynamo.reset()
# get_model and config are defined in the linked img-to-text repo
compiled_model = get_model(config, "cuda")
compiled_model.decoder.model.setup_cache(
1,
config.decoder_args.max_seq_len,
config.encoder_args.max_output_patches,
device="cuda",
)
compiled_model.decoder.model = torch.compile(compiled_model.decoder.model, mode="max-autotune", fullgraph=True) # type: ignore
encoder_outputs = (
torch.randn(
1, config.encoder_args.max_output_patches, config.encoder_args.output_dimensions
)
.to("cuda")
.to(torch.bfloat16)
)
encoder_cache_pos = torch.arange(0, config.encoder_args.max_output_patches).to("cuda")
def run_test(n: int):
print(f"===={n}====")
for i in range(10):
with torch.inference_mode():
with torch.autocast("cuda", dtype=torch.bfloat16):
with torch.nn.attention.sdpa_kernel([SDPBackend.MATH]):
input_ids = torch.full(
(encoder_outputs.shape[0], n),
1,
dtype=torch.long,
device=encoder_outputs.device,
)
start = time.time()
cache_pos = torch.arange(0, n, device="cuda")
compiled_model.decoder.model(
input_ids=input_ids,
cache_pos=cache_pos,
encoder_outputs=encoder_outputs,
encoder_cache_pos=encoder_cache_pos,
use_encoder_cache=i != 0,
)
torch.cuda.synchronize()
print(time.time() - start)
run_test(1)
run_test(2)
run_test(3)
run_test(4)
run_test(5)
```
```
====1====
7.119003534317017
5.200806617736816
0.5054481029510498
0.004807710647583008
0.004517316818237305
0.00446319580078125
0.004462718963623047
0.0044858455657958984
0.004460811614990234
0.004508495330810547
====2====
11.30737042427063
10.24819278717041
0.7579021453857422
0.00656437873840332
0.0062541961669921875
0.006215333938598633
0.006233930587768555
0.006225109100341797
0.006213665008544922
0.0062372684478759766
====3====
0.010666608810424805
0.008718729019165039
0.5773739814758301
0.006607532501220703
0.006293773651123047
0.006253242492675781
0.006267070770263672
0.006255626678466797
0.00626063346862793
0.006249666213989258
====4====
0.010656595230102539
0.008497953414916992
0.5782616138458252
0.006651163101196289
0.006323099136352539
0.006283283233642578
0.006294965744018555
0.006295680999755859
0.006290435791015625
0.006298065185546875
====5====
0.010953187942504883
0.008683443069458008
0.5847327709197998
0.006703376770019531
0.006349325180053711
0.0063059329986572266
0.006310224533081055
0.0063097476959228516
0.006322622299194336
0.006304502487182617
```
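For reference, a quick sanity check on the numbers above: taking the median of the steady-state runs (the last five per size, copied from the log), size 2 is roughly 1.4x slower than size 1 in steady state, while sizes 2 and 3 are essentially identical — consistent with the compiler specializing sizes 0/1 into their own static-shape graph and treating 2+ as one dynamic dimension (the ~2x figure in the title is closer to the early, compile-inclusive runs). A small pure-Python sketch of that arithmetic:

```python
from statistics import median

# Steady-state timings (seconds) copied from the log above: last five runs per input size.
steady_state = {
    1: [0.00446319580078125, 0.004462718963623047, 0.0044858455657958984,
        0.004460811614990234, 0.004508495330810547],
    2: [0.006215333938598633, 0.006233930587768555, 0.006225109100341797,
        0.006213665008544922, 0.0062372684478759766],
    3: [0.006253242492675781, 0.006267070770263672, 0.006255626678466797,
        0.00626063346862793, 0.006249666213989258],
}

medians = {n: median(t) for n, t in steady_state.items()}
ratio_2_vs_1 = medians[2] / medians[1]
ratio_3_vs_2 = medians[3] / medians[2]
print(f"size 2 vs size 1: {ratio_2_vs_1:.2f}x")  # → size 2 vs size 1: 1.39x
print(f"size 3 vs size 2: {ratio_3_vs_2:.2f}x")  # → size 3 vs size 2: 1.00x
```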
### Versions
PyTorch version: 2.4.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1014-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 5300.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxruntime-gpu==1.19.0
[pip3] torch==2.4.0+cu124
[pip3] torchaudio==2.4.0+cu124
[pip3] torchvision==0.19.0+cu124
[pip3] triton==3.0.0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka | triaged,oncall: pt2,module: inductor,module: startup-tracing-compile | low | Critical |
2,512,223,630 | pytorch | [rocm] Unusable torch.ops.aten._scaled_dot_product_flash_attention_backward at 9.6TFLOPs | ### 🐛 Describe the bug
I am trying to debug LLama3 8B on MI300X and noticed that end to end throughput was at 83TFLOPs so i profiled it and noticed that `torch.ops.aten._scaled_dot_product_flash_attention_backward` takes up most of the time.
From tracing the strides and shapes of the inputs to sdpa_backward, I noticed that:
- H100: 183TFLOPs
- MI300X (nightly 2.5.0.dev20240907+rocm6.2): 9.6 TFLOPs
- MI300X (rocm6.2_ubuntu20.04_py3.9_pytorch_release_2.3.0): 19.9 TFLOPs
I have checked to confirm that it is not a hardware issue by running my torch.matmul and F.linear gemm benchmark and getting the expected result.
I have checked that this is the recommended way to use sdpa according to [this AMD blog](https://rocm.blogs.amd.com/artificial-intelligence/flash-attention/README.html). If there is another way that is more optimized, please let me know.
Below I have extracted the inputs using `DispatchLog` (aka `__torch_dispatch__`) and attached the reprod with the exact strides, shapes, etc. for the inputs into this op.
cc: @hongxiayang
```bash
Dispatch Log: aten._scaled_dot_product_flash_attention_backward.default(*('Tensor(shape=torch.Size([2, 32, 4096, 128]), dtype=torch.bfloat16, strides=(16777216, 128, 4096, 1), grad_fn=None)', 'Tensor(shape=torch.Size([2, 32, 4096, 128]), dtype=torch.bfloat16, strides=(16777216, 128, 4096, 1), grad_fn=<TransposeBackward0 object at 0x7ff748875ba0>)', 'Tensor(shape=torch.Size([2, 32, 4096, 128]), dtype=torch.bfloat16, strides=(16777216, 524288, 128, 1), grad_fn=<UnsafeViewBackward0 object at 0x7ff748875ba0>)', 'Tensor(shape=torch.Size([2, 32, 4096, 128]), dtype=torch.bfloat16, strides=(16777216, 524288, 128, 1), grad_fn=<UnsafeViewBackward0 object at 0x7ff748875ba0>)', 'Tensor(shape=torch.Size([2, 32, 4096, 128]), dtype=torch.bfloat16, strides=(16777216, 128, 4096, 1), grad_fn=<ScaledDotProductFlashAttentionBackward0 object at 0x7ff748875ba0>)', 'Tensor(shape=torch.Size([64, 4096]), dtype=torch.float32, strides=(4096, 1), grad_fn=None)', None, None, 4096, 4096, 0.0, True, 'Tensor(shape=torch.Size([]), dtype=torch.int64, strides=(), grad_fn=None)', 'Tensor(shape=torch.Size([]), dtype=torch.int64, strides=(), grad_fn=None)'), **{'scale': 0.08838834764831843})
```
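The `DispatchLog` above is the author's own helper; a minimal sketch of an equivalent op logger built on `torch.utils._python_dispatch.TorchDispatchMode` (the class and field names here are illustrative, not the author's code) might look like:

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class DispatchLog(TorchDispatchMode):
    """Record every aten op (with argument shapes/strides) as it is dispatched."""
    def __init__(self):
        super().__init__()
        self.calls = []

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        def fmt(a):
            if isinstance(a, torch.Tensor):
                return f"Tensor(shape={tuple(a.shape)}, strides={a.stride()}, dtype={a.dtype})"
            return repr(a)
        self.calls.append(f"{func}({', '.join(fmt(a) for a in args)})")
        return func(*args, **kwargs)

log = DispatchLog()
with log:
    x = torch.ones(2, 3)
    y = x + x
print(log.calls[-1])  # the aten.add call, with shapes and strides
```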




```python
import torch
# SKIP OVER _summarize_statistics and do_bench to get to the main REPROD CORE LOGIC
# patch triton to have warmup & rep be count and not the time in ms
# https://github.com/OrenLeung/triton/blob/dd53ac7ddfb63a20eea044c0f4ad79b1281efc45/python/triton/testing.py
def _summarize_statistics(times, quantiles, return_mode):
import torch
if quantiles is not None:
ret = torch.quantile(times, torch.tensor(quantiles, dtype=torch.float)).tolist()
if len(ret) == 1:
ret = ret[0]
return ret
if return_mode == "all":
return times.tolist()
return getattr(torch, return_mode)(times).item()
# patch triton to have warmup & rep be count and not the time in ms
# https://github.com/OrenLeung/triton/blob/dd53ac7ddfb63a20eea044c0f4ad79b1281efc45/python/triton/testing.py
def do_bench(fn, warmup=25, rep=100, grad_to_none=None, quantiles=None, fast_flush=True, return_mode="mean"):
assert return_mode in ["min", "max", "mean", "median", "all"]
import torch
fn()
torch.cuda.synchronize()
cache_size = 256 * 1024 * 1024
if fast_flush:
cache = torch.empty(int(cache_size // 4), dtype=torch.int, device='cuda')
else:
cache = torch.empty(int(cache_size), dtype=torch.int8, device='cuda')
# compute number of warmup and repeat
n_warmup = warmup
n_repeat = rep
start_event = [torch.cuda.Event(enable_timing=True) for i in range(n_repeat)]
end_event = [torch.cuda.Event(enable_timing=True) for i in range(n_repeat)]
# Warm-up
for _ in range(n_warmup):
fn()
# Benchmark
for i in range(n_repeat):
# we don't want `fn` to accumulate gradient values
# if it contains a backward pass. So we clear the
# provided gradients
if grad_to_none is not None:
for x in grad_to_none:
x.grad = None
# we clear the L2 cache before each run
cache.zero_()
# record time of `fn`
start_event[i].record()
fn()
end_event[i].record()
# Record clocks
torch.cuda.synchronize()
times = torch.tensor([s.elapsed_time(e) for s, e in zip(start_event, end_event)], dtype=torch.float)
return _summarize_statistics(times, quantiles, return_mode)
# Check if CUDA is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Shape and stride definitions from the log
shape_0 = [2, 32, 4096, 128] # For query, key, value, out tensors
shape_5 = [64, 4096] # For logsumexp
# Strides as provided in the dispatch log
stride_0_1_4 = [16777216, 128, 4096, 1] # For query, out_forward, grad_output
stride_2_3 = [16777216, 524288, 128, 1] # For key and value
stride_5 = [4096, 1] # For logsumexp
# Initialize tensors on the CUDA device
grad_output = torch.empty_strided(shape_0, stride_0_1_4, dtype=torch.bfloat16, device=device) # grad_output tensor
query = torch.empty_strided(shape_0, stride_0_1_4, dtype=torch.bfloat16, requires_grad=True, device=device) # query tensor
key = torch.empty_strided(shape_0, stride_2_3, dtype=torch.bfloat16, requires_grad=True, device=device) # key tensor
value = torch.empty_strided(shape_0, stride_2_3, dtype=torch.bfloat16, requires_grad=True, device=device) # value tensor
out_forward = torch.empty_strided(shape_0, stride_0_1_4, dtype=torch.bfloat16, requires_grad=True, device=device) # output from forward pass
logsumexp = torch.empty_strided(shape_5, stride_5, dtype=torch.float32, device=device) # logsumexp tensor
# Dummy tensors for Philox RNG seed and offset (provided as scalars in the dispatch log)
philox_seed = torch.tensor(0, dtype=torch.int64, device=device) # Philox seed tensor
philox_offset = torch.tensor(0, dtype=torch.int64, device=device) # Philox offset tensor
# Other scalar inputs
max_q = 4096
max_k = 4096
dropout_p = 0.0
is_causal = True
scale = 0.08838834764831843 # Provided scale from the log (1/sqrt(128))
# Call aten _scaled_dot_product_flash_attention_backward on CUDA
# 8 * 32 * 4096 // 32 * 4096
def run_sdpa_backward():
result = torch.ops.aten._scaled_dot_product_flash_attention_backward(
grad_output, # Gradient of the output
query, # Query tensor
key, # Key tensor
value, # Value tensor
out_forward, # Output of the forward pass
logsumexp, # Logsumexp tensor
None, # Cumulative sequence for query (None in the dispatch log)
None, # Cumulative sequence for key (None in the dispatch log)
max_q, # Maximum sequence length for query
max_k, # Maximum sequence length for key
dropout_p, # Dropout probability
is_causal, # Causal flag
philox_seed, # Philox RNG seed
philox_offset, # Philox RNG offset
scale=scale # Scaling factor
)
ms_sdpa_backward = do_bench(run_sdpa_backward, warmup=30, rep=200)
nHeads = 32
embedDim = 4096
seq_len = 4096
batch_size = 2
nFLOPS_sdpa_per_token = 8 * nHeads * embedDim // nHeads * seq_len
num_token = batch_size * seq_len
nFLOPS_sdpa = nFLOPS_sdpa_per_token * num_token
tflops_sdpa = nFLOPS_sdpa / ms_sdpa_backward * 1e-9
print(f"TFLOPS for _scaled_dot_product_flash_attention_backward: {tflops_sdpa}")
```
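Note that FLOP-counting conventions for attention vary, so the reported TFLOPs depend on the formula used. A common convention (used e.g. by flash-attention benchmarks) counts the forward pass as 4·b·h·s²·d (two matmuls, 2 FLOPs per multiply-accumulate), the backward as roughly 2.5× the forward due to recomputation, and halves it for causal masking. A sketch of that accounting for the shapes above — this is an assumed convention for cross-checking, not the formula the script uses:

```python
def sdpa_backward_tflops(batch, heads, seq_len, head_dim, ms, causal=True):
    """Backward-pass TFLOP/s under a common flash-attention counting convention."""
    # Forward: QK^T and PV matmuls, 2 FLOPs per multiply-accumulate each.
    fwd_flops = 4 * batch * heads * seq_len * seq_len * head_dim
    # Backward recomputes the forward, ~2.5x forward FLOPs.
    bwd_flops = 2.5 * fwd_flops
    if causal:
        bwd_flops /= 2  # only the lower triangle is computed
    return bwd_flops / (ms * 1e-3) / 1e12

# Shapes from the repro above: batch=2, heads=32, seq=4096, head_dim=128.
print(sdpa_backward_tflops(2, 32, 4096, 128, ms=10.0))  # ≈ 68.7 TFLOP/s for a 10 ms backward
```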
### Versions
## nightly
```bash
$ pip list | grep torch
pytorch-triton-rocm 3.0.0+757b6a61e7
torch 2.5.0.dev20240907+rocm6.2
torchaudio 2.5.0.dev20240907+rocm6.2
torchvision 0.20.0.dev20240907+rocm6.2
```
## rocm 6.2 docker image
```
torch 2.3.0a0+git96dd291
torchvision 0.18.0a0+68ba7ec
```
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: performance,module: rocm,triaged,module: sdpa | low | Critical |
2,512,225,355 | transformers | Any plans on adding Flash Attention 3? | As title | Core: Modeling,Feature request,Flash Attention | low | Minor |
2,512,227,549 | rust | Access the non-public function from object created by default(). | The following code can successfully compile.
```
#![allow(unused)]
#[derive(Default)]
pub struct Bar2 { i: i32 }
impl Bar2 {
fn f(&self) -> bool { true }
}
pub mod foo {
#[derive(Default)]
pub struct Bar { i: ::Bar2 }
impl Bar {
// note that this function is not public
fn f(&self) -> bool { false }
}
impl ::std::ops::Deref for Bar {
type Target = ::Bar2;
fn deref(&self) -> &::Bar2 { &self.i }
}
}
fn main() {
let bar = foo::Bar::default();
print!("{}",bar.f()); // true
}
```
Here it prints 'true', which I think means the `Bar2` implementation of `f` is the one being called.
But isn't the function `f` non-public?
And if I comment out these lines of code:
```
impl ::std::ops::Deref for Bar {
type Target = ::Bar2;
fn deref(&self) -> &::Bar2 { &self.i }
}
```
Then the program no longer compiles.
2,512,229,432 | opencv | Android SDK Kotlin 2.0 internal compiler error | ### System Information
I was not able to find a better place to report this issue.
OpenCV Android C++ SDK.
Internal compiler error on Kotlin 2.0.
Fixed with Kotlin 1.9.
Targeting Android SDK 34, JVM version 17.
### Detailed description
> Task :OpenCV:compileDebugKotlin FAILED
e: org.jetbrains.kotlin.backend.common.BackendException: Backend Internal error: Exception during IR lowering
File being compiled: C:/Users/kamja/AndroidStudioProjects/KamCam3/OpenCV/java/src/org/opencv/core/MatAt.kt
The root cause java.lang.RuntimeException was thrown at: org.jetbrains.kotlin.backend.jvm.codegen.FunctionCodegen.generate(FunctionCodegen.kt:47)
at org.jetbrains.kotlin.backend.common.CodegenUtil.reportBackendException(CodegenUtil.kt:253)
at org.jetbrains.kotlin.backend.common.CodegenUtil.reportBackendException$default(CodegenUtil.kt:236)
at org.jetbrains.kotlin.backend.common.phaser.PerformByIrFilePhase.invokeSequential(performByIrFile.kt:65)
at org.jetbrains.kotlin.backend.common.phaser.PerformByIrFilePhase.invoke(performByIrFile.kt:52)
at org.jetbrains.kotlin.backend.common.phaser.PerformByIrFilePhase.invoke(performByIrFile.kt:38)
at org.jetbrains.kotlin.backend.common.phaser.NamedCompilerPhase.phaseBody(CompilerPhase.kt:166)
at org.jetbrains.kotlin.backend.common.phaser.AbstractNamedCompilerPhase.invoke(CompilerPhase.kt:113)
at org.jetbrains.kotlin.backend.common.phaser.CompositePhase.invoke(PhaseBuilders.kt:27)
at org.jetbrains.kotlin.backend.common.phaser.CompositePhase.invoke(PhaseBuilders.kt:14)
at org.jetbrains.kotlin.backend.common.phaser.NamedCompilerPhase.phaseBody(CompilerPhase.kt:166)
at org.jetbrains.kotlin.backend.common.phaser.AbstractNamedCompilerPhase.invoke(CompilerPhase.kt:113)
at org.jetbrains.kotlin.backend.common.phaser.CompilerPhaseKt.invokeToplevel(CompilerPhase.kt:62)
at org.jetbrains.kotlin.backend.jvm.JvmIrCodegenFactory.invokeCodegen(JvmIrCodegenFactory.kt:371)
at org.jetbrains.kotlin.codegen.CodegenFactory.generateModule(CodegenFactory.kt:47)
at org.jetbrains.kotlin.backend.jvm.JvmIrCodegenFactory.generateModuleInFrontendIRMode(JvmIrCodegenFactory.kt:433)
at org.jetbrains.kotlin.cli.jvm.compiler.pipeline.JvmCompilerPipelineKt.generateCodeFromIr(jvmCompilerPipeline.kt:246)
at org.jetbrains.kotlin.cli.jvm.compiler.pipeline.JvmCompilerPipelineKt.compileModulesUsingFrontendIrAndLightTree(jvmCompilerPipeline.kt:142)
at org.jetbrains.kotlin.cli.jvm.K2JVMCompiler.doExecute(K2JVMCompiler.kt:148)
at org.jetbrains.kotlin.cli.jvm.K2JVMCompiler.doExecute(K2JVMCompiler.kt:43)
at org.jetbrains.kotlin.cli.common.CLICompiler.execImpl(CLICompiler.kt:103)
at org.jetbrains.kotlin.cli.common.CLICompiler.execImpl(CLICompiler.kt:49)
at org.jetbrains.kotlin.cli.common.CLITool.exec(CLITool.kt:101)
at org.jetbrains.kotlin.incremental.IncrementalJvmCompilerRunner.runCompiler(IncrementalJvmCompilerRunner.kt:464)
at org.jetbrains.kotlin.incremental.IncrementalJvmCompilerRunner.runCompiler(IncrementalJvmCompilerRunner.kt:73)
at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.doCompile(IncrementalCompilerRunner.kt:506)
at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compileImpl(IncrementalCompilerRunner.kt:423)
at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compileNonIncrementally(IncrementalCompilerRunner.kt:301)
at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compile(IncrementalCompilerRunner.kt:129)
at org.jetbrains.kotlin.daemon.CompileServiceImplBase.execIncrementalCompiler(CompileServiceImpl.kt:675)
at org.jetbrains.kotlin.daemon.CompileServiceImplBase.access$execIncrementalCompiler(CompileServiceImpl.kt:92)
at org.jetbrains.kotlin.daemon.CompileServiceImpl.compile(CompileServiceImpl.kt:1660)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at java.rmi/sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:360)
at java.rmi/sun.rmi.transport.Transport$1.run(Transport.java:200)
at java.rmi/sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
at java.rmi/sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:587)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:705)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:704)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.RuntimeException: Exception while generating code for:
FUN name:setV2c visibility:public modality:OPEN <> ($this:org.opencv.core.AtableUByte, v:org.opencv.core.Mat.Tuple2<kotlin.UByte>) returnType:kotlin.Unit
overridden:
public abstract fun setV2c (v: @[FlexibleNullability] org.opencv.core.Mat.Tuple2<@[FlexibleNullability] T of org.opencv.core.Mat.Atable?>?): kotlin.Unit declared in org.opencv.core.Mat.Atable
$this: VALUE_PARAMETER name:<this> type:org.opencv.core.AtableUByte
VALUE_PARAMETER name:v index:0 type:org.opencv.core.Mat.Tuple2<kotlin.UByte>
BLOCK_BODY
VAR name:data type:kotlin.UByteArray [val]
CALL 'public final fun <unsafe-coerce> <T, R> (v: T of kotlin.jvm.internal.<unsafe-coerce>): R of kotlin.jvm.internal.<unsafe-coerce> declared in kotlin.jvm.internal' type=kotlin.UByteArray origin=null
<T>: kotlin.ByteArray
<R>: kotlin.UByteArray
v: BLOCK type=kotlin.ByteArray origin=null
VAR IR_TEMPORARY_VARIABLE name:tmp0 type:kotlin.ByteArray [val]
CONSTRUCTOR_CALL 'public constructor <init> (size: kotlin.Int) [primary] declared in kotlin.ByteArray' type=kotlin.ByteArray origin=null
size: CONST Int type=kotlin.Int value=2
CALL 'public final fun set (index: kotlin.Int, value: kotlin.Byte): kotlin.Unit [operator] declared in kotlin.ByteArray' type=kotlin.Unit origin=null
$this: GET_VAR 'val tmp0: kotlin.ByteArray [val] declared in org.opencv.core.AtableUByte.setV2c' type=kotlin.ByteArray origin=null
index: CONST Int type=kotlin.Int value=0
value: CALL 'public final fun <unsafe-coerce> <T, R> (v: T of kotlin.jvm.internal.<unsafe-coerce>): R of kotlin.jvm.internal.<unsafe-coerce> declared in kotlin.jvm.internal' type=kotlin.Byte origin=null
<T>: @[FlexibleNullability] kotlin.UByte?
<R>: kotlin.Byte
v: CALL 'public open fun get_0 (): @[FlexibleNullability] T of org.opencv.core.Mat.Tuple2? declared in org.opencv.core.Mat.Tuple2' type=@[FlexibleNullability] kotlin.UByte? origin=GET_PROPERTY
$this: GET_VAR 'v: org.opencv.core.Mat.Tuple2<kotlin.UByte> declared in org.opencv.core.AtableUByte.setV2c' type=org.opencv.core.Mat.Tuple2<kotlin.UByte> origin=null
CALL 'public final fun set (index: kotlin.Int, value: kotlin.Byte): kotlin.Unit [operator] declared in kotlin.ByteArray' type=kotlin.Unit origin=null
$this: GET_VAR 'val tmp0: kotlin.ByteArray [val] declared in org.opencv.core.AtableUByte.setV2c' type=kotlin.ByteArray origin=null
index: CONST Int type=kotlin.Int value=1
value: CALL 'public final fun <unsafe-coerce> <T, R> (v: T of kotlin.jvm.internal.<unsafe-coerce>): R of kotlin.jvm.internal.<unsafe-coerce> declared in kotlin.jvm.internal' type=kotlin.Byte origin=null
<T>: @[FlexibleNullability] kotlin.UByte?
<R>: kotlin.Byte
v: CALL 'public open fun get_1 (): @[FlexibleNullability] T of org.opencv.core.Mat.Tuple2? declared in org.opencv.core.Mat.Tuple2' type=@[FlexibleNullability] kotlin.UByte? origin=GET_PROPERTY
$this: GET_VAR 'v: org.opencv.core.Mat.Tuple2<kotlin.UByte> declared in org.opencv.core.AtableUByte.setV2c' type=org.opencv.core.Mat.Tuple2<kotlin.UByte> origin=null
GET_VAR 'val tmp0: kotlin.ByteArray [val] declared in org.opencv.core.AtableUByte.setV2c' type=kotlin.ByteArray origin=null
COMPOSITE type=kotlin.Unit origin=null
CALL 'public final fun put-7tiRaYo (indices: kotlin.IntArray, data: kotlin.UByteArray): kotlin.Int declared in org.opencv.core.MatAtKt' type=kotlin.Int origin=null
$receiver: GET_FIELD 'FIELD PROPERTY_BACKING_FIELD name:mat type:org.opencv.core.Mat visibility:private [final]' type=org.opencv.core.Mat origin=null
receiver: GET_VAR '<this>: org.opencv.core.AtableUByte declared in org.opencv.core.AtableUByte.setV2c' type=org.opencv.core.AtableUByte origin=null
indices: GET_FIELD 'FIELD PROPERTY_BACKING_FIELD name:indices type:kotlin.IntArray visibility:private [final]' type=kotlin.IntArray origin=null
receiver: GET_VAR '<this>: org.opencv.core.AtableUByte declared in org.opencv.core.AtableUByte.setV2c' type=org.opencv.core.AtableUByte origin=null
data: GET_VAR 'val data: kotlin.UByteArray [val] declared in org.opencv.core.AtableUByte.setV2c' type=kotlin.UByteArray origin=null
COMPOSITE type=kotlin.Unit origin=null
at org.jetbrains.kotlin.backend.jvm.codegen.FunctionCodegen.generate(FunctionCodegen.kt:47)
at org.jetbrains.kotlin.backend.jvm.codegen.FunctionCodegen.generate$default(FunctionCodegen.kt:40)
at org.jetbrains.kotlin.backend.jvm.codegen.ClassCodegen.generateMethodNode(ClassCodegen.kt:406)
at org.jetbrains.kotlin.backend.jvm.codegen.ClassCodegen.generateMethod(ClassCodegen.kt:423)
at org.jetbrains.kotlin.backend.jvm.codegen.ClassCodegen.generate(ClassCodegen.kt:168)
at org.jetbrains.kotlin.backend.jvm.FileCodegen.lower(JvmPhases.kt:39)
at org.jetbrains.kotlin.backend.common.phaser.PhaseFactoriesKt.createFilePhase$lambda$4(PhaseFactories.kt:71)
at org.jetbrains.kotlin.backend.common.phaser.PhaseBuildersKt$createSimpleNamedCompilerPhase$1.phaseBody(PhaseBuilders.kt:69)
at org.jetbrains.kotlin.backend.common.phaser.SimpleNamedCompilerPhase.phaseBody(CompilerPhase.kt:226)
at org.jetbrains.kotlin.backend.common.phaser.AbstractNamedCompilerPhase.invoke(CompilerPhase.kt:113)
at org.jetbrains.kotlin.backend.common.phaser.PerformByIrFilePhase.invokeSequential(performByIrFile.kt:62)
... 45 more
Caused by: java.lang.IllegalArgumentException: Inline class types should have the same representation: Lkotlin/UByte; != B
at org.jetbrains.kotlin.backend.jvm.intrinsics.UnsafeCoerce.invoke(UnsafeCoerce.kt:26)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitCall(ExpressionCodegen.kt:600)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitCall(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.expressions.IrCall.accept(IrCall.kt:24)
at org.jetbrains.kotlin.backend.jvm.intrinsics.ArraySet.invoke(ArraySet.kt:32)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitCall(ExpressionCodegen.kt:600)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitCall(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.expressions.IrCall.accept(IrCall.kt:24)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitStatementContainer(ExpressionCodegen.kt:579)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitContainerExpression(ExpressionCodegen.kt:593)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitContainerExpression(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.visitors.IrElementVisitor$DefaultImpls.visitBlock(IrElementVisitor.kt:122)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitBlock(ExpressionCodegen.kt:413)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitBlock(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.expressions.IrBlock.accept(IrBlock.kt:18)
at org.jetbrains.kotlin.backend.jvm.intrinsics.UnsafeCoerce.invoke(UnsafeCoerce.kt:30)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitCall(ExpressionCodegen.kt:600)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitCall(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.expressions.IrCall.accept(IrCall.kt:24)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitVariable(ExpressionCodegen.kt:790)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitVariable(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.declarations.IrVariable.accept(IrVariable.kt:36)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitStatementContainer(ExpressionCodegen.kt:579)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitBlockBody(ExpressionCodegen.kt:584)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.visitBlockBody(ExpressionCodegen.kt:138)
at org.jetbrains.kotlin.ir.expressions.IrBlockBody.accept(IrBlockBody.kt:20)
at org.jetbrains.kotlin.backend.jvm.codegen.ExpressionCodegen.generate(ExpressionCodegen.kt:240)
at org.jetbrains.kotlin.backend.jvm.codegen.FunctionCodegen.doGenerate(FunctionCodegen.kt:123)
at org.jetbrains.kotlin.backend.jvm.codegen.FunctionCodegen.generate(FunctionCodegen.kt:44)
... 55 more
Execution failed for task ':OpenCV:compileDebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
> Internal compiler error. See log for more details
### Steps to reproduce
Simply compile with Kotlin 2.0; it's been 4 months.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,platform: android,community help requested | low | Critical |
2,512,233,604 | rust | [BUG] `llvm-cov` warning `mismatched data` when triple slash safety comment above `unsafe fn` | # bug
`llvm-cov` warning `mismatched data` when triple slash safety comment above `unsafe fn`
## reproduce
https://github.com/loynoir/reproduce-rust-130097
```rs
pub use bar::Bar;
mod bar {
pub struct Bar<T>(T);
impl Bar<i32> {
/// # Safety
///
/// be careful
pub const unsafe fn from_unchecked(value: i32) -> Self {
Bar(value)
}
}
}
```
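For reference, the reproducer can be made self-contained and run directly; the `value` accessor and `main` below are hypothetical additions for exercising the API and are not part of the linked repository:

```rust
pub use bar::Bar;

mod bar {
    pub struct Bar<T>(T);

    impl Bar<i32> {
        /// # Safety
        ///
        /// be careful
        pub const unsafe fn from_unchecked(value: i32) -> Self {
            Bar(value)
        }

        // Hypothetical accessor, added only so the sketch can be exercised.
        pub fn value(&self) -> i32 {
            self.0
        }
    }
}

fn main() {
    // SAFETY: any i32 is acceptable for this sketch.
    let b = unsafe { Bar::from_unchecked(42) };
    println!("{}", b.value());
}
```

Instrumenting code shaped like this with `llvm-cov` is what reportedly produces the `mismatched data` warning; the warning concerns coverage metadata only, and the code itself compiles and runs normally.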
## workaround
```rs
pub use bar::Bar;
mod bar {
pub struct Bar<T>(T);
impl Bar<i32> {
pub const unsafe fn from_unchecked(value: i32) -> Self {
Bar(value)
}
}
}
```
## related
`llvm-cov` warning `mismatched data` when double slash comment above `use`
https://github.com/rust-lang/rust/issues/130065
| A-LLVM,T-compiler,C-bug,A-code-coverage,S-has-mcve | low | Critical |
2,512,246,450 | flutter | [engine] snapshot delegate must check toImage/toImageSync calls against max texture size. | ### Steps to reproduce
1. Run app on iOS or Android
2. Click on "Random drawing"
### Expected results
The app should keep working, with the image generation failing gracefully when the exported image is too large.
### Actual results
The app crashes without warning or errors.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:ui';
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: Scaffold(
body: Center(
child: TextButton(
onPressed: _generate,
child: Text('Random drawing'),
),
),
),
);
}
}
void _generate() async {
const width = 13948;
const height = 12444;
final recorder = PictureRecorder();
final canvas = Canvas(recorder);
canvas.drawRect(const Rect.fromLTRB(0, 0, width + 0, height + 0),
Paint()..color = Colors.black45);
final picture = recorder.endRecording();
final image = picture.toImageSync(width, height);
picture.dispose();
await image.toByteData(format: ImageByteFormat.png);
}
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
Hardware Model: Mac14,2
Process: Runner [17801]
Path: /Users/USER/Library/Developer/CoreSimulator/Devices/CE2827ED-E7E0-4CD3-9DB4-D21344BD2A21/data/Containers/Bundle/Application/7B00E338-99C8-4339-AECB-546F07522499/Runner.app/Runner
Identifier: com.example.segFault
Version: 0.1.0 (0.1.0)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [6919]
Coalition: com.apple.CoreSimulator.SimDevice.CE2827ED-E7E0-4CD3-9DB4-D21344BD2A21 [1425]
Responsible Process: SimulatorTrampoline [710]
Date/Time: 2024-09-08 09:27:43.2678 +0200
Launch Time: 2024-09-08 09:25:35.0578 +0200
OS Version: macOS 14.6.1 (23G93)
Release Type: User
Report Version: 104
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000128018000
Exception Codes: 0x0000000000000001, 0x0000000128018000
VM Region Info: 0x128018000 is not in any region. Bytes after previous region: 1 Bytes before following region: 1131855872
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
IOSurface 12743c000-128018000 [ 11.9M] rw-/rw- SM=PRV
---> GAP OF 0x4376c000 BYTES
Stack Guard 16b784000-16ef88000 [ 56.0M] ---/rwx SM=NUL
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [17801]
Triggered by Thread: 5
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x100729170 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x10073a660 mach_msg2_internal + 76
2 libsystem_kernel.dylib 0x100731318 mach_msg_overwrite + 532
3 libsystem_kernel.dylib 0x1007294e8 mach_msg + 20
4 CoreFoundation 0x18040e684 __CFRunLoopServiceMachPort + 156
5 CoreFoundation 0x180408d64 __CFRunLoopRun + 1148
6 CoreFoundation 0x1804084d4 CFRunLoopRunSpecific + 572
7 GraphicsServices 0x18ef2aae4 GSEventRunModal + 160
8 UIKitCore 0x1853d0a28 -[UIApplication _run] + 868
9 UIKitCore 0x1853d46b0 UIApplicationMain + 124
10 UIKitCore 0x1848736a8 0x1847df000 + 607912
11 Runner 0x10067fd5c static UIApplicationDelegate.main() + 120
12 Runner 0x10067fcd4 static AppDelegate.$main() + 44
13 Runner 0x10067fdd8 main + 28 (AppDelegate.swift:5)
14 dyld_sim 0x100805544 start_sim + 20
15 dyld 0x100a92154 start + 2476
Thread 1:
0 libsystem_pthread.dylib 0x1007b65cc start_wqthread + 0
Thread 2:: com.apple.uikit.eventfetch-thread
0 libsystem_kernel.dylib 0x100729170 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x10073a660 mach_msg2_internal + 76
2 libsystem_kernel.dylib 0x100731318 mach_msg_overwrite + 532
3 libsystem_kernel.dylib 0x1007294e8 mach_msg + 20
4 CoreFoundation 0x18040e684 __CFRunLoopServiceMachPort + 156
5 CoreFoundation 0x180408d64 __CFRunLoopRun + 1148
6 CoreFoundation 0x1804084d4 CFRunLoopRunSpecific + 572
7 Foundation 0x180dd340c -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 208
8 Foundation 0x180dd3630 -[NSRunLoop(NSRunLoop) runUntilDate:] + 60
9 UIKitCore 0x1854773f8 -[UIEventFetcher threadMain] + 404
10 Foundation 0x180df9c64 __NSThread__start__ + 720
11 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
12 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 3:: io.flutter.1.ui
0 libsystem_kernel.dylib 0x100729170 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x10073a660 mach_msg2_internal + 76
2 libsystem_kernel.dylib 0x100731318 mach_msg_overwrite + 532
3 libsystem_kernel.dylib 0x1007294e8 mach_msg + 20
4 CoreFoundation 0x18040e684 __CFRunLoopServiceMachPort + 156
5 CoreFoundation 0x180408d64 __CFRunLoopRun + 1148
6 CoreFoundation 0x1804084d4 CFRunLoopRunSpecific + 572
7 Flutter 0x104813e28 fml::MessageLoopDarwin::Run() + 88 (message_loop_darwin.mm:51)
8 Flutter 0x10480cc20 fml::MessageLoopImpl::DoRun() + 40 (message_loop_impl.cc:94)
9 Flutter 0x104812a2c fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0::operator()() const + 164 (thread.cc:154) [inlined]
10 Flutter 0x104812a2c decltype(std::declval<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>()()) std::_fl::__invoke[abi:v15000]<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:403) [inlined]
11 Flutter 0x104812a2c void std::_fl::__invoke_void_return_wrapper<void, true>::__call<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:488) [inlined]
12 Flutter 0x104812a2c std::_fl::__function::__alloc_func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()[abi:v15000]() + 164 (function.h:185) [inlined]
13 Flutter 0x104812a2c std::_fl::__function::__func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()() + 184 (function.h:359)
14 Flutter 0x104812738 std::_fl::__function::__value_func<void ()>::operator()[abi:v15000]() const + 20 (function.h:512) [inlined]
15 Flutter 0x104812738 std::_fl::function<void ()>::operator()() const + 20 (function.h:1187) [inlined]
16 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::operator()(void*) const + 24 (thread.cc:76) [inlined]
17 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::__invoke(void*) + 36 (thread.cc:73)
18 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
19 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 4:: io.flutter.1.raster
0 libsystem_kernel.dylib 0x100729170 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x10073a660 mach_msg2_internal + 76
2 libsystem_kernel.dylib 0x100731318 mach_msg_overwrite + 532
3 libsystem_kernel.dylib 0x1007294e8 mach_msg + 20
4 CoreFoundation 0x18040e684 __CFRunLoopServiceMachPort + 156
5 CoreFoundation 0x180408d64 __CFRunLoopRun + 1148
6 CoreFoundation 0x1804084d4 CFRunLoopRunSpecific + 572
7 Flutter 0x104813e28 fml::MessageLoopDarwin::Run() + 88 (message_loop_darwin.mm:51)
8 Flutter 0x10480cc20 fml::MessageLoopImpl::DoRun() + 40 (message_loop_impl.cc:94)
9 Flutter 0x104812a2c fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0::operator()() const + 164 (thread.cc:154) [inlined]
10 Flutter 0x104812a2c decltype(std::declval<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>()()) std::_fl::__invoke[abi:v15000]<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:403) [inlined]
11 Flutter 0x104812a2c void std::_fl::__invoke_void_return_wrapper<void, true>::__call<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:488) [inlined]
12 Flutter 0x104812a2c std::_fl::__function::__alloc_func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()[abi:v15000]() + 164 (function.h:185) [inlined]
13 Flutter 0x104812a2c std::_fl::__function::__func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()() + 184 (function.h:359)
14 Flutter 0x104812738 std::_fl::__function::__value_func<void ()>::operator()[abi:v15000]() const + 20 (function.h:512) [inlined]
15 Flutter 0x104812738 std::_fl::function<void ()>::operator()() const + 20 (function.h:1187) [inlined]
16 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::operator()(void*) const + 24 (thread.cc:76) [inlined]
17 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::__invoke(void*) + 36 (thread.cc:73)
18 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
19 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 5 Crashed:: io.flutter.1.io
0 Flutter 0x1051055f0 unsigned int vector[4] skcms_private::baseline::load<unsigned int vector[4], char>(char const*) + 0 (Transform_inl.h:100) [inlined]
1 Flutter 0x1051055f0 skcms_private::baseline::Exec_load_8888_k(skcms_private::baseline::NoCtx, char const*, char*, float vector[4]&, float vector[4]&, float vector[4]&, float vector[4]&, int) + 4 (Transform_inl.h:878) [inlined]
2 Flutter 0x1051055f0 skcms_private::baseline::Exec_load_8888(skcms_private::baseline::StageList, void const**, char const*, char*, float vector[4], float vector[4], float vector[4], float vector[4], int) + 4 (Transform_inl.h:877)
3 Flutter 0x10510536c skcms_private::baseline::exec_stages(void (**)(skcms_private::baseline::StageList, void const**, char const*, char*, float vector[4], float vector[4], float vector[4], float vector[4], int), void const**, char const*, char*, int) + 44 (Transform_inl.h:1482) [inlined]
4 Flutter 0x10510536c skcms_private::baseline::run_program(skcms_private::Op const*, void const**, long, char const*, char*, int, unsigned long, unsigned long) + 172 (Transform_inl.h:1530)
5 Flutter 0x105104174 skcms_Transform + 3008 (skcms.cc:2807)
6 Flutter 0x104b8edec skcms(char*, char const*, int, skcms_PixelFormat, skcms_AlphaFormat, skcms_PixelFormat, skcms_AlphaFormat) + 40 (SkImageEncoderFns.h:38) [inlined]
7 Flutter 0x104b8edec transform_scanline_bgrA(char*, char const*, int, int) + 56 (SkImageEncoderFns.h:79)
8 Flutter 0x104b8e984 SkPngEncoderImpl::onEncodeRows(int) + 184 (SkPngEncoderImpl.cpp:462)
9 Flutter 0x104a2c478 SkEncoder::encodeRows(int) + 72 (SkEncoder.cpp:22)
10 Flutter 0x104b8eb9c SkPngEncoder::Encode(SkWStream*, SkPixmap const&, SkPngEncoder::Options const&) + 48 (SkPngEncoderImpl.cpp:510)
11 Flutter 0x104b8ec48 SkPngEncoder::Encode(GrDirectContext*, SkImage const*, SkPngEncoder::Options const&) + 124 (SkPngEncoderImpl.cpp:522)
12 Flutter 0x104c90384 flutter::EncodeImage(sk_sp<SkImage> const&, flutter::ImageByteFormat) + 136 (image_encoding.cc:215)
13 Flutter 0x104c90dd0 flutter::(anonymous namespace)::EncodeImageAndInvokeDataCallback(sk_sp<flutter::DlImage> const&, std::_fl::unique_ptr<tonic::DartPersistentValue, std::_fl::default_delete<tonic::DartPersistentValue>>, flutter::ImageByteFormat, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, fml::WeakPtr<GrDirectContext> const&, fml::TaskRunnerAffineWeakPtr<flutter::SnapshotDelegate> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&, bool)::$_1::operator()(fml::StatusOr<sk_sp<SkImage>> const&) const + 136 (image_encoding.cc:131)
14 Flutter 0x104cbb4a4 std::_fl::__function::__value_func<void (fml::StatusOr<sk_sp<SkImage>>)>::operator()[abi:v15000](fml::StatusOr<sk_sp<SkImage>>&&) const + 24 (function.h:512) [inlined]
15 Flutter 0x104cbb4a4 std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>::operator()(fml::StatusOr<sk_sp<SkImage>>) const + 24 (function.h:1187) [inlined]
16 Flutter 0x104cbb4a4 flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()::operator()() const + 88 (image_encoding_impeller.cc:202) [inlined]
17 Flutter 0x104cbb4a4 decltype(std::declval<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()&>()()) std::_fl::__invoke[abi:v15000]<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()&>(flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()&) + 88 (invoke.h:403) [inlined]
18 Flutter 0x104cbb4a4 void std::_fl::__invoke_void_return_wrapper<void, true>::__call<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()&>(flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()&) + 88 (invoke.h:488) [inlined]
19 Flutter 0x104cbb4a4 std::_fl::__function::__alloc_func<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'(), std::_fl::allocator<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()>, void ()>::operator()[abi:v15000]() + 88 (function.h:185) [inlined]
20 Flutter 0x104cbb4a4 std::_fl::__function::__func<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'(), std::_fl::allocator<flutter::ImageEncodingImpeller::ConvertImageToRaster(sk_sp<flutter::DlImage> const&, std::_fl::function<void (fml::StatusOr<sk_sp<SkImage>>)>, fml::RefPtr<fml::TaskRunner> const&, fml::RefPtr<fml::TaskRunner> const&, std::_fl::shared_ptr<fml::SyncSwitch const> const&, std::_fl::shared_ptr<impeller::Context> const&)::$_0::operator()(fml::StatusOr<sk_sp<SkImage>>)::'lambda'()>, void ()>::operator()() + 100 (function.h:359)
21 Flutter 0x10480cd00 std::_fl::__function::__value_func<void ()>::operator()[abi:v15000]() const + 12 (function.h:512) [inlined]
22 Flutter 0x10480cd00 std::_fl::function<void ()>::operator()() const + 12 (function.h:1187) [inlined]
23 Flutter 0x10480cd00 fml::MessageLoopImpl::FlushTasks(fml::FlushType) + 156 (message_loop_impl.cc:126)
24 Flutter 0x104813cf8 fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) + 32 (message_loop_darwin.mm:85)
25 CoreFoundation 0x18040f548 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 28
26 CoreFoundation 0x18040f204 __CFRunLoopDoTimer + 948
27 CoreFoundation 0x18040e8a0 __CFRunLoopDoTimers + 284
28 CoreFoundation 0x180408fec __CFRunLoopRun + 1796
29 CoreFoundation 0x1804084d4 CFRunLoopRunSpecific + 572
30 Flutter 0x104813e28 fml::MessageLoopDarwin::Run() + 88 (message_loop_darwin.mm:51)
31 Flutter 0x10480cc20 fml::MessageLoopImpl::DoRun() + 40 (message_loop_impl.cc:94)
32 Flutter 0x104812a2c fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0::operator()() const + 164 (thread.cc:154) [inlined]
33 Flutter 0x104812a2c decltype(std::declval<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>()()) std::_fl::__invoke[abi:v15000]<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:403) [inlined]
34 Flutter 0x104812a2c void std::_fl::__invoke_void_return_wrapper<void, true>::__call<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:488) [inlined]
35 Flutter 0x104812a2c std::_fl::__function::__alloc_func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()[abi:v15000]() + 164 (function.h:185) [inlined]
36 Flutter 0x104812a2c std::_fl::__function::__func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()() + 184 (function.h:359)
37 Flutter 0x104812738 std::_fl::__function::__value_func<void ()>::operator()[abi:v15000]() const + 20 (function.h:512) [inlined]
38 Flutter 0x104812738 std::_fl::function<void ()>::operator()() const + 20 (function.h:1187) [inlined]
39 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::operator()(void*) const + 24 (thread.cc:76) [inlined]
40 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::__invoke(void*) + 36 (thread.cc:73)
41 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
42 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 6:: io.flutter.1.profiler
0 libsystem_kernel.dylib 0x100729170 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x10073a660 mach_msg2_internal + 76
2 libsystem_kernel.dylib 0x100731318 mach_msg_overwrite + 532
3 libsystem_kernel.dylib 0x1007294e8 mach_msg + 20
4 CoreFoundation 0x18040e684 __CFRunLoopServiceMachPort + 156
5 CoreFoundation 0x180408d64 __CFRunLoopRun + 1148
6 CoreFoundation 0x1804084d4 CFRunLoopRunSpecific + 572
7 Flutter 0x104813e28 fml::MessageLoopDarwin::Run() + 88 (message_loop_darwin.mm:51)
8 Flutter 0x10480cc20 fml::MessageLoopImpl::DoRun() + 40 (message_loop_impl.cc:94)
9 Flutter 0x104812a2c fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0::operator()() const + 164 (thread.cc:154) [inlined]
10 Flutter 0x104812a2c decltype(std::declval<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>()()) std::_fl::__invoke[abi:v15000]<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:403) [inlined]
11 Flutter 0x104812a2c void std::_fl::__invoke_void_return_wrapper<void, true>::__call<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&>(fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0&) + 164 (invoke.h:488) [inlined]
12 Flutter 0x104812a2c std::_fl::__function::__alloc_func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()[abi:v15000]() + 164 (function.h:185) [inlined]
13 Flutter 0x104812a2c std::_fl::__function::__func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()() + 184 (function.h:359)
14 Flutter 0x104812738 std::_fl::__function::__value_func<void ()>::operator()[abi:v15000]() const + 20 (function.h:512) [inlined]
15 Flutter 0x104812738 std::_fl::function<void ()>::operator()() const + 20 (function.h:1187) [inlined]
16 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::operator()(void*) const + 24 (thread.cc:76) [inlined]
17 Flutter 0x104812738 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::__invoke(void*) + 36 (thread.cc:73)
18 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
19 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 7:: io.worker.1
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9cc _pthread_cond_wait + 1216
2 Flutter 0x1047eca6c std::_fl::__libcpp_condvar_wait[abi:v15000](_opaque_pthread_cond_t*, _opaque_pthread_mutex_t*) + 4 (__threading_support:335) [inlined]
3 Flutter 0x1047eca6c std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 24 (condition_variable.cpp:46)
4 Flutter 0x104809050 void std::_fl::condition_variable::wait<fml::ConcurrentMessageLoop::WorkerMain()::$_0>(std::_fl::unique_lock<std::_fl::mutex>&, fml::ConcurrentMessageLoop::WorkerMain()::$_0) + 40 (__mutex_base:398) [inlined]
5 Flutter 0x104809050 fml::ConcurrentMessageLoop::WorkerMain() + 128 (concurrent_message_loop.cc:75)
6 Flutter 0x104809928 fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0::operator()() const + 136 (concurrent_message_loop.cc:20) [inlined]
7 Flutter 0x104809928 decltype(std::declval<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>()()) std::_fl::__invoke[abi:v15000]<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>(fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0&&) + 136 (invoke.h:403) [inlined]
8 Flutter 0x104809928 _ZNSt3_fl16__thread_executeB6v15000INS_10unique_ptrINS_15__thread_structENS_14default_deleteIS2_EEEEZN3fml21ConcurrentMessageLoopC1EmE3$_0JETpTnmJEEEvRNS_5tupleIJT_T0_DpT1_EEENS_15__tuple_indicesIJXspT2_EEEE + 136 (thread:284) [inlined]
9 Flutter 0x104809928 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct>>, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>>(void*) + 184 (thread:295)
10 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
11 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 8:: io.worker.2
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9cc _pthread_cond_wait + 1216
2 Flutter 0x1047eca6c std::_fl::__libcpp_condvar_wait[abi:v15000](_opaque_pthread_cond_t*, _opaque_pthread_mutex_t*) + 4 (__threading_support:335) [inlined]
3 Flutter 0x1047eca6c std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 24 (condition_variable.cpp:46)
4 Flutter 0x104809050 void std::_fl::condition_variable::wait<fml::ConcurrentMessageLoop::WorkerMain()::$_0>(std::_fl::unique_lock<std::_fl::mutex>&, fml::ConcurrentMessageLoop::WorkerMain()::$_0) + 40 (__mutex_base:398) [inlined]
5 Flutter 0x104809050 fml::ConcurrentMessageLoop::WorkerMain() + 128 (concurrent_message_loop.cc:75)
6 Flutter 0x104809928 fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0::operator()() const + 136 (concurrent_message_loop.cc:20) [inlined]
7 Flutter 0x104809928 decltype(std::declval<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>()()) std::_fl::__invoke[abi:v15000]<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>(fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0&&) + 136 (invoke.h:403) [inlined]
8 Flutter 0x104809928 _ZNSt3_fl16__thread_executeB6v15000INS_10unique_ptrINS_15__thread_structENS_14default_deleteIS2_EEEEZN3fml21ConcurrentMessageLoopC1EmE3$_0JETpTnmJEEEvRNS_5tupleIJT_T0_DpT1_EEENS_15__tuple_indicesIJXspT2_EEEE + 136 (thread:284) [inlined]
9 Flutter 0x104809928 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct>>, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>>(void*) + 184 (thread:295)
10 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
11 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 9:: io.worker.3
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9cc _pthread_cond_wait + 1216
2 Flutter 0x1047eca6c std::_fl::__libcpp_condvar_wait[abi:v15000](_opaque_pthread_cond_t*, _opaque_pthread_mutex_t*) + 4 (__threading_support:335) [inlined]
3 Flutter 0x1047eca6c std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 24 (condition_variable.cpp:46)
4 Flutter 0x104809050 void std::_fl::condition_variable::wait<fml::ConcurrentMessageLoop::WorkerMain()::$_0>(std::_fl::unique_lock<std::_fl::mutex>&, fml::ConcurrentMessageLoop::WorkerMain()::$_0) + 40 (__mutex_base:398) [inlined]
5 Flutter 0x104809050 fml::ConcurrentMessageLoop::WorkerMain() + 128 (concurrent_message_loop.cc:75)
6 Flutter 0x104809928 fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0::operator()() const + 136 (concurrent_message_loop.cc:20) [inlined]
7 Flutter 0x104809928 decltype(std::declval<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>()()) std::_fl::__invoke[abi:v15000]<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>(fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0&&) + 136 (invoke.h:403) [inlined]
8 Flutter 0x104809928 _ZNSt3_fl16__thread_executeB6v15000INS_10unique_ptrINS_15__thread_structENS_14default_deleteIS2_EEEEZN3fml21ConcurrentMessageLoopC1EmE3$_0JETpTnmJEEEvRNS_5tupleIJT_T0_DpT1_EEENS_15__tuple_indicesIJXspT2_EEEE + 136 (thread:284) [inlined]
9 Flutter 0x104809928 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct>>, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>>(void*) + 184 (thread:295)
10 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
11 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 10:: io.worker.4
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9cc _pthread_cond_wait + 1216
2 Flutter 0x1047eca6c std::_fl::__libcpp_condvar_wait[abi:v15000](_opaque_pthread_cond_t*, _opaque_pthread_mutex_t*) + 4 (__threading_support:335) [inlined]
3 Flutter 0x1047eca6c std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 24 (condition_variable.cpp:46)
4 Flutter 0x104809050 void std::_fl::condition_variable::wait<fml::ConcurrentMessageLoop::WorkerMain()::$_0>(std::_fl::unique_lock<std::_fl::mutex>&, fml::ConcurrentMessageLoop::WorkerMain()::$_0) + 40 (__mutex_base:398) [inlined]
5 Flutter 0x104809050 fml::ConcurrentMessageLoop::WorkerMain() + 128 (concurrent_message_loop.cc:75)
6 Flutter 0x104809928 fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0::operator()() const + 136 (concurrent_message_loop.cc:20) [inlined]
7 Flutter 0x104809928 decltype(std::declval<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>()()) std::_fl::__invoke[abi:v15000]<fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>(fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0&&) + 136 (invoke.h:403) [inlined]
8 Flutter 0x104809928 _ZNSt3_fl16__thread_executeB6v15000INS_10unique_ptrINS_15__thread_structENS_14default_deleteIS2_EEEEZN3fml21ConcurrentMessageLoopC1EmE3$_0JETpTnmJEEEvRNS_5tupleIJT_T0_DpT1_EEENS_15__tuple_indicesIJXspT2_EEEE + 136 (thread:284) [inlined]
9 Flutter 0x104809928 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct>>, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0>>(void*) + 184 (thread:295)
10 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
11 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 11:: dart:io EventHandler
0 libsystem_kernel.dylib 0x10072ecec kevent + 8
1 Flutter 0x104d5cd94 dart::bin::EventHandlerImplementation::EventHandlerEntry(unsigned long) + 300 (eventhandler_macos.cc:459)
2 Flutter 0x104d78f3c dart::bin::ThreadStart(void*) + 88 (thread_macos.cc:91)
3 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
4 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 12:: Dart Profiler ThreadInterrupter
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9cc _pthread_cond_wait + 1216
2 Flutter 0x104ee6ee4 dart::Monitor::WaitMicros(long long) + 152 (os_thread_macos.cc:435)
3 Flutter 0x104f53968 dart::MonitorLocker::WaitMicros(long long) + 8 (lockers.h:181) [inlined]
4 Flutter 0x104f53968 dart::ThreadInterrupter::ThreadMain(unsigned long) + 324 (thread_interrupter.cc:170)
5 Flutter 0x104ee6650 dart::ThreadStart(void*) + 204 (os_thread_macos.cc:136)
6 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
7 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 13:: Dart Profiler SampleBlockProcessor
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9f8 _pthread_cond_wait + 1260
2 Flutter 0x104ee6ecc dart::Monitor::WaitMicros(long long) + 128 (os_thread_macos.cc:449)
3 Flutter 0x104eeb53c dart::MonitorLocker::WaitMicros(long long) + 16 (lockers.h:181) [inlined]
4 Flutter 0x104eeb53c dart::SampleBlockProcessor::ThreadMain(unsigned long) + 284 (profiler.cc:1875)
5 Flutter 0x104ee6650 dart::ThreadStart(void*) + 204 (os_thread_macos.cc:136)
6 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
7 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 14:: DartWorker
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9f8 _pthread_cond_wait + 1260
2 Flutter 0x104ee6ecc dart::Monitor::WaitMicros(long long) + 128 (os_thread_macos.cc:449)
3 Flutter 0x104e31e90 dart::MonitorLocker::WaitMicros(long long) + 8 (lockers.h:181) [inlined]
4 Flutter 0x104e31e90 dart::MutatorThreadPool::OnEnterIdleLocked(dart::MonitorLocker*) + 140 (isolate.cc:299)
5 Flutter 0x104f543ec dart::ThreadPool::WorkerLoop(dart::ThreadPool::Worker*) + 136 (thread_pool.cc:167)
6 Flutter 0x104f54700 dart::ThreadPool::Worker::Main(unsigned long) + 116 (thread_pool.cc:330)
7 Flutter 0x104ee6650 dart::ThreadStart(void*) + 204 (os_thread_macos.cc:136)
8 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
9 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 15:: DartWorker
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9f8 _pthread_cond_wait + 1260
2 Flutter 0x104ee6ecc dart::Monitor::WaitMicros(long long) + 128 (os_thread_macos.cc:449)
3 Flutter 0x104f5454c dart::MonitorLocker::WaitMicros(long long) + 8 (lockers.h:181) [inlined]
4 Flutter 0x104f5454c dart::ThreadPool::WorkerLoop(dart::ThreadPool::Worker*) + 488 (thread_pool.cc:183)
5 Flutter 0x104f54700 dart::ThreadPool::Worker::Main(unsigned long) + 116 (thread_pool.cc:330)
6 Flutter 0x104ee6650 dart::ThreadStart(void*) + 204 (os_thread_macos.cc:136)
7 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
8 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 16:
0 libsystem_pthread.dylib 0x1007b65cc start_wqthread + 0
Thread 17:: DartWorker
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9f8 _pthread_cond_wait + 1260
2 Flutter 0x104ee6ecc dart::Monitor::WaitMicros(long long) + 128 (os_thread_macos.cc:449)
3 Flutter 0x104f5454c dart::MonitorLocker::WaitMicros(long long) + 8 (lockers.h:181) [inlined]
4 Flutter 0x104f5454c dart::ThreadPool::WorkerLoop(dart::ThreadPool::Worker*) + 488 (thread_pool.cc:183)
5 Flutter 0x104f54700 dart::ThreadPool::Worker::Main(unsigned long) + 116 (thread_pool.cc:330)
6 Flutter 0x104ee6650 dart::ThreadStart(void*) + 204 (os_thread_macos.cc:136)
7 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
8 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 18:: DartWorker
0 libsystem_kernel.dylib 0x10072c670 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1007bb9f8 _pthread_cond_wait + 1260
2 Flutter 0x104ee6ecc dart::Monitor::WaitMicros(long long) + 128 (os_thread_macos.cc:449)
3 Flutter 0x104f5454c dart::MonitorLocker::WaitMicros(long long) + 8 (lockers.h:181) [inlined]
4 Flutter 0x104f5454c dart::ThreadPool::WorkerLoop(dart::ThreadPool::Worker*) + 488 (thread_pool.cc:183)
5 Flutter 0x104f54700 dart::ThreadPool::Worker::Main(unsigned long) + 116 (thread_pool.cc:330)
6 Flutter 0x104ee6650 dart::ThreadStart(void*) + 204 (os_thread_macos.cc:136)
7 libsystem_pthread.dylib 0x1007bb414 _pthread_start + 104
8 libsystem_pthread.dylib 0x1007b65e0 thread_start + 8
Thread 5 crashed with ARM Thread State (64-bit):
x0: 0x0000000170175438 x1: 0x0000000170175a48 x2: 0x00000001280173d0 x3: 0x000000010ce4c000
x4: 0x000000000000030c x5: 0x0000000000003370 x6: 0x0000000000000004 x7: 0x0000000000000004
x8: 0x0000000000000c30 x9: 0x00000000437f0000 x10: 0x000000010510901c x11: 0x0000000000000037
x12: 0x00000000434d594b x13: 0x0000000000007da9 x14: 0x0000000000007fff x15: 0x000000010ff80000
x16: 0x0000000000000080 x17: 0x0000000000000000 x18: 0x0000000000000000 x19: 0x0000000000000004
x20: 0x000000010ce4c000 x21: 0x0000000170175a48 x22: 0x0000000000000004 x23: 0x00000001280173d0
x24: 0x000000000000030c x25: 0x0000000000003370 x26: 0x0000000106954cf8 x27: 0x0000000000000002
x28: 0x0000000000000005 fp: 0x0000000170175580 lr: 0x000000010510536c
sp: 0x00000001701753e0 pc: 0x00000001051055f0 cpsr: 0x20001000
far: 0x0000000128018000 esr: 0x92000007 (Data Abort) byte read Translation fault
Binary Images:
0x100a8c000 - 0x100b17fff dyld (*) <f635824e-318b-3f0c-842c-c369737f2b68> /usr/lib/dyld
0x1009e4000 - 0x1009effff libobjc-trampolines.dylib (*) <c6ef2cc0-8ca9-3a69-a525-91bec719ddfc> /Volumes/VOLUME/*/libobjc-trampolines.dylib
0x104798000 - 0x10695ffff io.flutter.flutter (1.0) <4c4c44bc-5555-3144-a1b6-5cef7c5f815f> /Users/USER/Library/Developer/CoreSimulator/Devices/CE2827ED-E7E0-4CD3-9DB4-D21344BD2A21/data/Containers/Bundle/Application/7B00E338-99C8-4339-AECB-546F07522499/Runner.app/Frameworks/Flutter.framework/Flutter
0x100794000 - 0x10079bfff libsystem_platform.dylib (*) <3394e9ca-eb51-322d-a5eb-4d895d3b1c14> /usr/lib/system/libsystem_platform.dylib
0x100728000 - 0x100763fff libsystem_kernel.dylib (*) <0f9f96fe-6b1c-3253-a33a-c9e4a0c2a386> /usr/lib/system/libsystem_kernel.dylib
0x1007b4000 - 0x1007c3fff libsystem_pthread.dylib (*) <3df3256f-466e-37bc-b995-a5a9956e1415> /usr/lib/system/libsystem_pthread.dylib
0x10067c000 - 0x100683fff com.example.segFault (0.1.0) <53de7f45-2cd3-3cd9-889c-9c170f9f3c71> /Users/USER/Library/Developer/CoreSimulator/Devices/CE2827ED-E7E0-4CD3-9DB4-D21344BD2A21/data/Containers/Bundle/Application/7B00E338-99C8-4339-AECB-546F07522499/Runner.app/Runner
0x100804000 - 0x10084ffff dyld_sim (*) <f1d509a4-edf1-3668-b217-c6a2bd1fbef4> /Volumes/VOLUME/*/dyld_sim
0x180381000 - 0x180734fff com.apple.CoreFoundation (6.9) <6c40f9e5-bffa-3413-9e1c-a4f724ad56ba> /Volumes/VOLUME/*/CoreFoundation.framework/CoreFoundation
0x18ef27000 - 0x18ef2ffff com.apple.GraphicsServices (1.0) <b8bade4e-4da1-3e89-aadc-79d9356e07f1> /Volumes/VOLUME/*/GraphicsServices.framework/GraphicsServices
0x1847df000 - 0x186186fff com.apple.UIKitCore (1.0) <8d3f22bc-9dec-3601-b822-2b88624be742> /Volumes/VOLUME/*/UIKitCore.framework/UIKitCore
0x0 - 0xffffffffffffffff ??? (*) <00000000-0000-0000-0000-000000000000> ???
0x1807b4000 - 0x181264fff com.apple.Foundation (6.9) <3a54db51-8b3a-308d-9f9e-51474c4a8520> /Volumes/VOLUME/*/Foundation.framework/Foundation
0x1800f2000 - 0x18016dfff libsystem_c.dylib (*) <ce9466d4-2e24-3c03-a488-d86a828ecffe> /Volumes/VOLUME/*/libsystem_c.dylib
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on macOS 14.6.1 23G93 darwin-arm64, locale en)
• Flutter version 3.24.2 on channel stable at /Users/USER/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (4 days ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/USER/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.93.0-insider)
• VS Code at /Applications/Visual Studio Code - Insiders.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (5 available)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,platform-android,platform-ios,engine,a: images,P2,c: fatal crash,team-engine,triaged-engine | low | Critical |
2,512,248,934 | storybook | [Bug]: WakeLock not working on firefox | ### Describe the bug
The WakeLock is not available on firefox
### Reproduction link
https://stackblitz.com/edit/github-bdxmxz?file=src%2Fstories%2FWake.tsx
### Reproduction steps
1. Go to above link on FIREFOX
2. Run the storybook
3. Observe text
### System
```bash
System:
OS: macOS 14.6.1
CPU: (12) arm64 Apple M3 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.15.1 - nodejs-20.15.1/bin/node
npm: 10.7.0 - nodejs-20.15.1/bin/npm <----- active
Browsers:
Chrome: 128.0.6613.120
Safari: 17.6
npmPackages:
@storybook/addon-essentials: ^8.2.6 => 8.2.6
@storybook/addon-interactions: ^8.2.6 => 8.2.6
@storybook/addon-links: ^8.2.6 => 8.2.6
@storybook/addon-onboarding: ^8.2.6 => 8.2.6
@storybook/addon-storysource: ^8.2.9 => 8.2.9
@storybook/blocks: ^8.2.6 => 8.2.6
@storybook/react: ^8.2.6 => 8.2.6
@storybook/react-vite: ^8.2.6 => 8.2.6
@storybook/test: ^8.2.6 => 8.2.6
@storybook/types: ^8.2.6 => 8.2.6
eslint-plugin-storybook: ^0.8.0 => 0.8.0
storybook: ^8.2.6 => 8.2.9
storybook-react-context: ^0.6.0 => 0.6.0
This is missing **FIREFOX**
Firefox: 129
```
### Additional context
The problem is the iframe of the preview.js
As the Firefox documentation notes [[0]], an iframe should be allowed to use WakeLock by adding:
```
<iframe src="https://b.example.com" allow="screen-wake-lock"></iframe>
```
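Until the preview iframe grants that permission, a story can feature-detect the API instead of failing outright. A minimal sketch in plain JS (the `canWakeLock` helper is hypothetical, not part of Storybook):

```javascript
// Returns true when the Screen Wake Lock API is exposed on the given
// navigator-like object. It is absent in Firefox, and hidden from any
// iframe that was not given allow="screen-wake-lock".
function canWakeLock(nav) {
  return Boolean(nav && 'wakeLock' in nav);
}

// Usage inside a story:
//   if (canWakeLock(navigator)) {
//     const lock = await navigator.wakeLock.request('screen');
//   } else {
//     // render a fallback message instead of throwing
//   }
```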
[0]: https://developer.mozilla.org/en-US/docs/Web/API/Screen_Wake_Lock_API#security_considerations | bug,needs triage | low | Critical |
2,512,265,473 | PowerToys | hotkeys for workspace not working | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
The hotkey to launch a workspace is not working. I tried both the default one and an edited one; neither works. Launching from quick access works fine, though.
### ✔️ Expected Behavior
hotkey trigger launches workspaces
### ❌ Actual Behavior
nothing happens
[PowerToysReport_2024-09-08-10-27-04.zip](https://github.com/user-attachments/files/16921499/PowerToysReport_2024-09-08-10-27-04.zip)
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,512,274,422 | tauri | [bug] ios xcode build Archive: Command PhaseScriptExecution failed with a nonzero exit code | ### Describe the bug
After updating the Tauri version recently, the build on Xcode fails during PhaseScriptExecution, and every `yarn tauri ios build` overwrites the "Signing & Capabilities" settings I set manually. That may be a separate issue, but both started after updating the Tauri version. The last successful build was rc.1, and my Xcode version is 15.4. I suspect `tauri.ios.conf.json` is missing certificate configuration matching Xcode.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.5.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.77.0 (aedd173a2 2024-03-17)
✔ cargo: 1.77.0 (3fe68eabf 2024-02-29)
✔ rustup: 1.27.0 (bbb9276d2 2024-03-08)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 18.20.4
- pnpm: 8.10.5
- yarn: 1.22.17
- npm: 8.19.4
[-] Packages
- tauri 🦀: 2.0.0-rc.8
- tauri-build 🦀: 2.0.0-rc.7
- wry 🦀: 0.41.0
- tao 🦀: 0.29.1
- @tauri-apps/api : 2.0.0-rc.4
- @tauri-apps/cli : 2.0.0-rc.8
```
### Stack trace
```text
yarn run v1.22.17
$ tauri ios xcode-script -v --platform iOS --sdk-root /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS17.5.sdk --framework-search-paths '/Users/lindongchen/Library/Developer/Xcode/DerivedData/ztm-ggjzuydginvgnvapqbjuhckazfxh/Build/Products/debug-iphoneos "."' --header-search-paths '/Users/lindongchen/Library/Developer/Xcode/DerivedData/ztm-ggjzuydginvgnvapqbjuhckazfxh/Build/Products/debug-iphoneos/include ' --gcc-preprocessor-definitions ' DEBUG=1' --configuration debug arm64
Debug [jsonrpsee-client] Connecting to target: Target { host: "127.0.0.1", host_header: "127.0.0.1:59087", _mode: Plain, path_and_query: "/", basic_auth: None }
Debug [jsonrpsee-client] Failed to connect to sockaddr: 127.0.0.1:59087
Debug [jsonrpsee-client] Connecting to target: Target { host: "127.0.0.1", host_header: "127.0.0.1:59087", _mode: Plain, path_and_query: "/", basic_auth: None }
Debug [jsonrpsee-client] Connecting to target: Target { host: "127.0.0.1", host_header: "127.0.0.1:59087", _mode: Plain, path_and_query: "/", basic_auth: None }
Debug [jsonrpsee-client] Connecting to target: Target { host: "127.0.0.1", host_header: "127.0.0.1:59087", _mode: Plain, path_and_query: "/", basic_auth: None }
Debug [jsonrpsee-client] Connecting to target: Target { host: "127.0.0.1", host_header: "127.0.0.1:59087", _mode: Plain, path_and_query: "/", basic_auth: None }
thread '<unnamed>' panicked at crates/tauri-cli/src/mobile/mod.rs:243:6:
failed to read CLI options: Error when opening the TCP socket: Connection refused (os error 61)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error Command failed with signal "SIGABRT".
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
**Command PhaseScriptExecution failed with a nonzero exit code**
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,512,304,531 | neovim | Configuration of built-in snippets | ### Problem
Currently there is no way to configure built-in `vim.snippet` behavior. This recently became a problem after the introduction of non-configurable "session only" mappings for snippet navigation (see #30198).
### Expected behavior
Some way for users to configure how certain aspects of snippets expansion is done. As per [this comment](https://github.com/neovim/neovim/issues/30198#issuecomment-2336620609), it might be a good idea to define the scope before deciding on this.
Here are some suggestions:
- Implement `vim.snippet.config()`, similar in spirit to `vim.diagnostic.config()`. Currently this seems like the best (future-proof, already familiar) design. See also [this comment](https://github.com/neovim/neovim/issues/30198#issuecomment-2336624245) for some counter-points.
- Allow `opts` in `vim.snippet.expand()`. This allows configuration on a call level which might be good for plugins that leverage it. However, it also might be too granular to the point of not solving the issue for built-in functionality (see [this comment](https://github.com/neovim/neovim/issues/30198#issuecomment-2336583394)).
- Implement `SnippetEnter` and `SnippetLeave` events (see #26449) for users to be able to perform actions only inside snippet session. Not *exactly* the snippet configuration, but at least *some* way for users to act on snippet expansion.
cc @MariaSolOs, @mfussenegger, @clason | enhancement,snippet | low | Minor |
2,512,328,491 | tensorflow | Gradients can't be computed for keras embeddings | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.15.1
### Custom code
Yes
### OS platform and distribution
Windows 11, Ubuntu 22.04LTS
### Mobile device
_No response_
### Python version
3.11.6
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
We have had a problem in shap with embeddings for quite a while now (see [here](https://github.com/shap/shap/issues/3440)). Since we manipulate the graph to adjust the gradient calculation in order to produce shap values, we need the layers to be backpropagatable. This does not seem to be the case for `tensorflow.keras.layers.Embedding`, and we do not know a way around it.
(In a previous version there was the possibility to [manipulate](https://github.com/shap/shap/blob/master/shap/explainers/_deep/deep_tf.py#L412-L416) the [`_IsBackpropagatable` function](https://github.com/tensorflow/tensorflow/blob/v1.10.0/tensorflow/python/ops/gradients_impl.py#L293) but this is no longer possible)
In the example below one can see that the gradients just become `None` if the model contains an embedding layer.
Is there a way around this, so that we can calculate gradients for embeddings again?
### Standalone code to reproduce the issue
```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Load the IMDb dataset
max_features = 10000 # Only consider the top 10,000 words
maxlen = 100 # Only consider the first 100 words of each movie review
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=max_features)
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
# Build the model
model = models.Sequential()
embedding_layer = layers.Embedding(input_dim=max_features, output_dim=128, input_length=maxlen)
model.add(embedding_layer)
flat_layer = layers.Flatten()
model.add(flat_layer)
dense_layer = layers.Dense(1, activation='sigmoid')
model.add(dense_layer)
# Build the same model except for the embedding layer
new_model = models.Sequential()
new_model.add(flat_layer)
new_model.add(layers.Dense(1, activation="sigmoid"))
# Forward pass and gradient extraction
@tf.function
def get_gradients_model(inputs):
inputs = tf.cast(inputs, tf.float32) # Convert inputs to float32
with tf.GradientTape() as tape:
tape.watch(inputs) # Watch the input tensor to compute gradients w.r.t. it
predictions = model(inputs)
gradients = tape.gradient(predictions, inputs)
return predictions, gradients
@tf.function
def get_gradients_new_model(inputs):
inputs = tf.cast(inputs, tf.float32) # Convert inputs to float32
with tf.GradientTape() as tape:
tape.watch(inputs) # Watch the input tensor to compute gradients w.r.t. it
predictions = new_model(inputs)
gradients = tape.gradient(predictions, inputs)
return predictions, gradients
# Example usage
sample_input = X_train[:1] # Select a sample from the training set
sample_label = y_train[:1] # Corresponding label
predictions, gradients = get_gradients_model(sample_input)
predictions2, gradients2 = get_gradients_new_model(sample_input)
print("Gradients for model:", gradients)
print("Gradients for new_model:", gradients2)
```
### Relevant log output
Gradients for model: None
Gradients for new_model: tf.Tensor(
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0.]], shape=(1, 100), dtype=float32) | stat:awaiting tensorflower,type:bug,comp:ops,TF 2.15 | low | Critical |
2,512,333,257 | PowerToys | GPT-4o mini for Advanced Paste | ### Description of the new feature / enhancement
Upgrade Advanced Paste's "Paste with AI" from GPT-3.5 Turbo to GPT-4o mini.
### Scenario when this would be used?
Advanced Paste's "Paste with AI" feature.
### Supporting information
GPT-4o Mini is significantly more advanced/knowledgeable and cheaper than 3.5 Turbo. | Needs-Triage | low | Minor |
2,512,347,875 | PowerToys | Move newly created windows to last known zone Chrome and PWA | ### Description of the new feature / enhancement
Treat Chrome main window and installed PWA as separate apps.
### Scenario when this would be used?
When using for example Chrome main window in one zone, and a PWA in other zone.
### Supporting information
The option to "move newly created windows to last known zone" is great, because it disables the default Windows behavior of opening windows in cascade.
However, this option treats the main Chrome and the installed PWAs as the same program. I know that internally they are, but it would be great if FancyZones could differentiate between them. I usually use Chrome in one zone, for example, and the Gmail PWA in another, so I always have to move it manually. | Product-FancyZones | low | Minor |
2,512,349,102 | tauri | [bug] `cargo tauri dev` fails if backend has dependency that can not compile to `wasm32-unknown-unknown` | ### Describe the bug
`cargo tauri dev` fails if the backend (i.e. `src-tauri`) crate has a dependency that cannot compile to `wasm32-unknown-unknown`. This results in errors such as
1. [`vswhom`: `C1056: cannot update the time date stamp field`](https://github.com/nabijaczleweli/vswhom-sys.rs/issues/2)
2. `clang`: `Failed to find tool. Is clang++ installed?`
This only seems to occur when the frontend is Rust based, and so needs to compile to `wasm32-unknown-unknown`.
3. `cl.exe`: `ToolExecError: Command "C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.41.34120\\bin\\HostX64\\x64\\cl.exe" ... with args cl.exe did not execute successfully`
From what I have gathered, it occurs when the `beforeDevCommand` or `beforeBuildCommand` in `tauri.conf.json` tries to build the frontend.
e.g.
```json
"build": {
"beforeDevCommand": "trunk serve",
"beforeBuildCommand": "trunk build"
}
```
[Removing these commands](https://github.com/nabijaczleweli/vswhom-sys.rs/issues/2#issuecomment-2335172634) allows the crate to build, but obviously won't rebuild the frontend. However, adding them back in after does not result in the error, although this can be flakey.
I believe this is occurring because when the frontend builds, it is also building the backend dependencies. However, the frontend is trying to compile to a `wasm32-unknown-unknown` target, which the backend dependencies are not compatible with.
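If that is the cause, a possible workaround (a sketch only, assuming the offending crate is `zmq`) is to make backend-only dependencies target-specific in `src-tauri/Cargo.toml`, so they are never resolved for the `wasm32-unknown-unknown` build:

```toml
# Only pull in zmq when NOT compiling for wasm; the frontend's
# wasm32-unknown-unknown build then skips it entirely.
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
zmq = "0.10"
```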
### Reproduction
1. Create a new Tauri app with a **Rust based frontend**. I've been using `cargo crate-tauri-app --rc`.
2. Add a non-`wasm32-unknown-unknown` compatible dependency to the backend `src-tauri` crate. I've been using [`zmq`](https://github.com/erickt/rust-zmq).
3. Build the app with `cargo tauri dev`, one of the errors above may occur, although this can be flakey.
To resolve:
4. Remove `"build": { "beforeDevCommand": "trunk serve" }` from `tauri.conf.json`.
5. Run `cargo tauri dev` again. This should build successfully, however will serve an old version of the app, if it exists.
6. Add `"build": { "beforeDevCommand": "trunk serve" }` back into `tauri.conf.json`.
7. Run `cargo tauri dev` again, which may now work successfully, however this can be flakey.
### Expected behavior
`cargo tauri dev` should succeed even if the backend (`src-tauri`) crate has non-`wasm32-unknown-unknown` compatible dependencies.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 128.0.2739.67
✔ MSVC: Visual Studio Build Tools 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 18.17.1
- pnpm: 8.7.0
- npm: 10.8.3
[-] Packages
- tauri 🦀: 2.0.0-rc.10
- tauri-build 🦀: 2.0.0-rc.9
- wry 🦀: 0.43.1
- tao 🦀: 0.30.0
- tauri-cli 🦀: 2.0.0-rc.8
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
```
### Stack trace
_No response_
### Additional context
See also
1. [This Discord discussion](https://discord.com/channels/616186924390023171/1269170486823096374)
2. [`vswhom-sys` issue](https://github.com/nabijaczleweli/vswhom-sys.rs/issues/2) | type: bug,status: needs triage | low | Critical |
2,512,349,474 | PowerToys | keyboard manager steals completely CTRL after few hours of running | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
keys CTRL+ F8-F12 assigned as multimedia shortcuts (play, next, prev, volume). Then work few hours with PC.
### ✔️ Expected Behavior
not affected keys
### ❌ Actual Behavior
CTRL is affected: after a few hours no application responds to (left) CTRL. Only the PowerToys-created shortcuts still work.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Major |
2,512,374,944 | yt-dlp | [youtube:tab] HTTP Error 404: Not Found for some YouTube channels due to YouTube weirdness | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
France
### Provide a description that is worded well enough to be understood
Some channels on YouTube redirect to old & unsupported links (i.e. https://youtube.com/username instead of https://youtube.com/@username). This bug is not specific to yt-dlp; in fact, opening a bugged YouTube channel directly from the link won't work and shows the 404 Not Found error page, because you get redirected to the wrong page. However, by searching for the channel name on YouTube and clicking the person's name, it's the exact same URL but you are able to load the page without getting redirected. Seems really weird, IDK why it happens. I am not using any credentials inside yt-dlp, and the bug happens both in Brave browser while logged in and in Brave browser while not logged in (inside a private window).
Example URL (won't work if you click on it, but search "ZimnyteGD" on YouTube and click on the profile to get the actual user page without the weird bug): https://www.youtube.com/@ZimnyteGD
The bug also happens when giving a specific tab, i.e. https://www.youtube.com/@ZimnyteGD**/videos**
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
py3 -m yt_dlp -f bestvideo+bestaudio https://youtube.com/@ZimnyteGD/videos --embed-subs --embed-thumbnail --embed-metadata --embed-chapters --embed-info-json -vU
[debug] Command-line config: ['-f', 'bestvideo+bestaudio', 'https://youtube.com/@ZimnyteGD/videos', '--embed-subs', '--embed-thumbnail', '--embed-metadata', '--embed-chapters', '--embed-info-json', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip)
[debug] Python 3.11.8 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.0.13 30 Jan 2024)
[debug] exe versions: ffmpeg N-116911-gc0666d8bed-20240907 (setts), ffprobe N-116911-gc0666d8bed-20240907
[debug] Optional libraries: Cryptodome-3.19.1, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.1, urllib3-1.26.18, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://youtube.com/@ZimnyteGD/videos
[youtube:tab] @ZimnyteGD/videos: Downloading webpage
WARNING: [youtube:tab] HTTP Error 404: Not Found. Retrying (1/3)...
[youtube:tab] @ZimnyteGD/videos: Downloading webpage
WARNING: [youtube:tab] HTTP Error 404: Not Found. Retrying (2/3)...
[youtube:tab] @ZimnyteGD/videos: Downloading webpage
WARNING: [youtube:tab] HTTP Error 404: Not Found. Retrying (3/3)...
[youtube:tab] @ZimnyteGD/videos: Downloading webpage
WARNING: [youtube:tab] Unable to download webpage: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>). Giving up after 3 retries
[youtube:tab] @ZimnyteGD/videos: Downloading API parameters API JSON
ERROR: [youtube:tab] @ZimnyteGD: Failed to resolve url (does the playlist exist?)
File "C:\Users\hgsty\AppData\Roaming\Python\Python311\site-packages\yt_dlp\extractor\common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hgsty\AppData\Roaming\Python\Python311\site-packages\yt_dlp\extractor\youtube.py", line 4817, in wrapper
info_dict = func(self, url, smuggled_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hgsty\AppData\Roaming\Python\Python311\site-packages\yt_dlp\extractor\youtube.py", line 6751, in _real_extract
data, ytcfg = self._extract_data(url, display_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hgsty\AppData\Roaming\Python\Python311\site-packages\yt_dlp\extractor\youtube.py", line 5540, in _extract_data
data = self._extract_tab_endpoint(url, item_id, ytcfg, fatal=fatal, default_client=default_client)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hgsty\AppData\Roaming\Python\Python311\site-packages\yt_dlp\extractor\youtube.py", line 5558, in _extract_tab_endpoint
raise ExtractorError(err_note, expected=True)
```
| external issue,site-bug,site:youtube | low | Critical |
2,512,391,654 | deno | SvelteKit x Cloudflare template cannot work | Version: Deno 2.0.0-rc.1+ce1d668 (canary, release, x86_64-pc-windows-msvc)
```sh
// Select `SvelteKit demo app`.
npm create cloudflare@latest -- my-svelte-app --framework=svelte
```
We can confirm that it does not work after running `deno install`.
```sh
my-svelte-app> deno install
Warning rollup-plugin-inject@3.0.2 is deprecated: This package has been deprecated and is no longer maintained. Please use @rollup/plugin-inject.
Warning glob@7.2.3 is deprecated: Glob versions prior to v9 are no longer supported
Warning sourcemap-codec@1.4.8 is deprecated: Please use @jridgewell/sourcemap-codec instead
Warning rimraf@2.7.1 is deprecated: Rimraf versions prior to v4 are no longer supported
Warning inflight@1.0.6 is deprecated: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
Warning Packages contained npm lifecycle scripts (preinstall/install/postinstall) that were not executed.
This may cause the packages to not work correctly. To run them, use the `--allow-scripts` flag with `deno cache` or `deno install`
(e.g. `deno cache --allow-scripts=pkg1,pkg2 <entrypoint>` or `deno install --allow-scripts=pkg1,pkg2`):
npm:@sveltejs/kit@2.5.26, npm:esbuild@0.21.5, npm:svelte-preprocess@5.1.4, npm:esbuild@0.17.19, npm:workerd@1.20240821.1
my-svelte-app> deno task dev
Task dev vite dev
error when starting dev server:
TypeError: undefined is not iterable (cannot read property Symbol(Symbol.iterator))
at Function.from (<anonymous>)
at Object.<anonymous> (file:///C:/Users/xxxx/my-svelte-app/node_modules/miniflare/dist/src/index.js:6744:64)
at Object.<anonymous> (file:///C:/Users/xxxx/my-svelte-app/node_modules/miniflare/dist/src/index.js:10197:4)
at Module._compile (node:module:735:34)
at Object.Module._extensions..js (node:module:756:11)
at Module.load (node:module:655:32)
at Function.Module._load (node:module:523:13)
at Module.require (node:module:674:19)
at require (node:module:800:16)
at Object.<anonymous> (file:///C:/Users/xxxx/my-svelte-app/node_modules/wrangler/wrangler-dist/cli.js:152430:24)
```
https://developers.cloudflare.com/pages/framework-guides/deploy-a-svelte-site/
There seems to be a problem with compatibility with Wrangler.
related: #17248 #17977 | bug,node compat | low | Critical |
2,512,392,455 | kubernetes | [Proposal] plugin-granular scheduling cache maintenance mechanism | ### What would you like to be added?
#### Background
Some scheduling plugins (especially out-of-tree plugins) maintain additional information through `EventHandler`.
For non-Pod resource types, this mechanism works fine. For Pods, however, event dependencies mean that plugins can **NOT** perceive Pods that **have been assumed but have not yet triggered binding**. (In theory, these assumed Pods will exist in NodeInfo, but we cannot quickly identify them in the plugin, so we cannot make these `assumed Pods` affect the plugin's execution logic.)
A related issue: https://github.com/kubernetes-sigs/scheduler-plugins/issues/797
---
#### Proposal
A more reasonable approach is to build a plugin-granular scheduling cache maintenance mechanism and update the corresponding scheduling cache inside each specific plugin. In short, we will maintain two types of information in the top-level main scheduling workflow:
- Respond to `external` events (Resource Add/Update/Delete Events) to maintain the latest cluster status
- Respond to `internal` temporary scheduling decisions (AssumePod/ForgetPod) to take effect before perceiving pod binding events
Based on this, the plugin will be able to perceive the impact of both `bound Pods` and `assumed Pods`.
All of this will be done under the premise of providing sufficient **scalability**. In addition, our experience has shown that a reasonably split cache can also help speed up plugin calculation logic through targeted data preprocessing.
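To make the proposal concrete, here is a minimal self-contained sketch (all names are illustrative, not part of the actual scheduler API) of a per-plugin cache that responds to both external binding events and internal AssumePod/ForgetPod decisions:

```go
package main

import "fmt"

// pluginCache is a hypothetical per-plugin cache. It mirrors the two
// information sources described above:
//   - bound:   maintained from external resource Add/Update/Delete events
//   - assumed: maintained from internal AssumePod/ForgetPod decisions
type pluginCache struct {
	bound   map[string]bool
	assumed map[string]bool
}

func newPluginCache() *pluginCache {
	return &pluginCache{bound: map[string]bool{}, assumed: map[string]bool{}}
}

// AssumePod records a temporary scheduling decision before the binding
// event is observed.
func (c *pluginCache) AssumePod(name string) { c.assumed[name] = true }

// ForgetPod rolls back an assumed decision (e.g. binding failed).
func (c *pluginCache) ForgetPod(name string) { delete(c.assumed, name) }

// AddPod handles the external Pod-added event; the Pod is no longer
// merely assumed once its binding is observed.
func (c *pluginCache) AddPod(name string) {
	c.bound[name] = true
	delete(c.assumed, name)
}

// Count is the view the plugin should use: bound plus still-assumed Pods.
func (c *pluginCache) Count() int { return len(c.bound) + len(c.assumed) }

func main() {
	c := newPluginCache()
	c.AssumePod("p1") // decision made, binding not yet observed
	fmt.Println(c.Count())
	c.AddPod("p1") // binding event arrives later
	fmt.Println(c.Count())
}
```

With this split, a plugin's filter/score logic can account for both bound and still-assumed Pods without scanning NodeInfo.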
---
In fact, I have implemented similar mechanisms in other systems before. I hope to have a broader discussion with the community on this issue to ensure that we can reach a consensus and ultimately eliminate such issues.
If approved, I will draft the KEP and build this mechanism under the premise of ensuring scalability.
Thanks a lot!
### Why is this needed?
Ensure plugins can take `assumed Pods` into consideration and avoid issues like https://github.com/kubernetes-sigs/scheduler-plugins/issues/797 | sig/scheduling,kind/feature,lifecycle/rotten,needs-triage | low | Major |
2,512,395,983 | godot | GLTF Animations get imported incorrectly, Reimport causes crash! | ### Tested versions
4.3.stable
### System information
windows 10
### Issue description
GLTF files with animations don't get imported correctly.
Here is an example of an animated gltf and how it should look:

And here is how it looks in Godot. Notice the legs aren't aligned to the body (use the importer window to preview it):

Also, here is how it looks in **4.2.2rc**, so I know it had been working fine in the past:

Edit: The problem appears to be related to the importer window. The model displays incorrectly there, but fine when added to the scene. However, reimporting the model causes Godot to crash!
### Steps to reproduce
Here is a zip file of the gltf model. Simply add it to a Godot project and check their animations, or use the provided MRP below.
[characterVid02_14.zip](https://github.com/user-attachments/files/16922403/characterVid02_14.zip)
### Minimal reproduction project (MRP)
[test-4.3.zip](https://github.com/user-attachments/files/16922404/test-4.3.zip)
| needs testing,topic:import,topic:animation | low | Critical |
2,512,402,942 | pytorch | Cannot build libtorch from source with NVHPC compilers - "Could NOT find Threads" | ### 🐛 Describe the bug
Tried to build `libtorch` from source with NVHPC compilers, and ran into problems with the cmake settings:
```
CMAKE_Fortran_COMPILER=/opt/nvidia/hpc_sdk/Linux_x86_64/24.5/comm_libs/mpi/bin/mpif90 CMAKE_C_COMPILER=/opt/nvidia/hpc_sdk/Linux_x86_64/24.5/comm_libs/mpi/bin/mpicc CMAKE_CXX_COMPILER=/opt/nvidia/hpc_sdk/Linux_x86_64/24.5/comm_libs/mpi/bin/mpic++ CMAKE_PREFIX_PATH="/opt/nvidia/hpc_sdk/Linux_x86_64/24.5/cuda/12.4" USE_CUDA=1 python tools/build_libtorch.py --cmake-only --rerun-cmake
```
results in [logs-issue.txt](https://github.com/user-attachments/files/16922462/logs-issue.txt)
```
[... see full output attached ...]
CMake Error at /opt/cmake/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:233 (message):
Could NOT find Threads (missing: Threads_FOUND)
Call Stack (most recent call first):
/opt/cmake/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:603 (_FPHSA_FAILURE_MESSAGE)
/opt/cmake/share/cmake-3.30/Modules/FindThreads.cmake:226 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
cmake/Modules/FindCUDAToolkit.cmake:947 (find_package)
cmake/public/cuda.cmake:59 (find_package)
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:863 (include)
```
I can't find anything in [associated compiler docs](https://docs.nvidia.com/hpc-sdk/archive/24.5/pdf/hpc24ref.pdf) about threads.
I've tried installing `libpthread` with `sudo apt-get install libpthread-stubs0-dev` as well, and adding `/usr/lib/x86_64-linux-gnu/` to the `CMAKE_PREFIX_PATH` above, but that did not solve the problem either.
Are the settings incorrect, or is some library missing here?
The compiler versions used:
```
/opt/nvidia/hpc_sdk/Linux_x86_64/24.5/comm_libs/mpi/bin/mpic++ --version
nvc++ 24.5-1 64-bit target on x86-64 Linux -tp cascadelake
NVIDIA Compilers and Tools
Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
```
and `nvhpc-24-5` was installed using `apt-get`.
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
MIG 3g.40gb Device 0:
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Xeon Processor (Cascadelake)
Stepping: 5
CPU MHz: 2593.906
BogoMIPS: 5187.81
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 128 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.0.2
[conda] Could not collect
```
cc @malfet @seemethere @ptrblck @msaroufim | module: build,module: cuda,triaged | low | Critical |
2,512,421,708 | You-Dont-Know-JS | types & grammar: cover the weird non-precedence of `??` operator in the presence of `||` / `&&` operators | https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/Cant_use_nullish_coalescing_unparenthesized | for second edition | medium | Critical |
2,512,425,928 | rust | Confusing error message when using `min_specialization` with `dyn Trait` | I tried this code:
```rust
#![feature(min_specialization)]
pub trait Foo {
fn foo(&self);
}
impl<T> Foo for T {
default fn foo(&self) {}
}
impl Foo for Box<dyn Foo> {
fn foo(&self) {}
}
```
This failed to compile with
```
error: cannot specialize on `'static` lifetime
--> src/lib.rs:15:1
|
15 | impl Foo for Box<dyn Foo> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^
```
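For context (this note and the snippet are mine, not part of the compiler output): the `'static` in the error is the implicit default lifetime of the trait object. On stable Rust, with no specialization involved, `Box<dyn Foo>` in a signature or impl header is shorthand for `Box<dyn Foo + 'static>`, as this small sketch (with illustrative names) demonstrates:

```rust
// Minimal stable-Rust demo of the default trait-object lifetime.
// All names here are illustrative.
trait Demo {
    fn name(&self) -> &'static str;
}

impl Demo for i32 {
    fn name(&self) -> &'static str {
        "i32"
    }
}

// These two signatures are identical: the elided trait-object bound
// defaults to `'static`.
fn takes_default(b: Box<dyn Demo>) -> &'static str {
    b.name()
}

fn takes_explicit(b: Box<dyn Demo + 'static>) -> &'static str {
    b.name()
}

fn main() {
    assert_eq!(takes_default(Box::new(1)), "i32");
    assert_eq!(takes_explicit(Box::new(2)), "i32");
    println!("ok");
}
```

Since the two signatures are interchangeable, the diagnostic would be clearer if it pointed at this elided bound.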
This seems weird since there isn't a visible lifetime anywhere. The error should explain that the `'static` lifetime is implicit on `dyn Foo` and suggest adding a lifetime parameter. | A-diagnostics,T-compiler,A-specialization,requires-nightly,F-min_specialization | low | Critical |
2,512,432,050 | godot | Exported Node Array loses its contents if scene is reloaded during invalid script | ### Tested versions
Reproduced in 4.3-stable
### System information
Godot v4.3.stable - Windows 10.0.19044 - GLES3 (Compatibility) - NVIDIA GeForce RTX 2080 (NVIDIA; 32.0.15.5585) - Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz (12 Threads)
### Issue description
If a script exports an `Array[Node]`, its contents will be lost if the script has an error and the scene file is reloaded in the editor.
This can also happen when using an external editor to write the script, but it is likely the same issue.
I also tried the same with `float` and `Array[int]`, but everything gets remembered properly in these cases.
https://github.com/user-attachments/assets/9ac089d0-a795-4d44-8df3-0e335e62a169
### Steps to reproduce
If you download the MRP, you can skip step 1 and 2.
1. Create a script with `@export var breaks: Array[Node] = []` in it.
2. Assign script to a node in scene, give the array contents, save the scene.
3. Introduce a parse error into the script (e.g. add `func` to the end of the file) and save.
4. Reload the scene using "Scene -> Reload Saved Scene".
5. Remove the error introduced in 3. and observe the exported node array being empty now.
### Minimal reproduction project (MRP)
[node-array-scene-restore.zip](https://github.com/user-attachments/files/16922648/node-array-scene-restore.zip)
| bug,topic:editor,regression | low | Critical |
2,512,436,698 | godot | Emojis have white artifacts in Labels | ### Tested versions
Reproduced in v4.3-stable
### System information
Godot v4.3.stable - Windows 10.0.19044 - GLES3 (Compatibility) - NVIDIA GeForce RTX 2080 (NVIDIA; 32.0.15.5585) - Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz (12 Threads)
### Issue description
Emojis seem to have problems rendering at their borders: they have white lines as artifacts on the edges. It looks like alpha bleeding, but I am not sure.


Changing texture filtering mode to Nearest removes the artifacts:


### Steps to reproduce
Create a Label or Label3D and enter `♥` into it. Zoom in very closely.
### Minimal reproduction project (MRP)
[emoji-line-rendering.zip](https://github.com/user-attachments/files/16922679/emoji-line-rendering.zip)
| bug,topic:gui | low | Minor |
2,512,439,952 | godot | Importing a sprite crash deletes project | ### Tested versions
- Happened in v4.2.2 (stable)
- Haven't tried anything else, as this just randomly happened.
### System information
Windows 11
### Issue description
I tried to import a sprite, and Godot crashed. When I opened it again and opened my game, everything was gone: the file for the project was gone, the sprite I tried to import was in the trash bin, and then opening Project Settings made it reimport everything.
### Steps to reproduce
I really have no idea what happened or why. Sorry.
### Minimal reproduction project (MRP)
Once again, I don't have one; this just randomly happened in a game I was working on. | bug,needs testing,topic:import | low | Critical |
2,512,451,607 | material-ui | Quirky layout with TextField and DataGrid inside Grid inside Dialog | ### Steps to reproduce
Link to live example: (required)
https://codesandbox.io/p/sandbox/quirky-layout-lm4ktv?file=%2Fsrc%2FApp.tsx%3A15%2C1-16%2C1
Steps:
1. be sure to run in a wide space, with some width to spare, as if the viewport is narrow the effect will not be evident.
2. click "show textfield"
### Current behavior
The Grid automatically expands iteratively (the library's behavior, not my code) until it reaches the limit of the dialog.
This doesn't happen if only the TextField or only the DataGrid is displayed.
The behavior is triggered the moment that both are visible.
### Expected behavior
Width of the dialog should be max(width_of_textfield, width_of_datagrid)
### Context
I have dynamic forms inside dialogs.
I don't want to work around this by capping the dialog's width, because the content is decided dynamically and I may have other components that need more space, so the dialog will widen legitimately.
### Your environment
I can't run `npx` in CodeSandbox, but this is the output on my machine:
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.6.1
Binaries:
Node: 18.17.1 - /usr/local/bin/node
npm: 9.6.7 - /usr/local/bin/npm
pnpm: 8.14.3 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: Not Found
Edge: 128.0.2739.67
Safari: 17.6
npmPackages:
@emotion/react: 11.11.3
@emotion/styled: 11.11.0
@mui/base: 5.0.0-beta.40
@mui/core-downloads-tracker: 5.15.19
@mui/icons-material: 5.15.19
@mui/lab: 5.0.0-alpha.170
@mui/material: 5.15.19
@mui/private-theming: 5.15.14
@mui/styled-engine: 5.15.14
@mui/system: 5.15.15
@mui/types: 7.2.16
@mui/utils: 5.16.6
@mui/x-date-pickers: 6.20.1
@mui/x-tree-view: 6.17.0
@types/react: 18.2.48
react: 18.2.0
react-dom: 18.2.0
typescript: 4.9.5
```
</details>
**Search keywords**: dialog grid datagrid | component: dialog,component: text field,component: Grid,enhancement,customization: css | low | Minor |
2,512,464,828 | rust | ICE: Layout::compute: unexpected type `_` | ### Code
```Rust
fn main() {
let non_secure_function =
core::mem::transmute::<fn() -> _, extern "C-cmse-nonsecure-call" fn() -> _>;
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (12b26c13f 2024-09-07)
binary: rustc
commit-hash: 12b26c13fba25c9e1bc2fdf05f3c2dbb851c83de
commit-date: 2024-09-07
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Error output
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error[E0658]: C-cmse-nonsecure-call ABI is experimental and subject to change
--> a.rs:3:50
|
3 | core::mem::transmute::<fn() -> _, extern "C-cmse-nonsecure-call" fn() -> _>;
| ^^^^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #81391 <https://github.com/rust-lang/rust/issues/81391> for more information
= help: add `#![feature(abi_c_cmse_nonsecure_call)]` to the crate attributes to enable
= note: this compiler was built on 2024-09-07; consider upgrading it if it is out of date
error: internal compiler error: compiler/rustc_ty_utils/src/layout.rs:678:13: Layout::compute: unexpected type `_`
thread 'rustc' panicked at compiler/rustc_ty_utils/src/layout.rs:678:13:
Box<dyn Any>
stack backtrace:
0: 0x794b2586ed7a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::had40ff6b0d363d8c
1: 0x794b260038d7 - core::fmt::write::h8896cd9c17192606
2: 0x794b26fd6991 - std::io::Write::write_fmt::h2513a98e60324138
3: 0x794b2586ebd2 - std::sys::backtrace::BacktraceLock::print::h97ff941b8ca3ca17
4: 0x794b258710f1 - std::panicking::default_hook::{{closure}}::h1240f9059a722e94
5: 0x794b25870f24 - std::panicking::default_hook::hef1ed95231316e5f
6: 0x794b24987a9f - std[484d8c24ec532d56]::panicking::update_hook::<alloc[eb1bfcc000f6131b]::boxed::Box<rustc_driver_impl[be7972101cd0935a]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x794b25871818 - std::panicking::rust_panic_with_hook::ha3e00c002dd0b838
8: 0x794b249c1761 - std[484d8c24ec532d56]::panicking::begin_panic::<rustc_errors[35018a33f26c22cc]::ExplicitBug>::{closure#0}
9: 0x794b249b4ef6 - std[484d8c24ec532d56]::sys::backtrace::__rust_end_short_backtrace::<std[484d8c24ec532d56]::panicking::begin_panic<rustc_errors[35018a33f26c22cc]::ExplicitBug>::{closure#0}, !>
10: 0x794b249b03e9 - std[484d8c24ec532d56]::panicking::begin_panic::<rustc_errors[35018a33f26c22cc]::ExplicitBug>
11: 0x794b249caa81 - <rustc_errors[35018a33f26c22cc]::diagnostic::BugAbort as rustc_errors[35018a33f26c22cc]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x794b24fe2884 - rustc_middle[da941fcb4ea689b5]::util::bug::opt_span_bug_fmt::<rustc_span[198bb2277a7734d7]::span_encoding::Span>::{closure#0}
13: 0x794b24fc8b1a - rustc_middle[da941fcb4ea689b5]::ty::context::tls::with_opt::<rustc_middle[da941fcb4ea689b5]::util::bug::opt_span_bug_fmt<rustc_span[198bb2277a7734d7]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x794b24fc89ab - rustc_middle[da941fcb4ea689b5]::ty::context::tls::with_context_opt::<rustc_middle[da941fcb4ea689b5]::ty::context::tls::with_opt<rustc_middle[da941fcb4ea689b5]::util::bug::opt_span_bug_fmt<rustc_span[198bb2277a7734d7]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x794b2257a260 - rustc_middle[da941fcb4ea689b5]::util::bug::bug_fmt
16: 0x794b269d9b3f - rustc_ty_utils[ce99518b9646809a]::layout::layout_of_uncached
17: 0x794b269d3fc6 - rustc_ty_utils[ce99518b9646809a]::layout::layout_of
18: 0x794b269d3f51 - rustc_query_impl[a6b34b6c656310e8]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a6b34b6c656310e8]::query_impl::layout_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 16usize]>>
19: 0x794b269d31d3 - rustc_query_system[43367dc2cdf302c7]::query::plumbing::try_execute_query::<rustc_query_impl[a6b34b6c656310e8]::DynamicConfig<rustc_query_system[43367dc2cdf302c7]::query::caches::DefaultCache<rustc_middle[da941fcb4ea689b5]::ty::ParamEnvAnd<rustc_middle[da941fcb4ea689b5]::ty::Ty>, rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 16usize]>>, false, true, false>, rustc_query_impl[a6b34b6c656310e8]::plumbing::QueryCtxt, false>
20: 0x794b269d2e6d - rustc_query_impl[a6b34b6c656310e8]::query_impl::layout_of::get_query_non_incr::__rust_end_short_backtrace
21: 0x794b26eed09c - rustc_middle[da941fcb4ea689b5]::query::plumbing::query_get_at::<rustc_query_system[43367dc2cdf302c7]::query::caches::DefaultCache<rustc_middle[da941fcb4ea689b5]::ty::ParamEnvAnd<rustc_middle[da941fcb4ea689b5]::ty::Ty>, rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 16usize]>>>
22: 0x794b268ab6d8 - <dyn rustc_hir_analysis[6ffe0ea406cdfc5e]::hir_ty_lowering::HirTyLowerer>::lower_fn_ty
23: 0x794b268b174b - <dyn rustc_hir_analysis[6ffe0ea406cdfc5e]::hir_ty_lowering::HirTyLowerer>::lower_ty
24: 0x794b268bcaad - <<rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::instantiate_value_path::CtorGenericArgsCtxt as rustc_hir_analysis[6ffe0ea406cdfc5e]::hir_ty_lowering::GenericArgsLowerer>::provided_kind
25: 0x794b26a5e248 - <rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::instantiate_value_path
26: 0x794b26a51d26 - <rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::check_expr_path
27: 0x794b26c9d218 - <rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
28: 0x794b26c9c084 - <rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::check_decl
29: 0x794b26c992df - <rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::check_block_with_expected
30: 0x794b26c9f853 - <rustc_hir_typeck[e4ce35102d8b94c1]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
31: 0x794b2611d5f9 - rustc_hir_typeck[e4ce35102d8b94c1]::check::check_fn
32: 0x794b26864032 - rustc_hir_typeck[e4ce35102d8b94c1]::typeck
33: 0x794b26863a65 - rustc_query_impl[a6b34b6c656310e8]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a6b34b6c656310e8]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 8usize]>>
34: 0x794b261ba53a - rustc_query_system[43367dc2cdf302c7]::query::plumbing::try_execute_query::<rustc_query_impl[a6b34b6c656310e8]::DynamicConfig<rustc_query_system[43367dc2cdf302c7]::query::caches::VecCache<rustc_span[198bb2277a7734d7]::def_id::LocalDefId, rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[a6b34b6c656310e8]::plumbing::QueryCtxt, false>
35: 0x794b261b910d - rustc_query_impl[a6b34b6c656310e8]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
36: 0x794b261b8d87 - <rustc_middle[da941fcb4ea689b5]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[6ffe0ea406cdfc5e]::check_crate::{closure#4}>::{closure#0}
37: 0x794b261b6c20 - rustc_hir_analysis[6ffe0ea406cdfc5e]::check_crate
38: 0x794b26779cff - rustc_interface[d79dc78584d485d1]::passes::run_required_analyses
39: 0x794b26dcf19e - rustc_interface[d79dc78584d485d1]::passes::analysis
40: 0x794b26dcf171 - rustc_query_impl[a6b34b6c656310e8]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a6b34b6c656310e8]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 1usize]>>
41: 0x794b26f9726e - rustc_query_system[43367dc2cdf302c7]::query::plumbing::try_execute_query::<rustc_query_impl[a6b34b6c656310e8]::DynamicConfig<rustc_query_system[43367dc2cdf302c7]::query::caches::SingleCache<rustc_middle[da941fcb4ea689b5]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[a6b34b6c656310e8]::plumbing::QueryCtxt, false>
42: 0x794b26f96fcf - rustc_query_impl[a6b34b6c656310e8]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
43: 0x794b26dc1a3c - rustc_interface[d79dc78584d485d1]::interface::run_compiler::<core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>, rustc_driver_impl[be7972101cd0935a]::run_compiler::{closure#0}>::{closure#1}
44: 0x794b26e93d90 - std[484d8c24ec532d56]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[d79dc78584d485d1]::util::run_in_thread_with_globals<rustc_interface[d79dc78584d485d1]::util::run_in_thread_pool_with_globals<rustc_interface[d79dc78584d485d1]::interface::run_compiler<core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>, rustc_driver_impl[be7972101cd0935a]::run_compiler::{closure#0}>::{closure#1}, core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>>::{closure#0}, core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>>
45: 0x794b26e943fa - <<std[484d8c24ec532d56]::thread::Builder>::spawn_unchecked_<rustc_interface[d79dc78584d485d1]::util::run_in_thread_with_globals<rustc_interface[d79dc78584d485d1]::util::run_in_thread_pool_with_globals<rustc_interface[d79dc78584d485d1]::interface::run_compiler<core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>, rustc_driver_impl[be7972101cd0935a]::run_compiler::{closure#0}>::{closure#1}, core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>>::{closure#0}, core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5eff1680495eda70]::result::Result<(), rustc_span[198bb2277a7734d7]::ErrorGuaranteed>>::{closure#1} as core[5eff1680495eda70]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
46: 0x794b26e947af - std::sys::pal::unix::thread::Thread::new::thread_start::h17c8afa67f401ea5
47: 0x794b2851839d - <unknown>
48: 0x794b2859d49c - <unknown>
49: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/tmp/im/2/rustc-ice-2024-09-08T16_22_30-798743.txt` to your bug report
query stack during panic:
panicked at /rustc/12b26c13fba25c9e1bc2fdf05f3c2dbb851c83de/compiler/rustc_type_ir/src/ty_kind.rs:797:17:
thread panicked while processing panic. aborting.
[1] 798743 IOT instruction rustc a.rs
```
</p>
</details>
| I-ICE,P-medium,T-compiler,regression-from-stable-to-stable,C-bug,A-layout,S-bug-has-test | low | Critical |
2,512,465,750 | rust | inconsistent exit code on ICE | Most of the time, rustc seems to exit with 101, but there are a couple of cases where rustc exits with 134
https://github.com/rust-lang/rust/issues/130015 for example shows exit code 101
but https://github.com/rust-lang/rust/issues/130104 shows exit code 134
this makes it harder for automated tooling to handle ICEs | T-compiler,C-bug | low | Minor |
2,512,485,744 | transformers | Track progress for VLMs refactoring | This issue tracks the progress on improving the handling and testing of Vision-Language Models. The main goals are to enhance/enable generation tests, handle other generation techniques like assisted decoding and ensure all models pass CI checks.
I already started working on it and merged/opened some PRs. This issue should help us track how much is left until VLMs are standardized from a modeling code perspective.
- [x] **Enable Generation Tests for VLMs**
- [x] Merged a PR to calculate and expand text with "image" tokens in processing. VLMs currently add only one placeholder per visual. During the modeling phase, we expand the inputs to match the actual length of image embeddings. This approach limits the functionality of `generate()`, especially in enabling other cache formats and torch.compile, and introduces hidden bugs. (https://github.com/huggingface/transformers/pull/30962)
- [ ] Verify that the addition of `processor_config.json` on the hub does not break existing functionality. Related discussion on Slack: https://huggingface.slack.com/archives/C01N44FJDHT/p171957701917237. TL;DR: we can't avoid breaking BC, but we still want the feature as it has so many benefits. So we'll just try again and hope that users don't use the old version anymore
- [x] **Fix Failing Edge Cases in Current VLMs**
- [x] Identified edge cases involving multi-image inputs and cache position preparation after merging the above PR (https://github.com/huggingface/transformers/pull/32907)
- [x] Introduce `num_image_tokens` attribute for specifying image sequence length. It ensures text expansion to the correct length based on the image backbone, otherwise we can't currently use the same processing class for different image backbones. https://github.com/huggingface/transformers/pull/33424
- [x] **Add Generation Tests to VLM Classes**
- [x] Already added in LLaVA-Onevision and Qwen2-VL (https://github.com/huggingface/transformers/pull/32673, https://github.com/huggingface/transformers/pull/33354)
- [x] Implement `GenerationTesterMixin` to include tests with both image and text inputs. Current tests accept only text as input. Enable for all models except BLIP ([draft available locally](https://github.com/huggingface/transformers/pull/33533))
- [x] Add tests for Idefics models and fix Mllama tests which are a bit different from llava style https://github.com/huggingface/transformers/pull/34062
- [x] **Special Case for BLIP**
- [x] Create a PR to adapt the testing suite for BLIP's `main_input_name`, which is not `input_ids` like in other models but `pixel_values`. Check that we don't cause a red CI if we rely on the model's `main_input_name` for tests (related or fixed by https://github.com/huggingface/transformers/pull/33685)
- [x] Remove (optionally) BLIP's custom generation logic and enable generation tests, that should also help us get rid of extra hacks for handling maximum length or `BOS` token in modeling code (https://github.com/huggingface/transformers/pull/34174)
- [ ] **Finalizing CI for VLMs**
- [x] Resolve `attention_Implementation` related failures to make CI fully happy for VLMs (https://github.com/huggingface/transformers/pull/32238)
- [ ] Ensure all VLMs pass all CI checks, including slow tests. Identify the reason and fix if there are failures (most probably failure is related to torch version, but need double check)
### Motivation
,
### Your contribution
. | WIP,Vision,Generation,Multimodal | low | Critical |
2,512,497,127 | godot | Godot XR gets stuck at "Next Up" screen on SteamVR. | ### Tested versions
Reproducible in v4.3.stable.custom_build [77dcf97d8] (https://github.com/Zylann/godot_voxel) and v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated Intel(R) Arc(TM) A770 Graphics (Intel Corporation; 32.0.101.5972) - AMD Ryzen 7 2700X Eight-Core Processor (16 Threads)
### Issue description
I am trying to stream my game to a Meta Quest 2 with ALVR, but it gets stuck on the "next up" screen in VR. Other games such as VRChat work, and what's more, the same Godot game works when streamed to the headset from Linux. I'm not sure if this is a bug specific to Windows or something I am doing wrong.
Also, when I move my headset I can see the movement in-game on my PC monitor, but the VR headset stays stuck on the "next up" screen.
This issue is not related to the godot_voxel extension by Zylann (https://github.com/Zylann/godot_voxel); it also occurs in the official build.
### Steps to reproduce
Start ALVR and click the "Launch SteamVR" button, connect the headset to ALVR, then launch the game from the Godot editor.
### Minimal reproduction project (MRP)
[xrtest.zip](https://github.com/user-attachments/files/16923108/xrtest.zip)
| bug,topic:xr | low | Critical |
2,512,516,945 | ui | [bug]: date-picker.json not available - 404 error | ### Describe the bug
When trying to install the date-picker component I receive a 404 error. I have tried both using npx and a plain browser, and also tried different styles (new york or default). I can download other components (and their json definition files) via npx or browser.
### Affected component/components
date-picker
### How to reproduce
Go to
https://ui.shadcn.com/r/styles/new-york/date-picker.json -> returns 404
https://ui.shadcn.com/r/styles/default/date-picker.json -> returns 404
or
npx shadcn@latest add date-picker
returns
```
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
The component at https://ui.shadcn.com/r/styles/new-york/date-picker.json was not found.
It may not exist at the registry. Please make sure it is a valid component.
```
### Codesandbox/StackBlitz link
https://github.com/shadcn-ui/ui/issues/new?assignees=&labels=bug&projects=&template=bug_report.yml&title=%5Bbug%5D%3A+
### Logs
```bash
see above for error messages
```
### System Info
```bash
npx / web browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,512,517,191 | rust | Bogus "implementation of `<whatever trait you want>` is not general enough" with RPITIT + async | I tried this code:
```rust
use std::future::Future;
fn bug() {
let call_me = Wrap(CallMeImpl { value: "test" });
assert_send(async {
call_me.call().await;
});
}
pub fn assert_send<F>(_future: F)
where
F: Future + Send,
{
}
pub trait CallMe {
fn call(&self) -> impl Future<Output = ()> + Send;
}
struct Wrap<T>(T);
impl<S> CallMe for Wrap<S>
where
S: CallMe + Send,
{
// adding `+ Send` to this RPIT fixes the issue
fn call(&self) -> impl Future<Output = ()> {
self.0.call()
}
}
#[derive(Debug, Clone, Copy)]
pub struct CallMeImpl<T> {
value: T,
}
impl<T> CallMe for CallMeImpl<T>
where
// Can replace `Send` by `ToString`, `Clone`, whatever. When removing the
// `Send` bound, the compiler produces a higher-ranked lifetime error.
T: Send + 'static,
{
fn call(&self) -> impl Future<Output = ()> {
async {}
}
}
```
I expected to see this happen: Compile successfully
Instead, this happened:
```
error: implementation of `Send` is not general enough
--> src/lib.rs:6:5
|
6 | / assert_send(async {
7 | | call_me.call().await;
8 | | });
| |______^ implementation of `Send` is not general enough
|
= note: `Send` would have to be implemented for the type `&'0 str`, for any lifetime `'0`...
= note: ...but `Send` is actually implemented for the type `&'1 str`, for some specific lifetime `'1`
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (9c01301c5 2024-09-05)
binary: rustc
commit-hash: 9c01301c52df5d2d7b6fe337707a74e011d68d6f
commit-date: 2024-09-05
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
happens with stable too.
I found a lot of similar-looking issues in https://github.com/rust-lang/rust/issues/110338, but none _quite_ like this one, so I chose to report a new one instead. Maybe I overlooked an existing bug report though, sorry if I did! | A-trait-system,T-compiler,A-impl-trait,C-bug,A-async-await | low | Critical |
2,512,527,321 | godot | "Unused varying" shader warning inconsistency based on position of function using the varying | ### Tested versions
Godot v4.4.dev unknown - Windows 10.0.19045 - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4050 Laptop GPU (NVIDIA; 32.0.15.6070) - 13th Gen Intel(R) Core(TM) i5-13500HX (20 Threads)
### System information
Godot v4.4.dev unknown - Windows 10.0.19045 - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4050 Laptop GPU (NVIDIA; 32.0.15.6070) - 13th Gen Intel(R) Core(TM) i5-13500HX (20 Threads)
### Issue description
When the "do_stuff" function is placed below the vertex function, it displays a warning for unused varying.

When it's placed above the vertex function, there is no warning.

### Steps to reproduce
Make a shader with a varying and a function that uses that varying, then move the function above or below the vertex function and check the warnings.
### Minimal reproduction project (MRP)
```glsl
shader_type spatial;
varying float test;
void vertex() {
test = 1.0f;
}
float do_stuff() {
return test + 2.0f;
}
``` | bug,needs testing,topic:shaders | low | Minor |
2,512,532,955 | next.js | Wrong `error.tsx` matched using both parallel and dynamic routes | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/wonderful-dewdney-gp48mc
### To Reproduce
1. Start the dev server and open the preview: _Home error_ and _Slot error_ are shown correctly.
2. Navigate to `/page/one`: two _Home error_ are shown.
### Current vs. Expected behavior
#### Current
1. Start the dev server and open the preview: _Home error_ and _Slot error_ are shown correctly.
2. Navigate to `/page/one`: two _Home error_ are shown.
#### Expected
1. Start the dev server and open the preview: _Home error_ and _Slot error_ are shown correctly.
2. Navigate to `/page/one`: _Home error_ and _Slot error_ are shown correctly.
### Provide environment information
```bash
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.19
pnpm: 8.15.4
Relevant Packages:
next: 14.2.8 // Latest available version is detected (14.2.8).
eslint-config-next: 14.2.1
react: 18.2.0
react-dom: 18.2.0
typescript: 5.4.5
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | bug,Parallel & Intercepting Routes | low | Critical |
2,512,541,359 | neovim | Mapping a key to `<Nop>` prevents `on_key` callback from being run | ### Problem
If a key is mapped to `<Nop>` then `on_key` does not fire.
### Steps to reproduce
```
:lua vim.on_key(function(key, typed) if typed == 'M' then print('Pressed M') end end)
```
1. Press `M`; observe `"Pressed M"`
2. `:nnoremap M H`; Press `M`; observe `"Pressed M"`
3. `:nnoremap M <Nop>`; Press `M`; observe nothing.
### Expected behavior
Expect to see "Pressed M" even when M is mapped to `<Nop>`
### Neovim version (nvim -v)
0.10.1
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
MacOS
### Terminal name/version
kitty
### $TERM environment variable
tmux-256color
### Installation
homebrew | enhancement,input,lua,mappings,events | low | Minor |
2,512,542,332 | TypeScript | Provide option to not include `sourceMappingURL` in the generated `.js` file when `sourceMap` is `true` | ### 🔍 Search Terms
- sourceMappingURL
- sourceMap
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
There should be a compilerOption to disable inclusion of `sourceMappingURL` in the generated `.js` file when `sourceMap` is `true`.
### 📃 Motivating Example
When `sourceMap` is `true` in tsconfig.json, the TypeScript compiler generates a `.map` file together with the `.js` file and also includes a `sourceMappingURL` comment in the `.js` file.
The inclusion of `sourceMappingURL` in the `.js` file is one way of linking generated code with its sourcemap.
Another way is to use [SourceMap](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/SourceMap) header.
Right now, to make use of SourceMap header without including `sourceMappingURL` in `.js` file, I have to:
1. set `sourceMap` to `true`
2. run `npx tsc`
3. set `sourceMap` to `false`
4. run `npx tsc` once again.
TypeScript should provide a compilerOption to disable the inclusion of `sourceMappingURL` in the generated `.js` file.
### 💻 Use Cases
1. What do you want to use this for?
I want to use this to serve sourcemaps via the SourceMap header without getting a warning in the browser. The benefit of using the SourceMap header is that the linking of the generated file to the map file can be controlled by the server without regenerating the `.js` file.
2. What shortcomings exist with current approaches?
`sourceMappingURL` is included in `.js` file.
3. What workarounds are you using in the meantime?
1. set `sourceMap` to `true`
2. run `npx tsc`
3. set `sourceMap` to `false`
4. run `npx tsc` once again. | Suggestion,Awaiting More Feedback | low | Minor |
2,512,555,663 | godot | Editor crashes as of nVidia driver v555.85 on Windows 11 | ### Tested versions
- v4.2.2-stable_mono_win64
- v4.3-stable_mono_win64
### System information
Windows 11 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 4070
### Issue description
As of nVidia driver version 555.85, the Godot editor crashes at random, especially when quickly moving through drop-down menus or opening any file dialog box.
I've tried Godot v4.2.2 and 4.3, and both crash the same way.
After changing the "Vulkan/OpenGL present method" to "Prefer native" in the nVidia settings, the crash does not seem to occur anymore. With nVidia driver version 552.44, this problem does not occur no matter which option is set.
https://github.com/user-attachments/assets/6edee96b-303e-4e85-a809-29981b6ad8aa

### Steps to reproduce
- Open a Godot project that is using Forward+ renderer
- Open any drop-down menu and quickly move your mouse over it
- Crash occurs
### Minimal reproduction project (MRP)
n/a | bug,topic:rendering,topic:thirdparty,needs testing,crash | low | Critical |
2,512,563,695 | flutter | Expose a flag for allowing to BackdropFilter filter over the ClipRect result | ### Use case
ImageFilter.blur with TileMode.decal typically produces smooth edges when applied to an entire widget. However, when we use a BackdropFilter and clip it over a specific region, we lose those smooth edges. This is because the clipping occurs after the blur effect is applied, resulting in sharp edges at the clipped boundaries.
Example for reproduction:
```dart
import 'package:flutter/material.dart';
import 'dart:ui';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Custom AppBar Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
final List<String> imageUrls = [
'https://picsum.photos/id/1018/800/600',
'https://picsum.photos/id/1015/800/600',
'https://picsum.photos/id/1019/800/600',
'https://picsum.photos/id/1016/800/600',
'https://picsum.photos/id/1021/800/600',
];
@override
Widget build(BuildContext context) {
return Scaffold(
body: Stack(
children: [
// Content - List of Image Cards
SafeArea(
child: ListView.builder(
padding: EdgeInsets.only(top: 80), // Space for the custom app bar
itemCount: imageUrls.length,
itemBuilder: (context, index) {
return Card(
margin: EdgeInsets.all(16),
elevation: 5,
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(15),
),
child: Column(
children: [
ClipRRect(
borderRadius: BorderRadius.vertical(top: Radius.circular(15)),
child: Image.network(
imageUrls[index],
fit: BoxFit.cover,
height: 200,
width: double.infinity,
),
),
Padding(
padding: EdgeInsets.all(16),
child: Text(
'Image ${index + 1}',
style: TextStyle(fontSize: 18, fontWeight: FontWeight.bold),
),
),
],
),
);
},
),
),
// Custom AppBar with BackdropFilter
Positioned(
top: 0,
left: 0,
right: 0,
child: ClipRect(
child: BackdropFilter(
filter: ImageFilter.blur(sigmaX: 10, sigmaY: 10, tileMode: TileMode.decal),
child: Container(
height: 100,
color: Colors.white.withOpacity(0.3),
child: SafeArea(
child: Center(
child: Text(
'Image Gallery',
style: TextStyle(
fontSize: 24,
fontWeight: FontWeight.bold,
color: Colors.black87,
),
),
),
),
),
),
),
),
],
),
);
}
}
```
The code uses BackdropFilter for a blurred app bar, but it creates sharp edges at the bottom. The desired effect is a smooth fade-out at the bottom edge, as shown in the right image, for a more pleasing transition between blurred and non-blurred areas.

### Proposal
I don't know what the correct API is: right now we have to wrap BackdropFilter in a ClipRect, so it makes sense that we lose the smooth edges. But since this is specific to BackdropFilter, we could probably add a flag to the BackdropFilter widget that controls which operation happens first?
| c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | low | Major |
2,512,565,584 | godot | Animation Slice system doesn't auto-append filetype despite warning you about overwriting existing files | ### Tested versions
Occurs in 4.3
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated GeForce GTX 1060 6GB - Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz (12 Threads)
### Issue description
Re-Importing a model to overwrite an Animation Resource that's in an Animation Library fails, corrupting the model.
Errors:
```
Saving of animation failed: res://Visual/Characters/Player/Animations/Locomotion/Falling_Direction_1
Resource file not found: res://Visual/Characters/Player/Animations/Locomotion/Falling_Direction_1 (expected type: Animation)
scene/resources/animation_library.cpp:54 - Condition "p_animation.is_null()" is true. Returning: ERR_INVALID_PARAMETER
```
Distinct from https://github.com/godotengine/godot/issues/95744 in that Godot doesn't crash when this occurs, and in that it also occurs when the Animation window is not open.
The Animation Slice system appears to be _very_ unstable at the moment
### Steps to reproduce
-Have a pre-existing Animation Resource saved to a file
-Open Advanced Import Settings on a new model and create an Animation Slice which, when Saved to File, would overwrite the aforementioned Animation Resource.
-Reimport the model
-The Reimport process will fail, Godot will present the error, and the source model will be corrupted
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/16923635/mrp.zip)
All set up- select the Untitled2.glb, enter its Advanced Import Settings, click on Armature_Action, then click Reimport | bug,topic:import | low | Critical |
2,512,569,052 | pytorch | Inconsistent Tensor._version Behaviour with torch.compile() | ### 🐛 Describe the bug
When using `torch.compile()` with a function that performs an in-place operation on a tensor, the tensor’s `._version` property is not incremented as expected. This discrepancy does not occur when `torch.compile()` is removed.
```python
import torch
@torch.compile()
def version_bump_test(x):
x[...] = 0
x = torch.ones(4)
print("v0", x._version) # Expected: 0
version_bump_test(x)
print("v1", x._version) # Expected: 1, Actual: 0
print(x) # Expected: tensor([0., 0., 0., 0.])
```
I'm additionally seeing some other inconsistencies. When run uncompiled, this raises the classic `a leaf Variable that requires grad is being used in an in-place operation.` When compiled, no error is raised.
```python
import torch
@torch.compile()
def version_bump_test(x):
x[...] = 0
x = torch.ones(4, requires_grad = True)
version_bump_test(x)
```
I initially assumed that this was only an inconsistency with the undocumented python side `._version` and everything was fine internally, but the difference in behaviour shown in the second case is making me nervous. It would be useful to me to have confirmation that these issues are only cosmetic and that autograd's tracking of inplace mutation can be trusted in compiled code.
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.4.0+cu121
[pip3] torchao==0.3.1
[pip3] torchaudio==2.4.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchtune==0.2.1
[pip3] torchvision==0.19.0+cu121
[conda] Could not collect
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @bdhirsh @chauhang @penguinwu @zou3519 | module: autograd,triaged,module: functionalization,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,512,572,768 | godot | MOUSE_MODE_CONFINED_HIDDEN doesn't work on the web platform | ### Tested versions
- Reproduced in: 4.3.stable & 4.2.2.stable.
### System information
Windows 10 - Chrome 128 - Godot 4.3.stable - Compatibility
### Issue description
My game uses `Input.mouse_mode = MOUSE_MODE_CONFINED_HIDDEN` because you're supposed to move your cursor a lot inside of it. The mode doesn't seem to work on web: it doesn't hide or confine the cursor.
I don't know if this is on purpose (it might be), but if it is, it's undocumented.
On windows (working correctly)

On web (not working)

Is there a way to fix this?
### Steps to reproduce
**How to test the MRP**
1. Play the project both on windows and the web
2. Open 'main.gd' file and uncomment line 7
3. Play again both on windows and the web (it won't work on the web)
### Minimal reproduction project (MRP)
[mouse_filter_test.zip](https://github.com/user-attachments/files/16923656/mouse_filter_test.zip)
| bug,platform:web,documentation,topic:input | low | Minor |
2,512,576,170 | rust | No backtrace from segfault handler after stack overflow in a proc macro on WSL | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
I have a presumably-extremely-broken proc macro which itself builds successfully but, when used, causes a segfault.
in `lib.rs`:
```Rust
use proc_macro::TokenStream;
use quote::{quote, ToTokens};
use syn::{parse_macro_input, Expr};
#[proc_macro]
pub fn eager(input: TokenStream) -> TokenStream {
let expr = parse_macro_input!(input as Expr);
let expanded = expand_macros(expr);
TokenStream::from(quote! { #expanded })
}
fn expand_macros(expr: Expr) -> Expr {
match expr {
Expr::Macro(expr_macro) => {
let macro_tokens = expr_macro.mac.to_token_stream();
match syn::parse2::<Expr>(macro_tokens) {
Ok(parsed_expr) => {
expand_macros(parsed_expr)
}
Err(_) => {
Expr::Verbatim(quote! {})
}
}
}
_ => expr,
}
}
```
in `main.rs`:
```Rust
use my_macros::eager;
macro_rules! test_macro {
($input:expr) => {
concat!("test macro applied: ", $input)
};
}
fn main() {
let out = eager!(test_macro!("test"));
}
```
tree structure:
```
├── Cargo.lock
├── Cargo.toml
└── src
├── lib.rs
└── tests
├── Cargo.lock
├── Cargo.toml
└── src
└── main.rs
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
also happens when running `cargo +nightly build`
### Error output
```
evan@DESKTOP-4LK1HQ6:~/my_macros/src/tests$ cargo build
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
Compiling quote v1.0.37
Compiling syn v2.0.77
Compiling my_macros v0.1.0 (/home/evan/my_macros)
Compiling my_macros_tests v0.1.0 (/home/evan/my_macros/src/tests)
error: could not compile `my_macros_tests` (bin "my_macros_tests")
Caused by:
process didn't exit successfully: `/home/evan/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name my_macros_tests --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=60 --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=7c1fe1052e96595b -C extra-filename=-7c1fe1052e96595b --out-dir /home/evan/my_macros/src/tests/target/debug/deps -C incremental=/home/evan/my_macros/src/tests/target/debug/incremental -L dependency=/home/evan/my_macros/src/tests/target/debug/deps --extern my_macros=/home/evan/my_macros/src/tests/target/debug/deps/libmy_macros-c5775cc6c3a5c013.so` (signal: 11, SIGSEGV: invalid memory reference)
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
</details>
| T-compiler,C-bug,A-proc-macros | low | Critical |
2,512,577,216 | go | x/tools/gopls: upstream feature requests | This issue is a place to record all pending upstream requests for changes to LSP and/or VS Code.
- [ ] https://github.com/microsoft/language-server-protocol/issues/2014
- [ ] https://github.com/microsoft/language-server-protocol/issues/1911
- [ ] https://github.com/microsoft/language-server-protocol/issues/1466
- [ ] https://github.com/microsoft/language-server-protocol/issues/1164
- [ ] https://github.com/microsoft/vscode/issues/207634
- [ ] https://github.com/microsoft/language-server-protocol/issues/1885
- [ ] https://github.com/microsoft/language-server-protocol/issues/2037
| gopls,Tools | low | Minor |
2,512,590,160 | pytorch | [MPS] Possible persistent infinite loop in `nn.ReplicationPad1d` | ### 🐛 Describe the bug
⚠️ Please don't try to reproduce unless you're prepared to reboot your system.
Edge-case input leads to what seems to be a persistent infinite loop. This issue was discovered in #134184 and left as follow-up work. There is also a correctness bug in `nn.ReplicationPad1d` (see #135447).
`shape = [65536, 2, 4]` in the MRE below results in 100% GPU utilization, which will persist even if the Python process is killed. Other shapes (e.g. `shape = [65535, 2, 4]` or `shape = [65537, 2, 4]`) will not produce this behavior.
```python
import torch
device = 'mps'
shape = [65536, 2, 4]
pl, pr = 3, 4
x = torch.randn(shape, device=device, requires_grad=True)
model = torch.nn.ReplicationPad1d((pl, pr))
out = model(x)
g = torch.randn_like(out)
out.backward(g)
print(x.grad[:, :, 1 : -1])
```
### Versions
PyTorch version: 2.5.0a0+git042f2f7
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.12.1
[pip3] torch==2.5.0a0+git042f2f7
[pip3] torch-tb-profiler==0.4.3
[pip3] torchvision==0.20.0a0+0d80848
[pip3] triton==3.0.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] torch 2.5.0a0+git042f2f7 dev_0 <develop>
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.20.0a0+0d80848 dev_0 <develop>
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | high priority,module: performance,triaged,module: mps | medium | Critical |
2,512,608,824 | TypeScript | [ServerErrors][TypeScript] 5.7.0-dev.20240904 vs 5.5.4 | The following errors were reported by 5.7.0-dev.20240904 vs 5.5.4
[Pipeline that generated this bug](https://typescript.visualstudio.com/TypeScript/_build?definitionId=48)
[Logs for the pipeline run](https://typescript.visualstudio.com/TypeScript/_build/results?buildId=163526)
[File that generated the pipeline](https://github.com/microsoft/typescript-error-deltas/blob/main/azure-pipelines-gitTests.yml)
This run considered 300 popular TS repos from GH (after skipping the top 0).
<details>
<summary>Successfully analyzed 283 of 300 visited repos</summary>
| Outcome | Count |
|---------|-------|
| Detected interesting changes | 15 |
| Detected no interesting changes | 268 |
| Git clone failed | 3 |
| Language service disabled in new TS | 1 |
| Unknown failure | 13 |
</details>
## Investigation Status
| Repo | Errors | Outcome |
|------|--------|---------|
|mobx|stack overflow|repros on both 5.7 and 5.5 locally|
|angular-cli|stack overflow|repros on both 5.7 and 5.5 locally| | Bug | medium | Critical |
2,512,613,813 | pytorch | [MPS] Correctness issue in backward pass of `nn.ReplicationPad1d` and `nn.ReplicationPad2d` | ### 🐛 Describe the bug
This issue was discovered in #134184 and left as follow-up work.
There's a correctness issue in `nn.ReplicationPad1d` and `nn.ReplicationPad2d` for certain input shapes. The reproducer below targets `nn.ReplicationPad1d`: the first shape shows no correctness issue, but the second does.
```python
import torch
shapes = ([2, 65736, 4], [65736, 2, 4])
pl, pr = 3, 4
for shape in shapes:
x_cpu = torch.randn(shape, device='cpu', requires_grad=True)
x_mps = x_cpu.clone().detach().to('mps').requires_grad_(True)
model = torch.nn.ReplicationPad1d((pl, pr))
# forward
out_cpu = model(x_cpu)
out_mps = model(x_mps)
print(f"{((x_cpu - x_mps.cpu()).abs() > 1e-5).sum() = }")
# backward
g_cpu = torch.randn_like(out_cpu)
g_mps = g_cpu.clone().detach().to('mps').requires_grad_(True)
print(f"{((g_cpu - g_mps.cpu()).abs() > 1e-5).sum() = }")
out_cpu.backward(g_cpu)
out_mps.backward(g_mps)
print(f"{((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = }")
print()
# Output:
# ((x_cpu - x_mps.cpu()).abs() > 1e-5).sum() = tensor(0)
# ((g_cpu - g_mps.cpu()).abs() > 1e-5).sum() = tensor(0)
# ((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = tensor(0)
#
# ((x_cpu - x_mps.cpu()).abs() > 1e-5).sum() = tensor(0)
# ((g_cpu - g_mps.cpu()).abs() > 1e-5).sum() = tensor(0)
# ((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = tensor(524283)
```
A reproducer for `nn.ReplicationPad2d` can be found in https://github.com/pytorch/pytorch/blob/defb515306fc53ec62e92937a5a76fa5cbc05b84/test/test_nn.py#L8612-L8656
### Versions
PyTorch version: 2.5.0a0+git042f2f7
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.12.1
[pip3] torch==2.5.0a0+git042f2f7
[pip3] torch-tb-profiler==0.4.3
[pip3] torchvision==0.20.0a0+0d80848
[pip3] triton==3.0.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] torch 2.5.0a0+git042f2f7 dev_0 <develop>
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.20.0a0+0d80848 dev_0 <develop>
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @kulinseth @malfet @DenisVieriu97 @jhavukainen | module: autograd,triaged,module: correctness (silent),module: mps | low | Critical |
2,512,624,807 | godot | CompositorEffect in Mobile renderer throws "Image needs the TEXTURE_USAGE_STORAGE_BIT usage flag" | ### Tested versions
- Reproducible in: v4.3.stable
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Wed, 04 Sep 2024 15:16:37 +0000 - Tty - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU - 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 Threads)
### Issue description
I'm trying to use compositor effects in Godot 4.3 with the Mobile renderer. According to the docs this should work (https://docs.godotengine.org/en/stable/tutorials/rendering/compositor.html; the first note says "only supported by the Mobile and Forward+ renderers"). I followed the tutorial in the docs and implemented a simple invert-color effect for testing: it works with the Forward+ renderer but breaks when using Mobile.
The errors printed to console are:
* Image (binding: 0, index 0) needs the TEXTURE_USAGE_STORAGE_BIT usage flag set in order to be used as uniform.
* servers/rendering/renderer_rd/uniform_set_cache_rd.h:130 - Condition "rid.is_null()" is true. Returning: rid
* servers/rendering/rendering_device.cpp:4392 - Parameter "uniform_set" is null.
* Uniforms were never supplied for set (0) at the time of drawing, which are required by the pipeline.
### Steps to reproduce
Follow the tutorial from the docs using the Mobile renderer.
### Minimal reproduction project (MRP)
https://github.com/manuelmaceira/MRP-CompositorEffectsMobile | bug,topic:rendering,confirmed | low | Critical |
2,512,633,446 | vscode | Persisted files associated with a test run | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
I'd like to persist files in such a way that they're associated with a test run. Currently I'm storing files in the extension's storage directory and I use `TestRun.onDidDispose` to delete files when a test run is discarded, but that cleanup only works until VSCode is closed. Test runs (can) persist across restarts and it would be valuable to retain associated files across restarts but as far as I can tell there's no way for me to inform VSCode, "When {an old test run loaded from disk} is cleared, also clear {an associated file}". If old test runs were exposed in some way, I could potentially scan those and use onDidDispose, but A) AFAIK old runs are not exposed and B) I don't have any real way of differentiating multiple unnamed runs so I don't know which run should be associated with which file.
CC @connor4312 | feature-request,testing | low | Major |
2,512,664,279 | tauri | [bug] Conflicts with DLLS written by go | ### Describe the bug
While working with tauri 2.0-rc, I used Rust to call a DLL compiled from Go. I found that after the DLL's exported function finished executing, the UI would exit without printing any error or stack trace, as if something had called exit. Neither my DLL nor my Rust code calls anything like that, so I wonder what causes the exit.
### Reproduction
_No response_
### Expected behavior
In testing, calling a DLL written in C++ causes no exit after the method finishes executing. However, when a DLL generated by Go (or certain other languages) finishes executing, the UI process terminates.
### Full `tauri info` output
```text
pnpm tauri dev
```
### Stack trace
```text
He exits after printing without stack
```
### Additional context
_No response_ | type: bug,help wanted,status: needs more info,status: needs triage | low | Critical |
2,512,681,210 | pytorch | DISABLED test_embedding_dynamic_shapes_cpu (__main__.DynamicShapesCodegenCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_embedding_dynamic_shapes_cpu&suite=DynamicShapesCodegenCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29842062729).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_embedding_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
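For step 3, a minimal grep sketch (`ci_log.txt` is a hypothetical filename standing in for the raw log saved from the workflow page; `-n` prints line numbers and `-C 2` shows surrounding context):

```shell
# Create a stand-in log file for illustration; in practice, save the raw CI log here.
printf 'collecting tests\nFAILED DynamicShapesCodegenCpuTests.test_embedding_dynamic_shapes_cpu\nRuntimeError: std::bad_alloc\n' > ci_log.txt
# Find every occurrence of the flaky test, with a little context around each hit.
grep -n -C 2 'test_embedding_dynamic_shapes_cpu' ci_log.txt
```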
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 4878, in test_embedding
self.common(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 399, in common
return check_codegen(
^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 78, in check_codegen
_, code = run_and_get_cpp_code(run, *example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_inductor/utils.py", line 1936, in run_and_get_cpp_code
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 72, in run
def run(*ex, **kwargs):
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
^^^^^^^^
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenCpuTests.test_embedding_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,512,697,187 | PowerToys | MouseWithoutBorders | ### Microsoft PowerToys version
0.84.0
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Restart both computers and make sure the v0.84.0 update is installed on Windows. One computer is running as admin and one is not. Service mode is off on both. Move the mouse to the edge of the screen.
### ✔️ Expected Behavior
Mouse to pass from one screen to the other when touching the edge. This used to work before the v0.84.0 update
### ❌ Actual Behavior
After v0.84.0 update, the mouse passing from one screen to another completely stopped working. The mouse can no longer move between devices
### Other Software
Windows | Issue-Bug,Needs-Triage | low | Minor |
2,512,698,575 | PowerToys | FEATURE REQUEST : : PowerToys : Keyboard Manager : REQUEST ability to remap a Single Key (Esc) for example, to a specific App, and have that specific App use a Shortcut instead | ### Description of the new feature / enhancement
1. I have multiple apps that I want consistent "Esc" function with.
2. I can not assign the "Esc" key to Remap for a Specific App && Remap that to a Shortcut for a Specific App.
3. It would be more helpful IF : : I could select a SPECIFIC APP to "Remap a Key" to a APP-SPECIFIC "Shortcut/Chord"
4. Example: Remap an app that uses the "Esc" Key to "Exit", but my OTHER Apps I have Remapped to the "ALT+ESC" Shortcut KEY COMBO, and I want ALL the Apps I use for that "Workflow" to use the SAME "KEY COMBO" to "Exit".... in this case, for example; "ALT+ESC". I CAN NOT DO THIS currently with PowerToys - Keyboard Manager.
5. Please EXTEND Keyboard Manager to allow 'Remap a Key' and 'Remap a Shortcut' to BLEND: NOT REQUIRE "SINGLE KEYS" _WITHOUT_ a "_Modifier Key_ at Beginning" to be "Remapped" as a "KEY+COMBO" ...and filter that Remap to work with a SPECIFIC APP as the 'Remap A Shortcut' works now.
THIS WOULD BE EXTREMELY HELPFUL!!!
PS if there is a BETTER way to Accomplish what I want to do that you know of, please, suggest! This issue keeps Popping me out of a "Flow State" for various tasks...
### Scenario when this would be used?
Example: Remap an app that uses the "Esc" Key to "Exit", but my OTHER Apps I have Remapped to the "ALT+ESC" Shortcut KEY COMBO, and I want ALL the Apps I use for that "Workflow" to use the SAME "KEY COMBO" to "Exit".... in this case, for example; "ALT+ESC". I CAN NOT DO THIS currently with PowerToys - Keyboard Manager.
SCENARIO WHEN USED : ANY TIME I need 100% 'reliable' Muscle Memory for a grouping of tasks aka a "Specific Workflow" that does not ship with 100% IDENTICAL Events with 100% IDENTICAL Keycap KEY / KEY+COMBO Events....
### Supporting information
If my REQUEST is only as CLEAR AS MUD, (lol) Please reply with such, so I can attempt to clarify what I wish/need/am Requesting.
Thanks! | Needs-Triage | low | Minor |
2,512,737,034 | pytorch | Partitioner error when functionalizing RNG state for LLAMA2 + LoRA 70b decoder layer graph | ### 🐛 Describe the bug
When pretraining the LLAMA2 70b model with LoRA under torch.compile in our environment, we saw an exception raised from partitioner.py while splitting the joint graph into forward and backward graphs.
```
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1390, in call_user_compiler
[rank3]: compiled_fn = compiler_fn(gm, self.example_inputs())
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
[rank3]: compiled_gm = compiler_fn(gm, example_inputs)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 1990, in __call__
[rank3]: return self.compiler_fn(model_, inputs_, **self.kwargs)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/dynamo/compile_backend/backends.py", line 46, in hpu_backend
[rank3]: return aot_autograd(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py", line 69, in __call__
[rank3]: cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 954, in aot_module_simplified
[rank3]: compiled_fn, _ = create_aot_dispatcher_function(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
[rank3]: r = func(*args, **kwargs)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 687, in create_aot_dispatcher_function
[rank3]: compiled_fn, fw_metadata = compiler_fn(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 290, in aot_dispatch_autograd
[rank3]: fw_module, bw_module = aot_config.partition_fn(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/dynamo/compile_backend/partition_fn.py", line 100, in hpu_partition
[rank3]: return default_partition(joint_module, _joint_inputs, num_fwd_outputs=num_fwd_outputs)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/partitioners.py", line 380, in default_partition
[rank3]: return min_cut_rematerialization_partition(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/partitioners.py", line 1774, in min_cut_rematerialization_partition
[rank3]: fw_module, bw_module = functionalize_rng_ops(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/partitioners.py", line 653, in functionalize_rng_ops
[rank3]: bw_node = bw_graph_rng_ops[node.name]
[rank3]: torch._dynamo.exc.BackendCompilerFailed: backend='hpu_backend' raised:
[rank3]: KeyError: 'native_dropout_3'
```
Because the layer has dropout ops in the forward pass, and we have activation checkpointing enabled on the decoder layer, AOT needs to functionalize the RNG-related ops. The issue happens in functionalize_rng_ops.
Because the running environment is complex and has many components, I dumped the graph modules (the joint graph and the split forward and backward modules) right before functionalize_rng_ops. (These are not the final forward and backward modules returned to the user.)
From these graphs, we can see that the backward graph has no native_dropout_3 op, even though it exists in the forward graph. This breaks the current code's assumption. (Maybe the graph split is correct and this is a legitimate case to handle.)
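The broken assumption can be illustrated with a small standalone sketch (all names here are hypothetical; this only mimics the by-name pairing of forward and backward RNG ops that functionalize_rng_ops performs, not its real implementation):

```python
def pair_rng_ops(fw_rng_ops, bw_rng_ops):
    """Pair each forward RNG op with its recomputed backward counterpart by name.

    Mirrors the lookup `bw_graph_rng_ops[node.name]` in functionalize_rng_ops:
    if the backward graph lacks a same-named RNG op, this raises KeyError.
    """
    return [(name, bw_rng_ops[name]) for name in fw_rng_ops]

# Forward graph has four dropouts; backward graph only recomputed three.
fw = {"native_dropout": "fw0", "native_dropout_1": "fw1",
      "native_dropout_2": "fw2", "native_dropout_3": "fw3"}
bw = {"native_dropout": "bw0", "native_dropout_1": "bw1",
      "native_dropout_2": "bw2"}  # native_dropout_3 is missing

try:
    pair_rng_ops(fw, bw)
except KeyError as e:
    print(f"KeyError: {e}")  # prints: KeyError: 'native_dropout_3'
```

This reproduces the shape of the failure: the crash is not in the graphs themselves but in the pairing step that assumes every forward RNG op reappears in the backward graph.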
```
joint_module joint_helper()
def forward(self, primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7, primals_8, primals_9, primals_10, primals_11, primals_12, primals_13, primals_14, primals_15, primals_16, primals_17, primals_18, primals_19, primals_20, primals_21, primals_22, primals_23, primals_24, tangents_1):
rms_norm = torch.ops.hpu.rms_norm.default(primals_22, primals_1, 1e-05)
getitem = rms_norm[0]
getitem_1 = rms_norm[1]; rms_norm = None
transpose = torch.ops.aten.transpose.int(primals_2, 0, 1); primals_2 = None
mul = primals_20 * primals_21
view = torch.ops.aten.view.default(getitem, [mul, 8192])
mm = torch.ops.aten.mm.default(view, transpose); transpose = None
_unsafe_view = torch.ops.aten._unsafe_view.default(mm, [primals_20, primals_21, 8192]); mm = None
native_dropout = torch.ops.aten.native_dropout.default(getitem, 0.05, True)
getitem_2 = native_dropout[0]; native_dropout = None
transpose_1 = torch.ops.aten.transpose.int(primals_3, 0, 1); primals_3 = None
view_1 = torch.ops.aten.view.default(getitem_2, [mul, 8192])
mm_1 = torch.ops.aten.mm.default(view_1, transpose_1); view_1 = transpose_1 = None
_unsafe_view_1 = torch.ops.aten._unsafe_view.default(mm_1, [primals_20, primals_21, 4]); mm_1 = None
transpose_2 = torch.ops.aten.transpose.int(primals_4, 0, 1); primals_4 = None
view_2 = torch.ops.aten.view.default(_unsafe_view_1, [mul, 4])
mm_2 = torch.ops.aten.mm.default(view_2, transpose_2); view_2 = transpose_2 = None
_unsafe_view_2 = torch.ops.aten._unsafe_view.default(mm_2, [primals_20, primals_21, 8192]); mm_2 = None
mul_3 = torch.ops.aten.mul.Tensor(_unsafe_view_2, 4.0); _unsafe_view_2 = None
add = torch.ops.aten.add.Tensor(_unsafe_view, mul_3); _unsafe_view = mul_3 = None
transpose_3 = torch.ops.aten.transpose.int(primals_5, 0, 1); primals_5 = None
mm_3 = torch.ops.aten.mm.default(view, transpose_3); transpose_3 = None
_unsafe_view_3 = torch.ops.aten._unsafe_view.default(mm_3, [primals_20, primals_21, 1024]); mm_3 = None
native_dropout_1 = torch.ops.aten.native_dropout.default(getitem, 0.05, True)
getitem_4 = native_dropout_1[0]; native_dropout_1 = None
transpose_4 = torch.ops.aten.transpose.int(primals_6, 0, 1); primals_6 = None
view_4 = torch.ops.aten.view.default(getitem_4, [mul, 8192])
mm_4 = torch.ops.aten.mm.default(view_4, transpose_4); view_4 = transpose_4 = None
_unsafe_view_4 = torch.ops.aten._unsafe_view.default(mm_4, [primals_20, primals_21, 4]); mm_4 = None
transpose_5 = torch.ops.aten.transpose.int(primals_7, 0, 1); primals_7 = None
view_5 = torch.ops.aten.view.default(_unsafe_view_4, [mul, 4])
mm_5 = torch.ops.aten.mm.default(view_5, transpose_5); view_5 = transpose_5 = None
_unsafe_view_5 = torch.ops.aten._unsafe_view.default(mm_5, [primals_20, primals_21, 1024]); mm_5 = None
mul_7 = torch.ops.aten.mul.Tensor(_unsafe_view_5, 4.0); _unsafe_view_5 = None
add_1 = torch.ops.aten.add.Tensor(_unsafe_view_3, mul_7); _unsafe_view_3 = mul_7 = None
transpose_6 = torch.ops.aten.transpose.int(primals_8, 0, 1); primals_8 = None
mm_6 = torch.ops.aten.mm.default(view, transpose_6); view = transpose_6 = None
_unsafe_view_6 = torch.ops.aten._unsafe_view.default(mm_6, [primals_20, primals_21, 1024]); mm_6 = None
native_dropout_2 = torch.ops.aten.native_dropout.default(getitem, 0.05, True); getitem = None
getitem_6 = native_dropout_2[0]; native_dropout_2 = None
transpose_7 = torch.ops.aten.transpose.int(primals_9, 0, 1); primals_9 = None
view_7 = torch.ops.aten.view.default(getitem_6, [mul, 8192])
mm_7 = torch.ops.aten.mm.default(view_7, transpose_7); view_7 = transpose_7 = None
_unsafe_view_7 = torch.ops.aten._unsafe_view.default(mm_7, [primals_20, primals_21, 4]); mm_7 = None
transpose_8 = torch.ops.aten.transpose.int(primals_10, 0, 1); primals_10 = None
view_8 = torch.ops.aten.view.default(_unsafe_view_7, [mul, 4])
mm_8 = torch.ops.aten.mm.default(view_8, transpose_8); view_8 = transpose_8 = None
_unsafe_view_8 = torch.ops.aten._unsafe_view.default(mm_8, [primals_20, primals_21, 1024]); mm_8 = None
mul_11 = torch.ops.aten.mul.Tensor(_unsafe_view_8, 4.0); _unsafe_view_8 = None
add_2 = torch.ops.aten.add.Tensor(_unsafe_view_6, mul_11); _unsafe_view_6 = mul_11 = None
view_9 = torch.ops.aten.view.default(add, [primals_20, primals_21, 64, 128]); add = None
transpose_9 = torch.ops.aten.transpose.int(view_9, 1, 2); view_9 = None
view_10 = torch.ops.aten.view.default(add_1, [primals_20, primals_21, -1, 128]); add_1 = None
transpose_10 = torch.ops.aten.transpose.int(view_10, 1, 2); view_10 = None
view_11 = torch.ops.aten.view.default(add_2, [primals_20, primals_21, -1, 128]); add_2 = None
transpose_11 = torch.ops.aten.transpose.int(view_11, 1, 2); view_11 = None
slice_1 = torch.ops.aten.slice.Tensor(primals_18, 0, 0, primals_21); primals_18 = None
slice_2 = torch.ops.aten.slice.Tensor(primals_19, 0, 0, primals_21); primals_19 = None
unsqueeze = torch.ops.aten.unsqueeze.default(slice_1, 0); slice_1 = None
unsqueeze_1 = torch.ops.aten.unsqueeze.default(unsqueeze, 0); unsqueeze = None
unsqueeze_2 = torch.ops.aten.unsqueeze.default(slice_2, 0); slice_2 = None
unsqueeze_3 = torch.ops.aten.unsqueeze.default(unsqueeze_2, 0); unsqueeze_2 = None
rotary_pos_embedding = torch.ops.hpu.rotary_pos_embedding.default(transpose_9, unsqueeze_3, unsqueeze_1, primals_24, 0, 0); transpose_9 = None
rotary_pos_embedding_1 = torch.ops.hpu.rotary_pos_embedding.default(transpose_10, unsqueeze_3, unsqueeze_1, primals_24, 0, 0); transpose_10 = unsqueeze_3 = unsqueeze_1 = primals_24 = None
view_12 = torch.ops.aten.view.default(rotary_pos_embedding, [primals_20, 8, 8, primals_21, 128]); rotary_pos_embedding = None
view_13 = torch.ops.aten.view.default(rotary_pos_embedding_1, [primals_20, 8, 1, primals_21, 128]); rotary_pos_embedding_1 = None
view_14 = torch.ops.aten.view.default(transpose_11, [primals_20, 8, 1, primals_21, 128]); transpose_11 = None
sdpa_fwd_non_dropout = torch.ops.hpu.sdpa_fwd_non_dropout.default(view_12, view_13, view_14, None, 0.0, 0.08838834764831843, True, 'none', None, 'left'); view_12 = view_13 = view_14 = None
getitem_8 = sdpa_fwd_non_dropout[0]; sdpa_fwd_non_dropout = None
view_15 = torch.ops.aten.view.default(getitem_8, [primals_20, 64, primals_21, 128]); getitem_8 = None
transpose_12 = torch.ops.aten.transpose.int(view_15, 1, 2); view_15 = None
clone_4 = torch.ops.aten.clone.default(transpose_12, memory_format = torch.contiguous_format); transpose_12 = None
view_16 = torch.ops.aten.view.default(clone_4, [primals_20, primals_21, -1]); clone_4 = None
transpose_13 = torch.ops.aten.transpose.int(primals_11, 0, 1); primals_11 = None
view_17 = torch.ops.aten.view.default(view_16, [mul, 8192])
mm_9 = torch.ops.aten.mm.default(view_17, transpose_13); view_17 = transpose_13 = None
_unsafe_view_9 = torch.ops.aten._unsafe_view.default(mm_9, [primals_20, primals_21, 8192]); mm_9 = None
native_dropout_3 = torch.ops.aten.native_dropout.default(view_16, 0.05, True); view_16 = None
getitem_11 = native_dropout_3[0]; native_dropout_3 = None
transpose_14 = torch.ops.aten.transpose.int(primals_12, 0, 1); primals_12 = None
view_18 = torch.ops.aten.view.default(getitem_11, [mul, 8192])
mm_10 = torch.ops.aten.mm.default(view_18, transpose_14); view_18 = transpose_14 = None
_unsafe_view_10 = torch.ops.aten._unsafe_view.default(mm_10, [primals_20, primals_21, 4]); mm_10 = None
transpose_15 = torch.ops.aten.transpose.int(primals_13, 0, 1); primals_13 = None
view_19 = torch.ops.aten.view.default(_unsafe_view_10, [mul, 4])
mm_11 = torch.ops.aten.mm.default(view_19, transpose_15); view_19 = transpose_15 = None
_unsafe_view_11 = torch.ops.aten._unsafe_view.default(mm_11, [primals_20, primals_21, 8192]); mm_11 = None
mul_15 = torch.ops.aten.mul.Tensor(_unsafe_view_11, 4.0); _unsafe_view_11 = None
add_3 = torch.ops.aten.add.Tensor(_unsafe_view_9, mul_15); _unsafe_view_9 = mul_15 = None
add_4 = torch.ops.aten.add.Tensor(add_3, primals_22); add_3 = None
rms_norm_1 = torch.ops.hpu.rms_norm.default(add_4, primals_14, 1e-05)
getitem_13 = rms_norm_1[0]
getitem_14 = rms_norm_1[1]; rms_norm_1 = None
transpose_16 = torch.ops.aten.transpose.int(primals_15, 0, 1); primals_15 = None
view_20 = torch.ops.aten.view.default(getitem_13, [mul, 8192]); getitem_13 = None
mm_12 = torch.ops.aten.mm.default(view_20, transpose_16)
_unsafe_view_12 = torch.ops.aten._unsafe_view.default(mm_12, [primals_20, primals_21, 28672]); mm_12 = None
silu = torch.ops.aten.silu.default(_unsafe_view_12)
transpose_17 = torch.ops.aten.transpose.int(primals_16, 0, 1); primals_16 = None
mm_13 = torch.ops.aten.mm.default(view_20, transpose_17); view_20 = None
_unsafe_view_13 = torch.ops.aten._unsafe_view.default(mm_13, [primals_20, primals_21, 28672]); mm_13 = None
mul_18 = torch.ops.aten.mul.Tensor(silu, _unsafe_view_13)
transpose_18 = torch.ops.aten.transpose.int(primals_17, 0, 1); primals_17 = None
view_22 = torch.ops.aten.view.default(mul_18, [mul, 28672]); mul_18 = None
mm_14 = torch.ops.aten.mm.default(view_22, transpose_18); view_22 = None
_unsafe_view_14 = torch.ops.aten._unsafe_view.default(mm_14, [primals_20, primals_21, 8192]); mm_14 = None
add_5 = torch.ops.aten.add.Tensor(_unsafe_view_14, add_4); _unsafe_view_14 = None
view_23 = torch.ops.aten.view.default(tangents_1, [mul, 8192])
transpose_19 = torch.ops.aten.transpose.int(transpose_18, 0, 1); transpose_18 = None
mm_15 = torch.ops.aten.mm.default(view_23, transpose_19); view_23 = transpose_19 = None
view_24 = torch.ops.aten.view.default(mm_15, [primals_20, primals_21, 28672]); mm_15 = None
mul_20 = torch.ops.aten.mul.Tensor(view_24, silu); silu = None
mul_21 = torch.ops.aten.mul.Tensor(view_24, _unsafe_view_13); view_24 = _unsafe_view_13 = None
view_25 = torch.ops.aten.view.default(mul_20, [mul, 28672]); mul_20 = None
transpose_20 = torch.ops.aten.transpose.int(transpose_17, 0, 1); transpose_17 = None
mm_16 = torch.ops.aten.mm.default(view_25, transpose_20); view_25 = transpose_20 = None
view_26 = torch.ops.aten.view.default(mm_16, [primals_20, primals_21, 8192]); mm_16 = None
sigmoid = torch.ops.aten.sigmoid.default(_unsafe_view_12)
full = torch.ops.aten.full.default([primals_20, primals_21, 28672], 1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False)
sub = torch.ops.aten.sub.Tensor(full, sigmoid); full = None
mul_22 = torch.ops.aten.mul.Tensor(_unsafe_view_12, sub); _unsafe_view_12 = sub = None
add_6 = torch.ops.aten.add.Scalar(mul_22, 1); mul_22 = None
mul_23 = torch.ops.aten.mul.Tensor(sigmoid, add_6); sigmoid = add_6 = None
mul_24 = torch.ops.aten.mul.Tensor(mul_21, mul_23); mul_21 = mul_23 = None
view_27 = torch.ops.aten.view.default(mul_24, [mul, 28672]); mul_24 = mul = None
transpose_21 = torch.ops.aten.transpose.int(transpose_16, 0, 1); transpose_16 = None
mm_17 = torch.ops.aten.mm.default(view_27, transpose_21); view_27 = transpose_21 = None
view_28 = torch.ops.aten.view.default(mm_17, [primals_20, primals_21, 8192]); mm_17 = None
add_7 = torch.ops.aten.add.Tensor(view_26, view_28); view_26 = view_28 = None
rms_norm_backward = torch.ops.hpu.rms_norm_backward.default(add_7, add_4, primals_14, getitem_14, True, 0); add_7 = add_4 = primals_14 = getitem_14 = None
getitem_15 = rms_norm_backward[0]; rms_norm_backward = None
add_8 = torch.ops.aten.add.Tensor(tangents_1, getitem_15); tangents_1 = getitem_15 = None
mul_25 = torch.ops.aten.mul.Tensor(add_8, 4.0)
view_29 = torch.ops.aten.view.default(mul_25, [-1, 8192]); mul_25 = None
transpose_22 = torch.ops.aten.transpose.int(view_29, 0, 1); view_29 = None
view_30 = torch.ops.aten.view.default(_unsafe_view_10, [-1, 4]); _unsafe_view_10 = None
mm_18 = torch.ops.aten.mm.default(transpose_22, view_30); transpose_22 = view_30 = None
full_1 = torch.ops.aten.full.default([primals_20, primals_21, 4], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False)
view_31 = torch.ops.aten.view.default(full_1, [-1, 4]); full_1 = None
transpose_23 = torch.ops.aten.transpose.int(view_31, 0, 1); view_31 = None
view_32 = torch.ops.aten.view.default(getitem_11, [-1, 8192]); getitem_11 = None
mm_19 = torch.ops.aten.mm.default(transpose_23, view_32); view_32 = None
full_2 = torch.ops.aten.full.default([primals_20, primals_21, 1024], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False)
view_37 = torch.ops.aten.view.default(full_2, [-1, 1024]); full_2 = None
transpose_26 = torch.ops.aten.transpose.int(view_37, 0, 1); view_37 = None
view_38 = torch.ops.aten.view.default(_unsafe_view_7, [-1, 4]); _unsafe_view_7 = None
mm_21 = torch.ops.aten.mm.default(transpose_26, view_38); view_38 = None
view_40 = torch.ops.aten.view.default(getitem_6, [-1, 8192]); getitem_6 = None
mm_22 = torch.ops.aten.mm.default(transpose_23, view_40); view_40 = None
view_42 = torch.ops.aten.view.default(_unsafe_view_4, [-1, 4]); _unsafe_view_4 = None
mm_23 = torch.ops.aten.mm.default(transpose_26, view_42); transpose_26 = view_42 = None
view_44 = torch.ops.aten.view.default(getitem_4, [-1, 8192]); getitem_4 = None
mm_24 = torch.ops.aten.mm.default(transpose_23, view_44); view_44 = None
full_6 = torch.ops.aten.full.default([primals_20, primals_21, 8192], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False); primals_20 = primals_21 = None
view_45 = torch.ops.aten.view.default(full_6, [-1, 8192])
transpose_30 = torch.ops.aten.transpose.int(view_45, 0, 1); view_45 = None
view_46 = torch.ops.aten.view.default(_unsafe_view_1, [-1, 4]); _unsafe_view_1 = None
mm_25 = torch.ops.aten.mm.default(transpose_30, view_46); transpose_30 = view_46 = None
view_48 = torch.ops.aten.view.default(getitem_2, [-1, 8192]); getitem_2 = None
mm_26 = torch.ops.aten.mm.default(transpose_23, view_48); transpose_23 = view_48 = None
rms_norm_backward_1 = torch.ops.hpu.rms_norm_backward.default(full_6, primals_22, primals_1, getitem_1, True, 0); full_6 = primals_22 = primals_1 = getitem_1 = None
getitem_17 = rms_norm_backward_1[0]; rms_norm_backward_1 = None
add_9 = torch.ops.aten.add.Tensor(add_8, getitem_17); add_8 = getitem_17 = None
return [add_5, None, None, mm_26, mm_25, None, mm_24, mm_23, None, mm_22, mm_21, None, mm_19, mm_18, None, None, None, None, None, None, None, None, add_9, None, None]
# To see more debug info, please use `graph_module.print_readable()`
fw_module GraphModule()
def forward(self, primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7, primals_8, primals_9, primals_10, primals_11, primals_12, primals_13, primals_14, primals_15, primals_16, primals_17, primals_18, primals_19, primals_20, primals_21, primals_22, primals_23, primals_24):
rms_norm = torch.ops.hpu.rms_norm.default(primals_22, primals_1, 1e-05)
getitem = rms_norm[0]; rms_norm = None
transpose = torch.ops.aten.transpose.int(primals_2, 0, 1); primals_2 = None
mul = primals_20 * primals_21
view = torch.ops.aten.view.default(getitem, [mul, 8192])
mm = torch.ops.aten.mm.default(view, transpose); transpose = None
_unsafe_view = torch.ops.aten._unsafe_view.default(mm, [primals_20, primals_21, 8192]); mm = None
native_dropout = torch.ops.aten.native_dropout.default(getitem, 0.05, True)
getitem_2 = native_dropout[0]; native_dropout = None
transpose_1 = torch.ops.aten.transpose.int(primals_3, 0, 1); primals_3 = None
view_1 = torch.ops.aten.view.default(getitem_2, [mul, 8192]); getitem_2 = None
mm_1 = torch.ops.aten.mm.default(view_1, transpose_1); view_1 = None
_unsafe_view_1 = torch.ops.aten._unsafe_view.default(mm_1, [primals_20, primals_21, 4]); mm_1 = None
transpose_2 = torch.ops.aten.transpose.int(primals_4, 0, 1); primals_4 = None
view_2 = torch.ops.aten.view.default(_unsafe_view_1, [mul, 4]); _unsafe_view_1 = None
mm_2 = torch.ops.aten.mm.default(view_2, transpose_2); view_2 = transpose_2 = None
_unsafe_view_2 = torch.ops.aten._unsafe_view.default(mm_2, [primals_20, primals_21, 8192]); mm_2 = None
mul_3 = torch.ops.aten.mul.Tensor(_unsafe_view_2, 4.0); _unsafe_view_2 = None
add = torch.ops.aten.add.Tensor(_unsafe_view, mul_3); _unsafe_view = mul_3 = None
transpose_3 = torch.ops.aten.transpose.int(primals_5, 0, 1); primals_5 = None
mm_3 = torch.ops.aten.mm.default(view, transpose_3); transpose_3 = None
_unsafe_view_3 = torch.ops.aten._unsafe_view.default(mm_3, [primals_20, primals_21, 1024]); mm_3 = None
native_dropout_1 = torch.ops.aten.native_dropout.default(getitem, 0.05, True)
getitem_4 = native_dropout_1[0]; native_dropout_1 = None
transpose_4 = torch.ops.aten.transpose.int(primals_6, 0, 1); primals_6 = None
view_4 = torch.ops.aten.view.default(getitem_4, [mul, 8192]); getitem_4 = None
mm_4 = torch.ops.aten.mm.default(view_4, transpose_4); view_4 = None
_unsafe_view_4 = torch.ops.aten._unsafe_view.default(mm_4, [primals_20, primals_21, 4]); mm_4 = None
transpose_5 = torch.ops.aten.transpose.int(primals_7, 0, 1); primals_7 = None
view_5 = torch.ops.aten.view.default(_unsafe_view_4, [mul, 4]); _unsafe_view_4 = None
mm_5 = torch.ops.aten.mm.default(view_5, transpose_5); view_5 = transpose_5 = None
_unsafe_view_5 = torch.ops.aten._unsafe_view.default(mm_5, [primals_20, primals_21, 1024]); mm_5 = None
mul_7 = torch.ops.aten.mul.Tensor(_unsafe_view_5, 4.0); _unsafe_view_5 = None
add_1 = torch.ops.aten.add.Tensor(_unsafe_view_3, mul_7); _unsafe_view_3 = mul_7 = None
transpose_6 = torch.ops.aten.transpose.int(primals_8, 0, 1); primals_8 = None
mm_6 = torch.ops.aten.mm.default(view, transpose_6); view = transpose_6 = None
_unsafe_view_6 = torch.ops.aten._unsafe_view.default(mm_6, [primals_20, primals_21, 1024]); mm_6 = None
native_dropout_2 = torch.ops.aten.native_dropout.default(getitem, 0.05, True); getitem = None
getitem_6 = native_dropout_2[0]; native_dropout_2 = None
transpose_7 = torch.ops.aten.transpose.int(primals_9, 0, 1); primals_9 = None
view_7 = torch.ops.aten.view.default(getitem_6, [mul, 8192]); getitem_6 = None
mm_7 = torch.ops.aten.mm.default(view_7, transpose_7); view_7 = None
_unsafe_view_7 = torch.ops.aten._unsafe_view.default(mm_7, [primals_20, primals_21, 4]); mm_7 = None
transpose_8 = torch.ops.aten.transpose.int(primals_10, 0, 1); primals_10 = None
view_8 = torch.ops.aten.view.default(_unsafe_view_7, [mul, 4]); _unsafe_view_7 = None
mm_8 = torch.ops.aten.mm.default(view_8, transpose_8); view_8 = transpose_8 = None
_unsafe_view_8 = torch.ops.aten._unsafe_view.default(mm_8, [primals_20, primals_21, 1024]); mm_8 = None
mul_11 = torch.ops.aten.mul.Tensor(_unsafe_view_8, 4.0); _unsafe_view_8 = None
add_2 = torch.ops.aten.add.Tensor(_unsafe_view_6, mul_11); _unsafe_view_6 = mul_11 = None
view_9 = torch.ops.aten.view.default(add, [primals_20, primals_21, 64, 128]); add = None
transpose_9 = torch.ops.aten.transpose.int(view_9, 1, 2); view_9 = None
view_10 = torch.ops.aten.view.default(add_1, [primals_20, primals_21, -1, 128]); add_1 = None
transpose_10 = torch.ops.aten.transpose.int(view_10, 1, 2); view_10 = None
view_11 = torch.ops.aten.view.default(add_2, [primals_20, primals_21, -1, 128]); add_2 = None
transpose_11 = torch.ops.aten.transpose.int(view_11, 1, 2); view_11 = None
slice_1 = torch.ops.aten.slice.Tensor(primals_18, 0, 0, primals_21); primals_18 = None
slice_2 = torch.ops.aten.slice.Tensor(primals_19, 0, 0, primals_21); primals_19 = None
unsqueeze = torch.ops.aten.unsqueeze.default(slice_1, 0); slice_1 = None
unsqueeze_1 = torch.ops.aten.unsqueeze.default(unsqueeze, 0); unsqueeze = None
unsqueeze_2 = torch.ops.aten.unsqueeze.default(slice_2, 0); slice_2 = None
unsqueeze_3 = torch.ops.aten.unsqueeze.default(unsqueeze_2, 0); unsqueeze_2 = None
rotary_pos_embedding = torch.ops.hpu.rotary_pos_embedding.default(transpose_9, unsqueeze_3, unsqueeze_1, primals_24, 0, 0); transpose_9 = None
rotary_pos_embedding_1 = torch.ops.hpu.rotary_pos_embedding.default(transpose_10, unsqueeze_3, unsqueeze_1, primals_24, 0, 0); transpose_10 = unsqueeze_3 = unsqueeze_1 = primals_24 = None
view_12 = torch.ops.aten.view.default(rotary_pos_embedding, [primals_20, 8, 8, primals_21, 128]); rotary_pos_embedding = None
view_13 = torch.ops.aten.view.default(rotary_pos_embedding_1, [primals_20, 8, 1, primals_21, 128]); rotary_pos_embedding_1 = None
view_14 = torch.ops.aten.view.default(transpose_11, [primals_20, 8, 1, primals_21, 128]); transpose_11 = None
sdpa_fwd_non_dropout = torch.ops.hpu.sdpa_fwd_non_dropout.default(view_12, view_13, view_14, None, 0.0, 0.08838834764831843, True, 'none', None, 'left'); view_12 = view_13 = view_14 = None
getitem_8 = sdpa_fwd_non_dropout[0]; sdpa_fwd_non_dropout = None
view_15 = torch.ops.aten.view.default(getitem_8, [primals_20, 64, primals_21, 128]); getitem_8 = None
transpose_12 = torch.ops.aten.transpose.int(view_15, 1, 2); view_15 = None
clone_4 = torch.ops.aten.clone.default(transpose_12, memory_format = torch.contiguous_format); transpose_12 = None
view_16 = torch.ops.aten.view.default(clone_4, [primals_20, primals_21, -1]); clone_4 = None
transpose_13 = torch.ops.aten.transpose.int(primals_11, 0, 1); primals_11 = None
view_17 = torch.ops.aten.view.default(view_16, [mul, 8192])
mm_9 = torch.ops.aten.mm.default(view_17, transpose_13); view_17 = transpose_13 = None
_unsafe_view_9 = torch.ops.aten._unsafe_view.default(mm_9, [primals_20, primals_21, 8192]); mm_9 = None
native_dropout_3 = torch.ops.aten.native_dropout.default(view_16, 0.05, True); view_16 = None
getitem_11 = native_dropout_3[0]; native_dropout_3 = None
transpose_14 = torch.ops.aten.transpose.int(primals_12, 0, 1); primals_12 = None
view_18 = torch.ops.aten.view.default(getitem_11, [mul, 8192])
mm_10 = torch.ops.aten.mm.default(view_18, transpose_14); view_18 = transpose_14 = None
_unsafe_view_10 = torch.ops.aten._unsafe_view.default(mm_10, [primals_20, primals_21, 4]); mm_10 = None
transpose_15 = torch.ops.aten.transpose.int(primals_13, 0, 1); primals_13 = None
view_19 = torch.ops.aten.view.default(_unsafe_view_10, [mul, 4])
mm_11 = torch.ops.aten.mm.default(view_19, transpose_15); view_19 = transpose_15 = None
_unsafe_view_11 = torch.ops.aten._unsafe_view.default(mm_11, [primals_20, primals_21, 8192]); mm_11 = None
mul_15 = torch.ops.aten.mul.Tensor(_unsafe_view_11, 4.0); _unsafe_view_11 = None
add_3 = torch.ops.aten.add.Tensor(_unsafe_view_9, mul_15); _unsafe_view_9 = mul_15 = None
add_4 = torch.ops.aten.add.Tensor(add_3, primals_22); add_3 = None
rms_norm_1 = torch.ops.hpu.rms_norm.default(add_4, primals_14, 1e-05)
getitem_13 = rms_norm_1[0]; rms_norm_1 = None
transpose_16 = torch.ops.aten.transpose.int(primals_15, 0, 1); primals_15 = None
view_20 = torch.ops.aten.view.default(getitem_13, [mul, 8192]); getitem_13 = None
mm_12 = torch.ops.aten.mm.default(view_20, transpose_16)
_unsafe_view_12 = torch.ops.aten._unsafe_view.default(mm_12, [primals_20, primals_21, 28672]); mm_12 = None
silu = torch.ops.aten.silu.default(_unsafe_view_12); _unsafe_view_12 = None
transpose_17 = torch.ops.aten.transpose.int(primals_16, 0, 1); primals_16 = None
mm_13 = torch.ops.aten.mm.default(view_20, transpose_17); view_20 = None
_unsafe_view_13 = torch.ops.aten._unsafe_view.default(mm_13, [primals_20, primals_21, 28672]); mm_13 = None
mul_18 = torch.ops.aten.mul.Tensor(silu, _unsafe_view_13); silu = _unsafe_view_13 = None
transpose_18 = torch.ops.aten.transpose.int(primals_17, 0, 1); primals_17 = None
view_22 = torch.ops.aten.view.default(mul_18, [mul, 28672]); mul_18 = mul = None
mm_14 = torch.ops.aten.mm.default(view_22, transpose_18); view_22 = None
_unsafe_view_14 = torch.ops.aten._unsafe_view.default(mm_14, [primals_20, primals_21, 8192]); mm_14 = None
add_5 = torch.ops.aten.add.Tensor(_unsafe_view_14, add_4); _unsafe_view_14 = None
transpose_19 = torch.ops.aten.transpose.int(transpose_18, 0, 1); transpose_18 = None
view_30 = torch.ops.aten.view.default(_unsafe_view_10, [-1, 4]); _unsafe_view_10 = None
view_32 = torch.ops.aten.view.default(getitem_11, [-1, 8192]); getitem_11 = None
return [add_5, primals_1, primals_14, primals_22, transpose_1, transpose_4, transpose_7, add_4, transpose_16, transpose_17, transpose_19, view_30, view_32, primals_20, primals_21]
# To see more debug info, please use `graph_module.print_readable()`
bw_module GraphModule()
def forward(self, primals_20, primals_21, primals_1, primals_14, primals_22, transpose_1, transpose_4, transpose_7, add_4, transpose_16, transpose_17, transpose_19, view_30, view_32, tangents_1):
rms_norm = torch.ops.hpu.rms_norm.default(primals_22, primals_1, 1e-05)
getitem = rms_norm[0]
getitem_1 = rms_norm[1]; rms_norm = None
mul = primals_20 * primals_21
native_dropout = torch.ops.aten.native_dropout.default(getitem, 0.05, True)
getitem_2 = native_dropout[0]; native_dropout = None
view_1 = torch.ops.aten.view.default(getitem_2, [mul, 8192])
mm_1 = torch.ops.aten.mm.default(view_1, transpose_1); view_1 = transpose_1 = None
_unsafe_view_1 = torch.ops.aten._unsafe_view.default(mm_1, [primals_20, primals_21, 4]); mm_1 = None
native_dropout_1 = torch.ops.aten.native_dropout.default(getitem, 0.05, True)
getitem_4 = native_dropout_1[0]; native_dropout_1 = None
view_4 = torch.ops.aten.view.default(getitem_4, [mul, 8192])
mm_4 = torch.ops.aten.mm.default(view_4, transpose_4); view_4 = transpose_4 = None
_unsafe_view_4 = torch.ops.aten._unsafe_view.default(mm_4, [primals_20, primals_21, 4]); mm_4 = None
native_dropout_2 = torch.ops.aten.native_dropout.default(getitem, 0.05, True); getitem = None
getitem_6 = native_dropout_2[0]; native_dropout_2 = None
view_7 = torch.ops.aten.view.default(getitem_6, [mul, 8192])
mm_7 = torch.ops.aten.mm.default(view_7, transpose_7); view_7 = transpose_7 = None
_unsafe_view_7 = torch.ops.aten._unsafe_view.default(mm_7, [primals_20, primals_21, 4]); mm_7 = None
rms_norm_1 = torch.ops.hpu.rms_norm.default(add_4, primals_14, 1e-05)
getitem_13 = rms_norm_1[0]
getitem_14 = rms_norm_1[1]; rms_norm_1 = None
view_20 = torch.ops.aten.view.default(getitem_13, [mul, 8192]); getitem_13 = None
mm_12 = torch.ops.aten.mm.default(view_20, transpose_16)
_unsafe_view_12 = torch.ops.aten._unsafe_view.default(mm_12, [primals_20, primals_21, 28672]); mm_12 = None
silu = torch.ops.aten.silu.default(_unsafe_view_12)
mm_13 = torch.ops.aten.mm.default(view_20, transpose_17); view_20 = None
_unsafe_view_13 = torch.ops.aten._unsafe_view.default(mm_13, [primals_20, primals_21, 28672]); mm_13 = None
view_23 = torch.ops.aten.view.default(tangents_1, [mul, 8192])
mm_15 = torch.ops.aten.mm.default(view_23, transpose_19); view_23 = transpose_19 = None
view_24 = torch.ops.aten.view.default(mm_15, [primals_20, primals_21, 28672]); mm_15 = None
mul_20 = torch.ops.aten.mul.Tensor(view_24, silu); silu = None
mul_21 = torch.ops.aten.mul.Tensor(view_24, _unsafe_view_13); view_24 = _unsafe_view_13 = None
view_25 = torch.ops.aten.view.default(mul_20, [mul, 28672]); mul_20 = None
transpose_20 = torch.ops.aten.transpose.int(transpose_17, 0, 1); transpose_17 = None
mm_16 = torch.ops.aten.mm.default(view_25, transpose_20); view_25 = transpose_20 = None
view_26 = torch.ops.aten.view.default(mm_16, [primals_20, primals_21, 8192]); mm_16 = None
sigmoid = torch.ops.aten.sigmoid.default(_unsafe_view_12)
full = torch.ops.aten.full.default([primals_20, primals_21, 28672], 1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False)
sub = torch.ops.aten.sub.Tensor(full, sigmoid); full = None
mul_22 = torch.ops.aten.mul.Tensor(_unsafe_view_12, sub); _unsafe_view_12 = sub = None
add_6 = torch.ops.aten.add.Scalar(mul_22, 1); mul_22 = None
mul_23 = torch.ops.aten.mul.Tensor(sigmoid, add_6); sigmoid = add_6 = None
mul_24 = torch.ops.aten.mul.Tensor(mul_21, mul_23); mul_21 = mul_23 = None
view_27 = torch.ops.aten.view.default(mul_24, [mul, 28672]); mul_24 = mul = None
transpose_21 = torch.ops.aten.transpose.int(transpose_16, 0, 1); transpose_16 = None
mm_17 = torch.ops.aten.mm.default(view_27, transpose_21); view_27 = transpose_21 = None
view_28 = torch.ops.aten.view.default(mm_17, [primals_20, primals_21, 8192]); mm_17 = None
add_7 = torch.ops.aten.add.Tensor(view_26, view_28); view_26 = view_28 = None
rms_norm_backward = torch.ops.hpu.rms_norm_backward.default(add_7, add_4, primals_14, getitem_14, True, 0); add_7 = add_4 = primals_14 = getitem_14 = None
getitem_15 = rms_norm_backward[0]; rms_norm_backward = None
add_8 = torch.ops.aten.add.Tensor(tangents_1, getitem_15); tangents_1 = getitem_15 = None
mul_25 = torch.ops.aten.mul.Tensor(add_8, 4.0)
view_29 = torch.ops.aten.view.default(mul_25, [-1, 8192]); mul_25 = None
transpose_22 = torch.ops.aten.transpose.int(view_29, 0, 1); view_29 = None
mm_18 = torch.ops.aten.mm.default(transpose_22, view_30); transpose_22 = view_30 = None
full_1 = torch.ops.aten.full.default([primals_20, primals_21, 4], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False)
view_31 = torch.ops.aten.view.default(full_1, [-1, 4]); full_1 = None
transpose_23 = torch.ops.aten.transpose.int(view_31, 0, 1); view_31 = None
mm_19 = torch.ops.aten.mm.default(transpose_23, view_32); view_32 = None
full_2 = torch.ops.aten.full.default([primals_20, primals_21, 1024], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False)
view_37 = torch.ops.aten.view.default(full_2, [-1, 1024]); full_2 = None
transpose_26 = torch.ops.aten.transpose.int(view_37, 0, 1); view_37 = None
view_38 = torch.ops.aten.view.default(_unsafe_view_7, [-1, 4]); _unsafe_view_7 = None
mm_21 = torch.ops.aten.mm.default(transpose_26, view_38); view_38 = None
view_40 = torch.ops.aten.view.default(getitem_6, [-1, 8192]); getitem_6 = None
mm_22 = torch.ops.aten.mm.default(transpose_23, view_40); view_40 = None
view_42 = torch.ops.aten.view.default(_unsafe_view_4, [-1, 4]); _unsafe_view_4 = None
mm_23 = torch.ops.aten.mm.default(transpose_26, view_42); transpose_26 = view_42 = None
view_44 = torch.ops.aten.view.default(getitem_4, [-1, 8192]); getitem_4 = None
mm_24 = torch.ops.aten.mm.default(transpose_23, view_44); view_44 = None
full_6 = torch.ops.aten.full.default([primals_20, primals_21, 8192], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='hpu', index=0), pin_memory = False); primals_20 = primals_21 = None
view_45 = torch.ops.aten.view.default(full_6, [-1, 8192])
transpose_30 = torch.ops.aten.transpose.int(view_45, 0, 1); view_45 = None
view_46 = torch.ops.aten.view.default(_unsafe_view_1, [-1, 4]); _unsafe_view_1 = None
mm_25 = torch.ops.aten.mm.default(transpose_30, view_46); transpose_30 = view_46 = None
view_48 = torch.ops.aten.view.default(getitem_2, [-1, 8192]); getitem_2 = None
mm_26 = torch.ops.aten.mm.default(transpose_23, view_48); transpose_23 = view_48 = None
rms_norm_backward_1 = torch.ops.hpu.rms_norm_backward.default(full_6, primals_22, primals_1, getitem_1, True, 0); full_6 = primals_22 = primals_1 = getitem_1 = None
getitem_17 = rms_norm_backward_1[0]; rms_norm_backward_1 = None
add_9 = torch.ops.aten.add.Tensor(add_8, getitem_17); add_8 = getitem_17 = None
return [None, None, mm_26, mm_25, None, mm_24, mm_23, None, mm_22, mm_21, None, mm_19, mm_18, None, None, None, None, None, None, None, None, add_9, None, None]
# To see more debug info, please use `graph_module.print_readable()`
fw_graph_rng_ops {'native_dropout': native_dropout, 'native_dropout_1': native_dropout_1, 'native_dropout_2': native_dropout_2, 'native_dropout_3': native_dropout_3}
bw_graph_rng_ops {'native_dropout': native_dropout, 'native_dropout_1': native_dropout_1, 'native_dropout_2': native_dropout_2}
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0a0+git95a8420
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-112-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.8 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 100 MiB (80 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39,80-119
NUMA node1 CPU(s): 40-79,120-159
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] habana-torch-dataloader==xxx
[pip3] habana-torch-plugin==xxx
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.0a0+git95a8420
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.4.0a0+69d4077
[pip3] torchdata==0.7.1+5e6f7b7
[pip3] torchmetrics==1.4.1
[pip3] torchtext==0.18.0a0+9bed85d
[pip3] torchvision==0.19.0a0+48b1edf
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,512,741,733 | transformers | The _crop_past_key_values function should be a member function of Cache. | ### Feature request
The _crop_past_key_values function should be a member function of Cache.
### Motivation
I suppose all models will use the Cache class instead of a tuple to store past_key_values. It would make more sense to make [_crop_past_key_values](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/candidate_generator.py#L375) a member function of Cache.
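For illustration, a minimal sketch of what such a member function could look like. Everything here is hypothetical — `MinimalCache`, the method names, and the use of plain lists (standing in for the per-layer key/value tensors, to keep the sketch dependency-free) are illustrative assumptions, not the actual transformers API:

```python
class MinimalCache:
    """Toy stand-in for transformers' Cache -- hypothetical, for illustration only.

    Plain lists stand in for the per-layer key/value tensors so the sketch
    stays dependency-free; the real cache stores torch.Tensor objects.
    """

    def __init__(self):
        self.key_cache = []    # one entry per layer: cached key states
        self.value_cache = []  # one entry per layer: cached value states

    def update(self, key_states, value_states, layer_idx):
        # Append new key/value states along the sequence dimension.
        if len(self.key_cache) <= layer_idx:
            self.key_cache.append(list(key_states))
            self.value_cache.append(list(value_states))
        else:
            self.key_cache[layer_idx].extend(key_states)
            self.value_cache[layer_idx].extend(value_states)

    def crop(self, max_length):
        # The proposed member function: drop cached entries past max_length,
        # as assisted decoding does when candidate tokens are rejected.
        for layer_idx in range(len(self.key_cache)):
            self.key_cache[layer_idx] = self.key_cache[layer_idx][:max_length]
            self.value_cache[layer_idx] = self.value_cache[layer_idx][:max_length]


cache = MinimalCache()
cache.update(["k0", "k1", "k2", "k3"], ["v0", "v1", "v2", "v3"], layer_idx=0)
cache.crop(max_length=2)
print(len(cache.key_cache[0]))  # 2
```

Moving the cropping logic onto the cache object itself would let each Cache subclass own its layout details, instead of `candidate_generator.py` branching on the cache type.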
### Your contribution
cc @gante @echarlaix | Feature request,Cache | low | Minor |
2,512,750,246 | react-native | Using absolute positioning, when the parent element has padding, the width height of the current element is incorrectly laid out using a percentage layout | ### Description
```jsx
/**
* Sample React Native App
* https://github.com/facebook/react-native
*
* @format
*/
import React from 'react';
import {
SafeAreaView,
ScrollView,
StatusBar,
useColorScheme,
View,
} from 'react-native';
function App(): React.JSX.Element {
const isDarkMode = useColorScheme() === 'dark';
const backgroundStyle = {
backgroundColor: '#eee',
};
return (
<SafeAreaView style={backgroundStyle}>
<StatusBar
barStyle={isDarkMode ? 'light-content' : 'dark-content'}
backgroundColor={backgroundStyle.backgroundColor}
/>
<ScrollView>
<View
style={{
backgroundColor: '#000',
height: 300,
paddingTop: 200,
paddingLeft: 200
}}>
<View style={{
position: 'absolute',
            // width/height: '100%' excludes the parent element's padding
width: '100%', height: '100%', left: 0, top: 0,
backgroundColor: '#666'
}} />
</View>
</ScrollView>
</SafeAreaView>
);
}
export default App;
```

In the new architecture, I set the height to 100%, but the element's height does not equal the parent container's height; instead, the parent container's padding is subtracted from it.
### Steps to reproduce
1
### React Native Version
0.75.2
### Affected Platforms
Runtime - Android, Runtime - iOS
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (16) x64 Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz
Memory: 35.26 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.20.3
path: /private/var/folders/rf/gd60z12164z72bny7z_0d4f80000gn/T/xfs-de932bea/node
Yarn:
version: 3.6.4
path: /private/var/folders/rf/gd60z12164z72bny7z_0d4f80000gn/T/xfs-de932bea/yarn
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v18.20.3/bin/npm
Watchman:
version: 2024.07.15.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.1
wanted: 0.75.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
empty
```
### Reproducer
https://github.com/ShaoGongBra/rn-test
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro | low | Major |
2,512,759,296 | node | Please implement WebRTC fully in node. | ### What is the problem this feature will solve?
There are a hundred (mostly outdated) libraries for webrtc, but NONE is 100% compatible with webkit based browsers.
There is always something missing. to the point that some of them import a full headless browser.
IMHO, webrtc in node should be exactly as it is in browsers.
### What is the feature you are proposing to solve the problem?
Consistency.
### What alternatives have you considered?
All possible ones. | feature request,web-standards | low | Major |