id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,621,773,554 | ui | [bug]: SidebarMenuSubItem Sidebar menu action button distorts other components | ### Describe the bug
I am trying to add context buttons to SidebarMenuSubItem but they distort other components.
<img width="253" alt="Bildschirmfoto 2024-10-29 um 17 26 13" src="https://github.com/user-attachments/assets/ba8c6f2e-7e1c-4f40-92e2-d0302453dc6c">
### Affected component/components
Sidebar
### How to reproduce
Add this to your sidebar:
```tsx
<CollapsibleContent>
  <SidebarMenuSub>
    <SidebarMenuSubItem key={subIndex}>
      <SidebarMenuSubButton asChild>
        <a href={subItem.url}>
          <span>{subItem.title}</span>
        </a>
      </SidebarMenuSubButton>
      <DropdownMenu>
        <DropdownMenuTrigger asChild>
          <SidebarMenuAction>
            <MoreHorizontal />
          </SidebarMenuAction>
        </DropdownMenuTrigger>
        <DropdownMenuContent side="right" align="start">
          <DropdownMenuItem>
            <span>Edit Project</span>
          </DropdownMenuItem>
          <DropdownMenuItem>
            <span>Delete Project</span>
          </DropdownMenuItem>
        </DropdownMenuContent>
      </DropdownMenu>
    </SidebarMenuSubItem>
  </SidebarMenuSub>
</CollapsibleContent>
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS, ARC, React & NextJS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | enhancement,component: sidebar | low | Critical |
2,621,774,154 | Python | Add Radial Basis Function Neural Network (RBFNN) | ### Feature description
Radial Basis Function Neural Networks (RBFNNs) are a type of neural network that combines elements of clustering and function approximation, making them powerful for both regression and classification tasks. RBFNNs can efficiently model non-linear relationships with fewer parameters than traditional multilayer perceptrons, thanks to their unique architecture that uses radial basis functions (typically Gaussian) in the hidden layer.
This structure allows RBFNNs to approximate functions and decision boundaries with high accuracy while maintaining a relatively simple network structure.
Goals:
- Implement an RBFNN class with train and predict functionality.
- Use Gaussian radial basis functions as hidden-layer activations.
- Include KMeans clustering for initializing RBF centers and least-squares fitting for output weights.

Requirements:
- Initialization: the `RBFNN` class should initialize with `num_centers` and `gamma` for the Gaussian spread.
- Training: add a `train(x_data, y_data)` method to:
  - Find RBF centers using KMeans.
  - Compute RBF activations for each input.
  - Calculate output weights through least-squares fitting.
- Prediction: implement a `predict(x)` method that calculates and returns predictions based on the learned weights. | enhancement | medium | Minor |
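The requirements above can be sketched in plain NumPy. This is only a minimal illustration of the requested interface, not a proposed final implementation: the tiny `kmeans` helper stands in for an off-the-shelf KMeans (e.g. scikit-learn's), and all names beyond `num_centers`, `gamma`, `train`, and `predict` are assumptions.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; a stand-in for a library KMeans."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the means.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = x[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

class RBFNN:
    """Gaussian-RBF network: KMeans centers + least-squares output weights."""

    def __init__(self, num_centers, gamma):
        self.num_centers = num_centers
        self.gamma = gamma

    def _activations(self, x):
        # Gaussian RBF: exp(-gamma * ||x - c||^2) for every (sample, center) pair.
        d2 = ((x[:, None] - self.centers[None]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def train(self, x_data, y_data):
        self.centers = kmeans(x_data, self.num_centers)
        phi = self._activations(x_data)
        # Least-squares fit of the output weights.
        self.weights, *_ = np.linalg.lstsq(phi, y_data, rcond=None)

    def predict(self, x):
        return self._activations(x) @ self.weights
```

Fitting, say, `y = sin(3x)` on 50 samples with 10 centers and `gamma=10.0` recovers the curve closely, since the least-squares step directly minimizes training MSE over the RBF features.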
2,621,779,246 | flutter | Breaking: DiagnosticsNode.toStringDeep added wrapWidth parameter | PR https://github.com/flutter/flutter/pull/154752 adds a new parameter to the `toStringDeep` method in [3.27.0-0.0.pre](https://github.com/flutter/flutter/releases/tag/3.27.0-0.0.pre)
```diff
  String toStringDeep({
    String prefixLineOne = '',
    String? prefixOtherLines,
    DiagnosticLevel minLevel = DiagnosticLevel.debug,
+   int wrapWidth = 65,
  }) {
```
This breaks existing widgets that override `toStringDeep`, like the following, which is valid on the current stable `Flutter 3.24.4`:
```dart
class BetterContainer extends Container {
  @override
  String toStringDeep({
    String prefixLineOne = '',
    String? prefixOtherLines,
    DiagnosticLevel minLevel = DiagnosticLevel.debug,
  }) {
    return "Custom toStringDeep Representation";
  }
}
```
```
'BetterContainer.toStringDeep' ('String Function({DiagnosticLevel minLevel, String prefixLineOne, String? prefixOtherLines})') isn't a valid override of 'DiagnosticableTree.toStringDeep' ('String Function({DiagnosticLevel minLevel, String prefixLineOne, String? prefixOtherLines, int wrapWidth})'). (Documentation)
The member being overridden (diagnostics.dart:366).
```
Open source packages affected by this change:
- https://github.com/ethanblake4/flutter_eval/blob/master/lib/src/widgets/text.dart#L83
- https://github.com/passsy/spot/blob/main/lib/src/spot/text/any_text.dart#L227
Unfortunately, I couldn't come up with any workaround that would allow supporting both the new and the old version at the same time. | c: regression,framework,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27 | low | Critical |
2,621,780,878 | pytorch | First run lint on just the changes in the PR before running it over the entire PR | Goal: lint errors are usually introduced by the dev's own changes, so linting those changes first can give devs a red signal on their PR significantly faster.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
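A two-stage flow like the one proposed might look as follows. This is only a sketch, not PyTorch's actual CI code: the helper names are made up, and the `lintrunner` invocations (including `--all-files`) are assumptions about how the real linter would be driven.

```python
import subprocess

def changed_files(diff_output: str) -> list[str]:
    """Parse `git diff --name-only` output into a list of paths."""
    return [line.strip() for line in diff_output.splitlines() if line.strip()]

def lint_pr_first(base_ref: str = "origin/main") -> int:
    # Stage 1: lint only the files touched by the PR, for a fast red signal.
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    files = changed_files(diff)
    if files:
        quick = subprocess.run(["lintrunner", *files])
        if quick.returncode != 0:
            return quick.returncode  # fail early, skip the full pass
    # Stage 2: the full run over everything, as before.
    return subprocess.run(["lintrunner", "--all-files"]).returncode
```

The key property is that the common failure case (a lint error in the dev's own diff) is reported after linting a handful of files rather than the whole tree.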
2,621,787,500 | flutter | Cocoon cipd package build flakes on homebrew reinstallation permission issue | In PR https://github.com/flutter/cocoon/pull/3992 the `Mac_arm64 ruby` build that packages Ruby and related gems failed:
```
Error: Could not rename m4 keg! Check/fix its permissions:
sudo chown -R swarming /opt/homebrew/Cellar/m4/1.4.19
```
https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20ruby/35/overview
However, I ran it again and it passed: https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20ruby/36/overview
The passing one succeeded at the same place:
```
🍺 /opt/homebrew/Cellar/m4/1.4.19: 40 files, 889.2KB, built in 47 seconds
==> Running `brew cleanup m4`...
```
Both the passing and failing builds ran on the same bot flutter-devicelab-mac-38. | team-infra,P2,c: flake,triaged-infra | low | Critical |
2,621,873,404 | flutter | [go_router_builder] invalid enum query parameter bad state | ### What package does this bug report belong to?
go_router_builder
### What target platforms are you seeing this bug on?
Web
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_fe_analyzer_shared:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: f256b0c0ba6c7577c15e2e4e114755640a875e885099367bf6e012b19314c834
url: "https://pub.dev"
source: hosted
version: "72.0.0"
_macros:
dependency: transitive
description: dart
source: sdk
version: "0.3.2"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: b652861553cd3990d8ed361f7979dc6d7053a9ac8843fa73820ab68ce5410139
url: "https://pub.dev"
source: hosted
version: "6.7.0"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
build:
dependency: transitive
description:
name: build
sha256: "80184af8b6cb3e5c1c4ec6d8544d27711700bc3e6d2efad04238c7b5290889f0"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
build_config:
dependency: transitive
description:
name: build_config
sha256: bf80fcfb46a29945b423bd9aad884590fb1dc69b330a4d4700cac476af1708d1
url: "https://pub.dev"
source: hosted
version: "1.1.1"
build_daemon:
dependency: transitive
description:
name: build_daemon
sha256: "79b2aef6ac2ed00046867ed354c88778c9c0f029df8a20fe10b5436826721ef9"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
build_resolvers:
dependency: transitive
description:
name: build_resolvers
sha256: "339086358431fa15d7eca8b6a36e5d783728cf025e559b834f4609a1fcfb7b0a"
url: "https://pub.dev"
source: hosted
version: "2.4.2"
build_runner:
dependency: "direct dev"
description:
name: build_runner
sha256: "028819cfb90051c6b5440c7e574d1896f8037e3c96cf17aaeb054c9311cfbf4d"
url: "https://pub.dev"
source: hosted
version: "2.4.13"
build_runner_core:
dependency: transitive
description:
name: build_runner_core
sha256: f8126682b87a7282a339b871298cc12009cb67109cfa1614d6436fb0289193e0
url: "https://pub.dev"
source: hosted
version: "7.3.2"
built_collection:
dependency: transitive
description:
name: built_collection
sha256: "376e3dd27b51ea877c28d525560790aee2e6fbb5f20e2f85d5081027d94e2100"
url: "https://pub.dev"
source: hosted
version: "5.1.1"
built_value:
dependency: transitive
description:
name: built_value
sha256: c7913a9737ee4007efedaffc968c049fd0f3d0e49109e778edc10de9426005cb
url: "https://pub.dev"
source: hosted
version: "8.9.2"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
checked_yaml:
dependency: transitive
description:
name: checked_yaml
sha256: feb6bed21949061731a7a75fc5d2aa727cf160b91af9a3e464c5e3a32e28b5ff
url: "https://pub.dev"
source: hosted
version: "2.0.3"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
code_builder:
dependency: transitive
description:
name: code_builder
sha256: "0ec10bf4a89e4c613960bf1e8b42c64127021740fb21640c29c909826a5eea3e"
url: "https://pub.dev"
source: hosted
version: "4.10.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
convert:
dependency: transitive
description:
name: convert
sha256: b30acd5944035672bc15c6b7a8b47d773e41e2f17de064350988c5d02adb1c68
url: "https://pub.dev"
source: hosted
version: "3.1.2"
crypto:
dependency: transitive
description:
name: crypto
sha256: "1e445881f28f22d6140f181e07737b22f1e099a5e1ff94b0af2f9e4a463f4855"
url: "https://pub.dev"
source: hosted
version: "3.0.6"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
dart_style:
dependency: transitive
description:
name: dart_style
sha256: "7856d364b589d1f08986e140938578ed36ed948581fbc3bc9aef1805039ac5ab"
url: "https://pub.dev"
source: hosted
version: "2.3.7"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
file:
dependency: transitive
description:
name: file
sha256: a3b4f84adafef897088c160faf7dfffb7696046cb13ae90b508c2cbc95d3b8d4
url: "https://pub.dev"
source: hosted
version: "7.0.1"
fixnum:
dependency: transitive
description:
name: fixnum
sha256: b6dc7065e46c974bc7c5f143080a6764ec7a4be6da1285ececdc37be96de53be
url: "https://pub.dev"
source: hosted
version: "1.1.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
frontend_server_client:
dependency: transitive
description:
name: frontend_server_client
sha256: f64a0333a82f30b0cca061bc3d143813a486dc086b574bfb233b7c1372427694
url: "https://pub.dev"
source: hosted
version: "4.0.0"
glob:
dependency: transitive
description:
name: glob
sha256: "0e7014b3b7d4dac1ca4d6114f82bf1782ee86745b9b42a92c9289c23d8a0ab63"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
go_router:
dependency: "direct main"
description:
name: go_router
sha256: "6f1b756f6e863259a99135ff3c95026c3cdca17d10ebef2bba2261a25ddc8bbc"
url: "https://pub.dev"
source: hosted
version: "14.3.0"
go_router_builder:
dependency: "direct dev"
description:
name: go_router_builder
sha256: "3425b72dea69209754ac6b71b4da34165dcd4d4a2934713029945709a246427a"
url: "https://pub.dev"
source: hosted
version: "2.7.1"
graphs:
dependency: transitive
description:
name: graphs
sha256: "741bbf84165310a68ff28fe9e727332eef1407342fca52759cb21ad8177bb8d0"
url: "https://pub.dev"
source: hosted
version: "2.3.2"
http_multi_server:
dependency: transitive
description:
name: http_multi_server
sha256: "97486f20f9c2f7be8f514851703d0119c3596d14ea63227af6f7a481ef2b2f8b"
url: "https://pub.dev"
source: hosted
version: "3.2.1"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
io:
dependency: transitive
description:
name: io
sha256: "2ec25704aba361659e10e3e5f5d672068d332fc8ac516421d483a11e5cbd061e"
url: "https://pub.dev"
source: hosted
version: "1.0.4"
js:
dependency: transitive
description:
name: js
sha256: c1b2e9b5ea78c45e1a0788d29606ba27dc5f71f019f32ca5140f61ef071838cf
url: "https://pub.dev"
source: hosted
version: "0.7.1"
json_annotation:
dependency: transitive
description:
name: json_annotation
sha256: "1ce844379ca14835a50d2f019a3099f419082cfdd231cd86a142af94dd5c6bb1"
url: "https://pub.dev"
source: hosted
version: "4.9.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
logging:
dependency: transitive
description:
name: logging
sha256: c8245ada5f1717ed44271ed1c26b8ce85ca3228fd2ffdb75468ab01979309d61
url: "https://pub.dev"
source: hosted
version: "1.3.0"
macros:
dependency: transitive
description:
name: macros
sha256: "0acaed5d6b7eab89f63350bccd82119e6c602df0f391260d0e32b5e23db79536"
url: "https://pub.dev"
source: hosted
version: "0.1.2-main.4"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
mime:
dependency: transitive
description:
name: mime
sha256: "41a20518f0cb1256669420fdba0cd90d21561e560ac240f26ef8322e45bb7ed6"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
package_config:
dependency: transitive
description:
name: package_config
sha256: "1c5b77ccc91e4823a5af61ee74e6b972db1ef98c2ff5a18d3161c982a55448bd"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
pool:
dependency: transitive
description:
name: pool
sha256: "20fe868b6314b322ea036ba325e6fc0711a22948856475e2c2b6306e8ab39c2a"
url: "https://pub.dev"
source: hosted
version: "1.5.1"
pub_semver:
dependency: transitive
description:
name: pub_semver
sha256: "40d3ab1bbd474c4c2328c91e3a7df8c6dd629b79ece4c4bd04bee496a224fb0c"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
pubspec_parse:
dependency: transitive
description:
name: pubspec_parse
sha256: c799b721d79eb6ee6fa56f00c04b472dcd44a30d258fac2174a6ec57302678f8
url: "https://pub.dev"
source: hosted
version: "1.3.0"
shelf:
dependency: transitive
description:
name: shelf
sha256: ad29c505aee705f41a4d8963641f91ac4cee3c8fad5947e033390a7bd8180fa4
url: "https://pub.dev"
source: hosted
version: "1.4.1"
shelf_web_socket:
dependency: transitive
description:
name: shelf_web_socket
sha256: "073c147238594ecd0d193f3456a5fe91c4b0abbcc68bf5cd95b36c4e194ac611"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_gen:
dependency: transitive
description:
name: source_gen
sha256: "14658ba5f669685cd3d63701d01b31ea748310f7ab854e471962670abcf57832"
url: "https://pub.dev"
source: hosted
version: "1.5.0"
source_helper:
dependency: transitive
description:
name: source_helper
sha256: "6adebc0006c37dd63fe05bca0a929b99f06402fc95aa35bf36d67f5c06de01fd"
url: "https://pub.dev"
source: hosted
version: "1.3.4"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
timing:
dependency: transitive
description:
name: timing
sha256: "70a3b636575d4163c477e6de42f247a23b315ae20e86442bebe32d3cabf61c32"
url: "https://pub.dev"
source: hosted
version: "1.0.1"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
watcher:
dependency: transitive
description:
name: watcher
sha256: "3d2ad6751b3c16cf07c7fca317a1413b3f26530319181b37e3b9039b84fc01d8"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web_socket:
dependency: transitive
description:
name: web_socket
sha256: "3c12d96c0c9a4eec095246debcea7b86c0324f22df69893d538fcc6f1b8cce83"
url: "https://pub.dev"
source: hosted
version: "0.1.6"
web_socket_channel:
dependency: transitive
description:
name: web_socket_channel
sha256: "9f187088ed104edd8662ca07af4b124465893caf063ba29758f97af57e61da8f"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
yaml:
dependency: transitive
description:
name: yaml
sha256: "75769501ea3489fca56601ff33454fe45507ea3bfb014161abc3b43ae25989d5"
url: "https://pub.dev"
source: hosted
version: "3.1.2"
sdks:
dart: ">=3.5.3 <4.0.0"
flutter: ">=3.19.0"
```
</details>
### Steps to reproduce
1. Create a `GoRouteData` class with an enum parameter
2. Use `PathUrlStrategy`
3. Launch on web
4. Try to navigate by URL to the route with an invalid parameter value
### Expected results
Invalid enum parameters should be handled gracefully: an invalid path parameter should lead to a 404, and an invalid query parameter should be ignored.
### Actual results
When using an enum for a query parameter or a required path parameter, entering an invalid value causes a Bad State exception. This is usually tolerable on Android/iOS, or on web with the hash URL strategy, because no redirection happens, but doing the same on web with `PathUrlStrategy` breaks the entire app.
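The underlying difference is strict versus lenient enum lookup. Purely as an illustration (the real code is generated Dart using `singleWhere`, which throws when no element matches), the two behaviors in Python terms:

```python
from enum import Enum
from typing import Optional

class QueryParam(Enum):
    valid = "valid"

def strict_parse(name: str) -> QueryParam:
    # Mirrors the generated lookup: raises when no enum value matches.
    return next(p for p in QueryParam if p.name == name)

def lenient_parse(name: str) -> Optional[QueryParam]:
    # Graceful handling: unknown query values fall back to None (ignored).
    return next((p for p in QueryParam if p.name == name), None)
```

The expected behavior corresponds to the lenient variant for query parameters; the strict variant is what currently surfaces as the Bad State exception.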
### Code sample
[repo](https://github.com/BuyMyBeard/go_router_builder_enum_bug)
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:flutter_web_plugins/url_strategy.dart';

part 'main.g.dart';

void main() {
  usePathUrlStrategy();
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp.router(
      routerConfig: GoRouter(
        initialLocation: const MainRoute().location,
        routes: $appRoutes,
      ),
    );
  }
}

enum QueryParam {
  valid,
}

@TypedGoRoute<MainRoute>(path: '/home')
class MainRoute extends GoRouteData {
  const MainRoute({this.param});

  final QueryParam? param;

  @override
  Widget build(BuildContext context, GoRouterState state) {
    return Scaffold(
      body: Center(child: Text('Main page with param: ${param?.name}')),
    );
  }
}
```
</details>
### Screenshots or Videos
_No response_
### Logs
<details open><summary>Logs</summary>
```console
══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
The following StateError was thrown building DefaultSelectionStyle:
Bad state: No element
The relevant error-causing widget was:
MaterialApp MaterialApp:file:///W:/DevXpress/go_router_builder_enum_bug/lib/main.dart:17:24
When the exception was thrown, this was the stack:
dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 296:3 throw_
dart-sdk/lib/core/iterable.dart 775:9 singleWhere
packages/go_router_builder_enum_bug/main.g.dart 56:64 _extension$351._$36fromName
packages/go_router_builder_enum_bug/main.g.dart 55:23 <fn>
packages/go_router_builder_enum_bug/main.g.dart 51:42 _$36convertMapValue
packages/go_router_builder_enum_bug/main.g.dart 20:16 $36MainRouteExtension$124_fromState
packages/go_router/src/route_data.dart 102:53 factoryImpl
packages/go_router/src/route_data.dart 112:28 redirect
packages/go_router/src/configuration.dart 443:56 [_getRouteLevelRedirect]
packages/go_router/src/configuration.dart 400:13 processTopLevelRedirect
packages/go_router/src/configuration.dart 417:16 processRedirect
packages/go_router/src/configuration.dart 423:14 redirect
packages/go_router/src/parser.dart 164:10 [_redirect]
packages/go_router/src/parser.dart 101:7 parseRouteInformationWithDependencies
packages/flutter/src/widgets/router.dart 746:12 [_processRouteInformation]
packages/flutter/src/widgets/router.dart 616:7 restoreState
packages/flutter/src/widgets/restoration.dart 924:5 [_doRestore]
packages/flutter/src/widgets/restoration.dart 910:7 didChangeDependencies
packages/flutter/src/widgets/router.dart 693:11 didChangeDependencies
packages/flutter/src/widgets/framework.dart 5766:5 [_firstBuild]
packages/flutter/src/widgets/framework.dart 5593:5 mount
packages/flutter/src/widgets/framework.dart 4468:15 inflateWidget
packages/flutter/src/widgets/framework.dart 3963:18 updateChild
packages/flutter/src/widgets/framework.dart 5642:16 performRebuild
packages/flutter/src/widgets/framework.dart 5333:7 rebuild
packages/flutter/src/widgets/framework.dart 5599:5 [_firstBuild]
[...]
════════════════════════════════════════════════════════════════════════════════════════════════════
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4317], locale en-CA)
• Flutter version 3.24.3 on channel stable at W:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (7 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at W:\Android
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = W:\Android
• Java binary at: W:\Android\AndroidStudio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Android Studio (version 2023.1)
• Android Studio at W:\Android\AndroidStudio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
[√] VS Code (version 1.94.2)
• VS Code at C:\Users\alexa\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.98.0
[√] Connected device (3 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.70
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,package,has reproducible steps,P2,p: go_router,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Critical |
2,621,940,808 | pytorch | Major perf regression with `BatchNorm2d` + `torch.compile` with `reduce-overhead` + DDP | ### 🐛 Describe the bug
Since PyTorch 2.5.0, there is a massive (more than 10x) performance regression when using `BatchNorm2d` with `torch.compile` in `reduce-overhead` mode and `DistributedDataParallel`. The following warning is also printed multiple times: `skipping cudagraphs due to mutated inputs (27 instances)`. Sometimes, seemingly at random, there is also a crash at the very end.
Performance is good with PyTorch 2.4.1, without any cudagraph warnings or crashes. Replacing `BatchNorm2d` with `GroupNorm` fixes the issue and results in good performance with PyTorch 2.5.1 as well. Without `DistributedDataParallel` there are no issues with `BatchNorm2d` with any tested version of PyTorch. So it seems the issue happens only if `BatchNorm2d` and `DistributedDataParallel` are used together.
Here's a small reproducer (U-Net):
```python
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from tqdm import tqdm
from torch.nn.parallel import DistributedDataParallel as DDP

SHAPE = [16, 3, 256, 256]
TRAIN_STEPS = 100

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, padding='same'):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding, bias=False)
        #self.norm = nn.GroupNorm(1, out_channels) # no issues
        self.norm = nn.BatchNorm2d(out_channels) # skipping cudagraphs due to mutated inputs

    def forward(self, x):
        return self.norm(self.conv(x)) # relu removed for simplicity

def pool(x):
    return F.max_pool2d(x, 2, 2)

def upsample(x):
    return F.interpolate(x, scale_factor=2, mode='nearest')

def concat(a, b):
    return torch.cat((a, b), 1)

class UNet(nn.Module):
    def __init__(self, ic=3, oc=3):
        super(UNet, self).__init__()
        ec1, ec2, ec3, ec4, ec5, dc4, dc3, dc2, dc1a, dc1b = 32, 48, 64, 80, 96, 112, 96, 64, 64, 32
        self.enc_conv0 = ConvBlock(ic, ec1)
        self.enc_conv1 = ConvBlock(ec1, ec1)
        self.enc_conv2 = ConvBlock(ec1, ec2)
        self.enc_conv3 = ConvBlock(ec2, ec3)
        self.enc_conv4 = ConvBlock(ec3, ec4)
        self.enc_conv5a = ConvBlock(ec4, ec5)
        self.enc_conv5b = ConvBlock(ec5, ec5)
        self.dec_conv4a = ConvBlock(ec5+ec3, dc4)
        self.dec_conv4b = ConvBlock(dc4, dc4)
        self.dec_conv3a = ConvBlock(dc4+ec2, dc3)
        self.dec_conv3b = ConvBlock(dc3, dc3)
        self.dec_conv2a = ConvBlock(dc3+ec1, dc2)
        self.dec_conv2b = ConvBlock(dc2, dc2)
        self.dec_conv1a = ConvBlock(dc2+ic, dc1a)
        self.dec_conv1b = ConvBlock(dc1a, dc1b)
        self.dec_conv0 = ConvBlock(dc1b, oc)

    def forward(self, input):
        x = self.enc_conv0(input)
        x = self.enc_conv1(x)
        x = pool1 = pool(x)
        x = self.enc_conv2(x)
        x = pool2 = pool(x)
        x = self.enc_conv3(x)
        x = pool3 = pool(x)
        x = self.enc_conv4(x)
        x = pool(x)
        x = self.enc_conv5a(x)
        x = self.enc_conv5b(x)
        x = upsample(x)
        x = concat(x, pool3)
        x = self.dec_conv4a(x)
        x = self.dec_conv4b(x)
        x = upsample(x)
        x = concat(x, pool2)
        x = self.dec_conv3a(x)
        x = self.dec_conv3b(x)
        x = upsample(x)
        x = concat(x, pool1)
        x = self.dec_conv2a(x)
        x = self.dec_conv2b(x)
        x = upsample(x)
        x = concat(x, input)
        x = self.dec_conv1a(x)
        x = self.dec_conv1b(x)
        x = self.dec_conv0(x)
        return x

def demo_basic():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    print(f"Start on rank {rank}.")
    device_id = rank % torch.cuda.device_count()
    model = UNet().to(device_id)
    model = torch.compile(model, mode='reduce-overhead')
    ddp_model = DDP(model, device_ids=[device_id])
    loss_fn = nn.MSELoss()
    loss_fn = torch.compile(loss_fn, mode='reduce-overhead')
    optimizer = optim.Adam(ddp_model.parameters(), lr=0.001)
    for i in tqdm(range(TRAIN_STEPS), disable=(rank != 0)):
        torch.compiler.cudagraph_mark_step_begin()
        input = torch.randn(SHAPE, dtype=torch.float32, device=device_id)
        target = torch.randn(SHAPE, dtype=torch.float32, device=device_id)
        optimizer.zero_grad()
        output = ddp_model(input)
        loss_fn(output, target).backward()
        optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    demo_basic()
```
Example invocation: `torchrun --nnodes=1 --nproc_per_node=7 --rdzv_id=100 --rdzv_backend=c10d --rdzv_endpoint=localhost:29400 bn_cudagraph_ddp.py`
Output (there isn't always a crash at the end, otherwise the output is the same):
```
torchrun --nnodes=1 --nproc_per_node=7 --rdzv_id=100 --rdzv_backend=c10d --rdzv_endpoint=localhost:29400 bn_cudagraph_ddp.py
W1030 11:53:19.354000 287697 torch/distributed/run.py:793]
W1030 11:53:19.354000 287697 torch/distributed/run.py:793] *****************************************
W1030 11:53:19.354000 287697 torch/distributed/run.py:793] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1030 11:53:19.354000 287697 torch/distributed/run.py:793] *****************************************
Start on rank 4.
Start on rank 0.
Start on rank 3.
Start on rank 1.
Start on rank 5.
Start on rank 2.
Start on rank 6.
0%| | 0/100 [00:00<?, ?it/s]skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (21 instances)
skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (21 instances)
skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (21 instances)
skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (21 instances)
skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (21 instances)
skipping cudagraphs due to mutated inputs (27 instances)
skipping cudagraphs due to mutated inputs (21 instances)
skipping cudagraphs due to mutated inputs (21 instances)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:44<00:00, 2.26it/s]
[E1030 11:54:10.724317899 ProcessGroupNCCL.cpp:542] [Rank 1] Collective WorkNCCL(SeqNum=405, OpType=ALLREDUCE, NumelIn=608352, NumelOut=608352, Timeout(ms)=600000) raised the following async exception: NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2027 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f0f5f4b9446 in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) + 0x220 (0x7f0f15429f80 in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7c (0x7f0f1542a1cc in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::WorkNCCL::isCompleted() + 0x90 (0x7f0f1542a3e0 in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1da (0x7f0f15431b5a in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #5: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f0f1543361d in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0x145c0 (0x7f0f5f91e5c0 in /home/aafra/.venv/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #7: <unknown function> + 0x94ac3 (0x7f0f60401ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x7f0f60493850 in /lib/x86_64-linux-gnu/libc.so.6)
```
The issue can be reproduced with PyTorch 2.5.0 and 2.5.1 as well.
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
GPU 2: NVIDIA RTX 6000 Ada Generation
GPU 3: NVIDIA RTX 6000 Ada Generation
GPU 4: NVIDIA RTX 6000 Ada Generation
GPU 5: NVIDIA RTX 6000 Ada Generation
GPU 6: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.8 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 100 MiB (80 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-19,80-99
NUMA node1 CPU(s): 20-39,100-119
NUMA node2 CPU(s): 40-59,120-139
NUMA node3 CPU(s): 60-79,140-159
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mcarilli @eellison @penguinwu @chauhang | high priority,oncall: distributed,triaged,module: ddp,module: cuda graphs,oncall: pt2,pt2d-triage-nov2024 | low | Critical |
2,621,941,373 | flutter | [et] start forwarding compilation errors to et from BuildRunner | For context see https://github.com/flutter/engine/pull/56177#discussion_r1820991182 | P3,team-engine,triaged-engine,e: engine-tool | low | Critical |
2,621,945,557 | langchain | Voice Input Support for Ollama Models | ### Discussed in https://github.com/langchain-ai/langchain/discussions/27404
<div type='discussions-op-text'>
<sup>Originally posted by **kodychik** October 16, 2024</sup>
### Checked
- [X] I searched existing ideas and did not find a similar one
- [X] I added a very descriptive title
- [X] I've clearly described the feature request and motivation for it
# Feature request
We (a team of CS students at the University of Toronto) propose that we add voice input support to LangChain's Ollama models.
# Motivation
LangChain currently supports the best models via Ollama integration but lacks the ability to accept voice inputs on these Ollama models. This limitation restricts its use in voice-enabled applications such as virtual assistants, voice-controlled systems, and accessibility tools. This enhancement will enable developers to build applications that can process spoken language, expanding the ways users can interact with LangChain-powered systems.
# Proposal
## Feasibility Analysis
This is feasible; it involves:
- Speech-to-Text Conversion: Using a speech recognition engine to transcribe voice inputs into text that the language model can process.
- Integration with Existing Pipelines: Modifying or extending existing chains to include a speech-to-text (STT) component before processing inputs with the LLM.
- Modular Implementation: Leveraging LangChain's modular architecture to add this functionality without significant changes to existing code.
## Outline of Changes
### Existing Architecture Overview
LangChain's architecture consists of:
- LLMs (Language Models): Interfaces to language models via Ollama.
- Chains: Sequences of components (e.g., prompt templates, LLMs) that process inputs and generate outputs.
- Agents: Systems that use LLMs to perform tasks by making decisions and possibly interacting with tools.
- Retrievers and VectorStores: Components used in Retrieval-Augmented Generation (RAG) pipelines to fetch relevant information.
## Proposed Solution
Introduce a Speech-to-Text Component that converts voice inputs into text, integrating seamlessly with existing LangChain chains and agents.
1. User Interaction: User provides voice input via microphone.
2. Speech-to-Text Conversion: The STT component transcribes the voice input into text.
3. Text Processing: The transcribed text is passed to existing LangChain chains or agents.
4. LLM Response: The LLM generates a response based on the input text.
5. Output Delivery: The response is delivered to the user (could be text or converted back to speech).
## Files to Modify and Create
New Files:
- `speech_to_text.py`: Implements the `SpeechToTextConverter` class.
Files to Modify:
- None, as existing chains or agents will take the text input generated from the STT component.
## Potential for Innovation
- Speech from the user is given to the language model to perform prompt engineering. The prompt-engineered output is then given to the Ollama model chain through LangChain to generate a response. This prevents prompts that are too unstructured and rambly, as speech inputs often can be.
### New Classes and Components
1. `SpeechToTextConverter` class
   - Purpose: Converts voice input into text using a speech recognition engine.
   - Key Methods:
     - `__init__(engine='whisper', **kwargs)`: Initializes the speech recognition engine.
     - `convert(audio_input) -> str`: Converts audio input to text.
2. `VoiceInputChain` class
   - Purpose: A chain that processes voice inputs by integrating the STT component and passing the text to the LLM.
   - Key Methods:
     - `__init__(stt_converter, llm_chain)`: Initializes with an STT converter and an existing LLM chain.
     - `run(audio_input) -> str`: Processes the audio input through the STT converter and LLM chain.
## Pseudocode Implementation
```
# speech_to_text.py
class SpeechToTextConverter:
    def __init__(self, engine='whisper', **kwargs):
        if engine == 'whisper':
            # Initialize Whisper model
            self.model = load_whisper_model(**kwargs)
        else:
            raise NotImplementedError("Only 'whisper' engine is currently supported.")

    def convert(self, audio_input) -> str:
        # Convert audio to text using the selected engine
        text = self.model.transcribe(audio_input)
        return text


# voice_input_chain.py
class VoiceInputChain(Chain):
    def __init__(self, stt_converter, llm_chain):
        self.stt_converter = stt_converter
        self.llm_chain = llm_chain

    def run(self, audio_input) -> str:
        # Step 1: Convert voice input to text
        text_input = self.stt_converter.convert(audio_input)
        # Step 2: Pass text to the LLM chain
        response = self.llm_chain.run(text_input)
        return response
```
## Implementation Steps
1. Develop the Speech-to-Text Component
   - Implement the `SpeechToTextConverter` class.
   - Use OpenAI's Whisper model or another suitable STT engine.
   - Allow for future expansion to support other engines.
2. Create the Voice Input Chain
   - Implement the `VoiceInputChain` class.
   - Integrate the STT converter with an existing LLM chain.
3. Testing
   - Write unit tests for the new components.
   - Test with various audio inputs to ensure accurate transcription and appropriate LLM responses.
4. Documentation
   - Document new classes, methods, and usage examples.
   - Provide guidelines on setting up dependencies and handling potential issues.
## Example Usage
```
# Import necessary modules
from langchain.llms import Ollama
from langchain.chains import LLMChain
from speech_to_text import SpeechToTextConverter
from voice_input_chain import VoiceInputChain

# Initialize the speech-to-text converter
stt_converter = SpeechToTextConverter(engine='whisper', model_size='base')

# Initialize the LLM chain with Llama 3.1 via Ollama
llm = Ollama(model='llama-3.1')
llm_chain = LLMChain(llm=llm)

# Create the voice input chain
voice_chain = VoiceInputChain(stt_converter=stt_converter, llm_chain=llm_chain)

# Use the chain with an audio file or audio stream
audio_input = 'path/to/audio.wav'  # Can be a file path or audio data
response = voice_chain.run(audio_input)

# Output the LLM's response
print(response)
```
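To make the composition concrete without pulling in any dependencies, here is a minimal runnable sketch of the same chaining pattern, with the Whisper model and the Ollama chain replaced by stubs (the stub classes and their canned outputs are purely illustrative, not real LangChain APIs):

```python
# Dependency-free sketch of the proposed VoiceInputChain composition.
# StubSTTConverter and StubLLMChain are illustrative stand-ins only.

class StubSTTConverter:
    """Stands in for SpeechToTextConverter; returns a canned transcript."""
    def convert(self, audio_input) -> str:
        return f"transcript of {audio_input}"


class StubLLMChain:
    """Stands in for LLMChain; echoes the prompt it receives."""
    def run(self, text_input) -> str:
        return f"LLM response to: {text_input}"


class VoiceInputChain:
    def __init__(self, stt_converter, llm_chain):
        self.stt_converter = stt_converter
        self.llm_chain = llm_chain

    def run(self, audio_input) -> str:
        text_input = self.stt_converter.convert(audio_input)  # step 1: STT
        return self.llm_chain.run(text_input)                 # step 2: LLM


chain = VoiceInputChain(StubSTTConverter(), StubLLMChain())
print(chain.run("audio.wav"))  # LLM response to: transcript of audio.wav
```

The real chain would be identical in shape; only the two collaborators change.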
# Final Remarks
By implementing this feature:
● We address the growing demand for voice-enabled applications.
● LangChain becomes more versatile, appealing to a broader developer audience.
● The modular design ensures maintainability and ease of future enhancements.</div> | stale | low | Critical |
2,621,946,419 | svelte | [docs] Snippet docs doesn't mention `direct/explicit` method | ### TLDR
https://svelte.dev/docs/svelte/snippet#Passing-snippets-to-components
I'd like to add a section that explains the `direct/explicit` method:
```svelte
let { foo } = $props();
{@render foo()}
```
### Explanation
As far as I can tell there are 3 ways, and the second one (`direct/explicit`) doesn't seem to be mentioned at all.
It is currently unclear what the various snippet options are in comparison to their counterparts `{#if...}` and `{#each...}`.
- direct/implicit
```svelte
// Parent.svelte
<Child>
<h1>Title!</h1>
</Child>
// Child.svelte
<script>
let { children } = $props();
</script>
{@render children()}
```
- direct/explicit
```svelte
// Parent.svelte
<Child>
{#snippet foo()}
<h1>Title!</h1>
{/snippet}
</Child>
// Child.svelte
<script>
let { foo } = $props();
</script>
{@render foo()}
```
- indirect/explicit
```svelte
// Parent.svelte
{#snippet foo()}
<h1>Title!</h1>
{/snippet}
<Child {foo} />
// Child.svelte
<script>
let { foo } = $props();
</script>
{@render foo()}
```
### Reference
https://discord.com/channels/457912077277855764/1300850418728960000
### Severity
annoyance | documentation | low | Minor |
2,621,949,636 | yt-dlp | Add support for Spinitron.com | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
North America
### Example URLs
Single episode: https://spinitron.com/KPOV/pl/19695954/Calling-All-Cowboys
List of links to archived episodes: https://spinitron.com/KPOV/show/11860/Calling-All-Cowboys
A station's main page: https://spinitron.com/KZSC/
### Provide a description that is worded well enough to be understood
I'm requesting support for this site that radio stations use to archive episodes.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://spinitron.com/KPOV/pl/19695954/Calling-All-Cowboys', '--format', 'bestaudio', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://spinitron.com/KPOV/pl/19695954/Calling-All-Cowboys
[generic] Calling-All-Cowboys: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] Calling-All-Cowboys: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://spinitron.com/KPOV/pl/19695954/Calling-All-Cowboys
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1625, in wrapper
File "yt_dlp\YoutubeDL.py", line 1760, in __extract_info
File "yt_dlp\extractor\common.py", line 741, in extract
File "yt_dlp\extractor\generic.py", line 2533, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://spinitron.com/KPOV/pl/19695954/Calling-All-Cowboys
```
| site-request,triage | low | Critical |
2,621,968,417 | electron | Drag & dropping a file from file explorer SEGFAULTs on Linux | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.0.2
### What operating system(s) are you using?
Ubuntu
### Operating System Version
Linux 6.8.0-47-generic #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux (Linux Mint 22 Cinnamon for us mere mortals)
### What arch are you using?
x64
### Last Known Working Electron version
I don't know; I'd like to develop a new feature
### Expected Behavior
When drag & dropping a file from file explorer on Linux Mint the Electron app should not crash.
### Actual Behavior
After drag & dropping a file from file explorer on Linux Mint the Electron app SEGFAULTs.
### Testcase Gist URL
_No response_
### Additional Information
I have created a repro in https://github.com/miikaah/electron-file-drop-crash. A lot more details there.
I think this might be upstream in V8. It is somewhat nondeterministic, but not really. Sometimes it just takes a while to crash. I can create a 99.9999999% failure case if this repro is not enough.
What happens is that after dropping a file to the app it works somewhere between 0 - 60 seconds and then it just goes blank, because it segfaults.
The error output is something like this:
```sh
Received signal 11 SEGV_MAPERR 000000000008
#0 0x57b1d22cb24a (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x60fc249)
#1 0x57b1d22db709 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x610c708)
#2 0x7754c9a45320 (/usr/lib/x86_64-linux-gnu/libc.so.6+0x4531f)
#3 0x57b1cfcc2021 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3af3020)
#4 0x57b1cfdbef08 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3beff07)
#5 0x57b1cfd3be93 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3b6ce92)
#6 0x57b1cfd3b3a4 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3b6c3a3)
#7 0x57b1cfd50689 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3b81688)
#8 0x57b1cfd5055f (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3b8155e)
#9 0x57b1d05951ab (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x43c61aa)
#10 0x57b1cfd37955 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3b68954)
#11 0x57b1cfd968f4 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x3bc78f3)
#12 0x57b1d226e36f (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x609f36e)
#13 0x57b1d228ebf2 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x60bfbf1)
#14 0x57b1d22269c7 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x60579c6)
#15 0x57b1d228f371 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x60c0370)
#16 0x57b1d224e79e (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x607f79d)
#17 0x57b1d44fc99c (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x832d99b)
#18 0x57b1ce9edad4 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x281ead3)
#19 0x57b1ce9ee2f8 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x281f2f7)
#20 0x57b1ce9ef3a6 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x28203a5)
#21 0x57b1ce9ed03a (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x281e039)
#22 0x57b1ce9ed120 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x281e11f)
#23 0x57b1ce69bf97 (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x24ccf96)
#24 0x7754c9a2a1ca (/usr/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9)
#25 0x7754c9a2a28b (/usr/lib/x86_64-linux-gnu/libc.so.6+0x2a28a)
#26 0x57b1ce27802a (/home/miika/repos/electron-file-drop-crash/node_modules/electron/dist/electron+0x20a9029)
r8: 00000000000006b5 r9: 0000000000000004 r10: 0000000080022018 r11: 0000000000000000
r12: 00001b940198fc00 r13: 00001b94019905f0 r14: 00000000000009b0 r15: 00000000000013b0
di: 0000000000000000 si: 00007ffc7f579740 bp: 00007ffc7f579710 bx: 0000000000000000
dx: 000077546b000000 ax: 00001e4400413659 cx: 00001b94019905f0 sp: 00007ffc7f579700
ip: 000057b1d1a24855 efl: 0000000000010246 cgf: 002b000000000033 erf: 0000000000000004
trp: 000000000000000e msk: 0000000000000000 cr2: 0000000000000008
[end of stack trace]
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall nr=0x25 arg1=0x5 arg2=0x7ffc7f578230 arg3=0x0 arg4=0x8
Renderer process crashed
```

| platform/linux,crash :boom:,bug :beetle:,33-x-y,34-x-y | low | Critical |
2,621,969,499 | ui | [bug]: Add webkit optimizations to global CSS configuration | ### Describe the bug
## Overview
The current global CSS configuration would benefit from additional webkit-specific optimizations to enhance cross-browser compatibility, mobile device support, and overall rendering quality.
## Suggested Changes
Add the following webkit and cross-browser optimizations to the base layer of the global CSS configuration:
```css
@layer base {
* {
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
text-rendering: optimizeLegibility;
-webkit-tap-highlight-color: transparent;
}
body {
-webkit-overflow-scrolling: touch;
text-size-adjust: 100%;
-webkit-text-size-adjust: 100%;
}
input, textarea, button {
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
}
}
```
## Benefits
1. **Improved Text Rendering**
- Better font smoothing across browsers
- Optimized legibility for all text elements
2. **Enhanced Mobile Experience**
- Proper text sizing on mobile devices
- Improved touch scrolling behavior
- Removed default tap highlights
- Better form element appearance on iOS
3. **Better Cross-Browser Consistency**
- Normalized appearance across different browsers
- Consistent form element styling
## Implementation
This can be added to the existing `globals.css` template that's generated when initializing a new project. These changes are purely additive and won't affect existing functionality.
## Additional Context
These optimizations are commonly used in production applications and follow best practices for cross-browser compatibility. They address common issues with text rendering and mobile device interactions.
## Related Resources
- [Apple's -webkit-font-smoothing documentation](https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariCSSRef/Articles/StandardCSSProperties.html#//apple_ref/doc/uid/TP30001266-_webkit_font_smoothing)
- [MDN text-rendering documentation](https://developer.mozilla.org/en-US/docs/Web/CSS/text-rendering)
### Affected component/components
config
### How to reproduce
:)
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
mobile
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,621,994,894 | flutter | focusColor does not work in InkWell on mobile platforms (Android, iOS) | ### Steps to reproduce
1. launched the application (Code sample)
### Expected results
After the screen is displayed, the item at index 0 should be highlighted with focusColor
### Actual results
On Android, focusColor is not applied initially; it only appears after changing focus or tapping.
On Windows and in the web app, everything works correctly.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: ListView.builder(
itemCount: 15,
itemBuilder: (context, index) {
return ListTile(
autofocus: index == 0,
onTap: () {},
focusColor: Colors.red,
title: Text('Item $index'),
);
},
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.19045.5011], locale en-US)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.7.1)
[√] Android Studio (version 2024.1)
[√] IntelliJ IDEA Community Edition (version 2023.3)
[√] VS Code (version 1.94.0)
[√] Connected device (4 available)
[√] Network resources
• No issues found!
```
</details>
| framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Minor |
2,622,045,197 | nvm | nvm install falls back immediately to source tarfile if binary tarfile fails for whatever reason | installing with `nvm install --lts` on Bullseye ARM64 produces unexpected code compilation after the tarfile is downloaded
```
Computing checksum with sha256sum
Checksums matched!
$>./configure --prefix=/home/runner/.nvm/versions/node/v20.18.0 <
Node.js configure: Found Python 3.9.2...
INFO: configure completed successfully
/usr/bin/make -C out BUILDTYPE=Release V=0
g++ -o /home/runner/.nvm/.cache/src/node-v20.18.0/files/out/Release/obj.target/simdutf/deps/simdutf/simdutf.o ../deps/simdutf/simdutf.cpp '-D_GLIBCXX_USE_CXX11_ABI=1' '-DNODE_OPENSSL_CONF_NAME=nodejs_conf' '-DNODE_OPENSSL_HAS_QUIC' '-DICU_NO_USER_DATA_OVERRIDE' '-D__STDC_FORMAT_MACROS' '-DOPENSSL_NO_PINSHARED' '-DOPENSSL_THREADS' -I../deps/simdutf -pthread -Wall -Wextra -Wno-unused-parameter -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++17 -MMD -MF /home/runner/.nvm/.cache/src/node-v20.18.0/files/out/Release/.deps//home/runner/.nvm/.cache/src/node-v20.18.0/files/out/Release/obj.target/simdutf/deps/simdutf/simdutf.o.d.raw -c
...
```
The compilation continues for hours (on a slow machine).
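For reference, here is a rough sketch of how I understand the binary tarball name gets derived from `uname -m` (this mapping is my assumption for illustration, not nvm's actual code); when the derived artifact can't be downloaded or fails its check, nvm falls back to the source tarball:

```shell
#!/bin/sh
# Map `uname -m` output to the arch token used in Node.js dist tarball names.
# This mapping is an assumption for illustration, not nvm's real implementation.
node_dist_arch() {
  case "$1" in
    x86_64)        echo "x64" ;;
    aarch64)       echo "arm64" ;;
    armv6l|armv7l) echo "$1" ;;
    *)             echo "unknown" ;;
  esac
}

VERSION="v20.18.0"
ARCH="$(node_dist_arch "$(uname -m)")"
# e.g. node-v20.18.0-linux-arm64.tar.xz on Bullseye ARM64
echo "expected binary tarball: node-${VERSION}-linux-${ARCH}.tar.xz"
```

If fetching that file fails for whatever reason, the observed behavior is an immediate, silent fallback to `./configure && make` rather than an error.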
This issue does not occur on Bullseye ARMhf, Bookworm ARMhf, or Bookworm ARM64. There is no code compilation for those systems. | installing node,needs followup | low | Major |
2,622,077,415 | pytorch | [Runtime Error] Build PyTorch with cuda12.2 on Jetson AGX Orin with jetpack 5.1.4 | ### 🐛 Describe the bug
### Environment
```bash
jetpack==5.1.4
cuda==12.2
PyTorch==2.3.0
```
I found that the cuda-12.2 is compatible with jetpack5.x: [CUDA Upgradable Package for Jetson](https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/index.html#cuda-upgradable-package-for-jetson)
I've built PyTorch from source with cuda-12.2. When I typed:
```python
import torch
torch.cuda.get_device_capability()
```
It went:
```bash
File "/home/orin/tools/anaconda3/envs/llmserving_py/lib/python3.10/site-packages/torch/cuda/__init__.py", line 430, in get_device_capability
prop = get_device_properties(device)
File "/home/orin/tools/anaconda3/envs/llmserving_py/lib/python3.10/site-packages/torch/cuda/__init__.py", line 444, in get_device_properties
_lazy_init() # will define _get_device_properties
File "/home/orin/tools/anaconda3/envs/llmserving_py/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
```
But what confused me is that the cuda-12.2 seems compatible with jetpack-5.1.4, I really want to know why because I don't want to reflash my jetson env and rebuild the PyTorch anymore, it really kills me. >_<
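One clue: the integer in the error message follows what I understand to be the usual CUDA version encoding (1000 * major + 10 * minor), so `11040` means the kernel-mode driver still reports CUDA 11.4 — the stock JetPack 5 driver — even though the user-space toolkit is 12.2:

```python
# Decode the driver version integer from the PyTorch error message.
# Assumed encoding (the common CUDA convention): 1000 * major + 10 * minor.
def decode_cuda_version(v: int) -> tuple[int, int]:
    return v // 1000, (v % 1000) // 10

major, minor = decode_cuda_version(11040)
print(f"driver reports CUDA {major}.{minor}")  # driver reports CUDA 11.4
```

If the CUDA 12.2 compatibility libraries from the upgradable package are not on `LD_LIBRARY_PATH` when Python starts, the stock JetPack `libcuda` (11.4) would be the one loaded, which could produce exactly this error — worth checking before reflashing.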
### Error logs
_No response_
### Minified repro
_No response_
### Versions
I'm using PyTorch v2.3.0 which was built from source.
cc @ptrblck @msaroufim @malfet @snadampal @milpuz01 @puririshi98 @ezyang @chauhang @penguinwu | module: cuda,triaged,module: arm,module: jetson | low | Critical |
2,622,106,352 | go | x/tools/go/packages: Load is inefficient due to heavy contention | See https://github.com/golang/go/issues/70078#issuecomment-2442266268: for the same workload, the go list command sometimes runs very quickly (70ms) and other times waits for a conspicuously round number (1.0s) of elapsed time before it exits. I vaguely recall that this was intentional behavior to prevent file system time quantization problems. Is it always necessary? If so, when and why? Are there ways to avoid it?
@matloob @samthanawalla | NeedsInvestigation | low | Major |
2,622,115,761 | deno | Using NPM-prefixed Import Syntax Breaks Lock File | ## Deno Info
Version: Deno 2.0.3
## Steps to reproduce
File Tree:
```tree
- .vscode
- settings.json
- app.ts
- deno.json
```
Deno.json:
```json
{
"imports": {
"@aws-sdk/client-s3": "npm:@aws-sdk/client-s3@^3.679.0"
}
}
```
1. Add the following to `app.ts`:
```ts
import { S3Client } from '@aws-sdk/client-s3';
console.log(S3Client);
```
2. Run `deno run -A app.ts`:
Console Output: `[class S3Client extends Client]`
3. Update `app.ts`:
```ts
import { S3Client } from 'npm:@aws-sdk/client-s3';
console.log(S3Client);
```
4. Run `deno run -A app.ts`:
Console Output: `error: Could not resolve 'npm:@aws-sdk/client-s3@3.679.0'.`
5. Update `app.ts` to original:
```ts
import { S3Client } from '@aws-sdk/client-s3';
console.log(S3Client);
```
6. Run `deno run -A app.ts`:
Console Output: `error: Could not resolve 'npm:@aws-sdk/client-s3@3.679.0'.`
## Expected Behavior
Both import syntaxes should work.
```
import { S3Client } from '@aws-sdk/client-s3';
import { S3Client } from 'npm:@aws-sdk/client-s3';
```
## Notes
The lock file seems to become “corrupted” once the command in step 4 is run. | bug,node compat,install | low | Critical |
2,622,124,681 | go | runtime: TestLockOSThreadExit failures | ```
#!watchflakes
default <- pkg == "runtime" && test == "TestLockOSThreadExit"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732844890131159425)):
=== RUN TestLockOSThreadExit
proc_test.go:983: /home/swarming/.swarming/w/ir/x/t/go-build3157675313/testprog.exe LockOSThreadMain (36.043549ms): ok
proc_test.go:989: /home/swarming/.swarming/w/ir/x/t/go-build3157675313/testprog.exe LockOSThreadAlt (31.099939ms): ok
proc_test.go:991: want "OK\n", got "error: read /proc/self/task/3378684/status: no such process\n"
--- FAIL: TestLockOSThreadExit (0.07s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,arch-riscv,compiler/runtime | low | Critical |
2,622,124,736 | go | google.golang.org/protobuf: TestIntegration/Go1.23.0/ProtoLegacy failures | ```
#!watchflakes
default <- pkg == "google.golang.org/protobuf" && test == "TestIntegration/Go1.23.0/ProtoLegacy"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732845580536376737)):
=== RUN TestIntegration/Go1.23.0/ProtoLegacy
integration_test.go:137: executing (go1.23.0 test -tags protolegacy ./...): exit status 1
? google.golang.org/protobuf/cmd/protoc-gen-go/internal_gengo [no test files]
ok google.golang.org/protobuf 0.097s
ok google.golang.org/protobuf/cmd/protoc-gen-go 0.109s
ok google.golang.org/protobuf/compiler/protogen 0.096s
ok google.golang.org/protobuf/encoding 0.117s [no tests to run]
ok google.golang.org/protobuf/encoding/protodelim 0.116s
ok google.golang.org/protobuf/encoding/protojson 0.186s
ok google.golang.org/protobuf/encoding/prototext 0.194s
...
--- FAIL: TestHasExtensionNoAlloc (0.02s)
--- FAIL: TestHasExtensionNoAlloc/Eager (0.00s)
extension_test.go:156: proto.HasExtension should not allocate, but allocated 1.00x per run
FAIL
FAIL google.golang.org/protobuf/proto 0.353s
ok google.golang.org/protobuf/reflect/protodesc 0.141s
ok google.golang.org/protobuf/reflect/protorange 0.226s
ok google.golang.org/protobuf/reflect/protoreflect 0.071s
ok google.golang.org/protobuf/reflect/protoregistry 0.122s
ok google.golang.org/protobuf/testing/protocmp 0.185s
ok google.golang.org/protobuf/testing/protopack 0.049s
ok google.golang.org/protobuf/testing/prototest 1.064s
ok google.golang.org/protobuf/types/dynamicpb 0.121s
ok google.golang.org/protobuf/types/known/anypb 0.057s
ok google.golang.org/protobuf/types/known/durationpb 0.011s
ok google.golang.org/protobuf/types/known/fieldmaskpb 0.024s
ok google.golang.org/protobuf/types/known/structpb 0.021s
ok google.golang.org/protobuf/types/known/timestamppb 0.029s
FAIL
--- FAIL: TestIntegration/Go1.23.0/ProtoLegacy (66.75s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,124,784 | go | log/syslog: TestConcurrentReconnect failures | ```
#!watchflakes
default <- pkg == "log/syslog" && test == "TestConcurrentReconnect"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732829194556524977)):
=== RUN TestConcurrentReconnect
syslog_test.go:430: timeout in concurrent reconnect
--- FAIL: TestConcurrentReconnect (0.29s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,126,212 | go | archive/tar: unrecognized failures | ```
#!watchflakes
default <- pkg == "archive/tar" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732750304004293793)):
FAIL archive/tar [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,126,250 | go | cmd/compile/internal/abt: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/abt" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732752484775820097)):
FAIL cmd/compile/internal/abt [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,126,339 | go | x/tools/cmd/gonew: Test/quote.txt failures [consistent failure] | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/cmd/gonew" && test == "Test/quote.txt"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732764485200200673)):
=== RUN Test/quote.txt
main_test.go:98: unexpected failure exit
main_test.go:118: wrong stderr: diff want have
--- want
+++ have
@@ -1,1 +1,10 @@
-gonew: initialized my.com/test in ./test
+gonew: go mod download -json example.com/quote@latest: exit status 1
+{
+ "Path": "example.com/quote",
+ "Version": "v1.5.2",
+ "Query": "latest",
+ "Error": "example.com/quote@v1.5.2: read /"file:///opt/golang/swarm/.swarming/w/ir/x/t/Testquote.txt188249249/001/proxy/example.com/quote/@v/v1.5.2.zip/": write /opt/golang/swarm/.swarming/w/ir/x/w/gopath/pkg/mod/cache/download/example.com/quote/@v/v1.5.2.zip291328757.tmp: sendfile: invalid argument",
+ "Info": "/opt/golang/swarm/.swarming/w/ir/x/w/gopath/pkg/mod/cache/download/example.com/quote/@v/v1.5.2.info",
+ "GoMod": "/opt/golang/swarm/.swarming/w/ir/x/w/gopath/pkg/mod/cache/download/example.com/quote/@v/v1.5.2.mod",
+ "GoModSum": "h1:fBZUP3qzh2hlMC+KYCP3wzEFXKFiObko2TCwsxUizdw="
+}
main_test.go:151: missing file out/test/go.mod
main_test.go:151: missing file out/test/quote.go
main_test.go:151: missing file out/test/quote/another.go
--- FAIL: Test/quote.txt (0.05s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | medium | Critical |
2,622,126,370 | go | cmd/compile: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732758873422249313)):
FAIL cmd/compile [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,622,126,535 | go | cmd/compile/internal/amd64: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/amd64" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732755578646736849)):
FAIL cmd/compile/internal/amd64 [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,622,142,804 | svelte | Effect tracks entire store instead of properties used | ### Describe the bug
If a property of a store is used in an effect, the entire store becomes a dependency.
This is especially an issue when using something like superforms that stores form data in a single store. Changing one field will cause all effects referencing any field to run.
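The granularity can be illustrated outside Svelte with a few lines of plain JavaScript. This is a sketch of the observed behavior, not Svelte's real internals, and the `select` helper is a hypothetical stand-in for something like a `derived` store used as a workaround:

```javascript
// A minimal sketch (not Svelte's actual implementation) of the behavior:
// an effect that subscribes to the whole store reruns on *any* write,
// while a derived/selector-style slice only notifies when its field changes.
function writable(value) {
  const subscribers = new Set();
  return {
    subscribe(fn) { subscribers.add(fn); fn(value); return () => subscribers.delete(fn); },
    update(fn) { value = fn(value); subscribers.forEach((s) => s(value)); },
  };
}

// derived-like slice: re-notifies only when the selected value changes
function select(store, selector) {
  const subscribers = new Set();
  let current;
  let first = true;
  store.subscribe((v) => {
    const next = selector(v);
    if (first || next !== current) {
      first = false;
      current = next;
      subscribers.forEach((s) => s(next));
    }
  });
  return { subscribe(fn) { subscribers.add(fn); fn(current); return () => subscribers.delete(fn); } };
}

const form = writable({ name: "a", email: "x" });
let wholeRuns = 0;
let sliceRuns = 0;
form.subscribe(() => wholeRuns++);                           // tracks the whole store
select(form, ($f) => $f.name).subscribe(() => sliceRuns++);  // tracks one field

form.update(($f) => ({ ...$f, email: "y" }));                // change an unrelated field
console.log(wholeRuns, sliceRuns); // 2 1: the whole-store effect reran, the slice did not
```

In Svelte itself, wrapping the field access in a `derived` store (or copying the field into local state) should similarly narrow what the effect reruns on.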
### Reproduction
https://svelte.dev/playground/ca42f41442a549d1a37ec9c4073cb061?version=5.1.4
### Logs
_No response_
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 22.04.5 LTS 22.04.5 LTS (Jammy Jellyfish)
CPU: (12) x64 12th Gen Intel(R) Core(TM) i5-12400F
Memory: 8.01 GB / 15.54 GB
Container: Yes
Shell: 5.8.1 - /usr/bin/zsh
Binaries:
Node: 20.18.0 - ~/.nvm/versions/node/v20.18.0/bin/node
Yarn: 1.22.22 - /mnt/c/Users/bayles/AppData/Local/pnpm/yarn
npm: 10.8.2 - ~/.nvm/versions/node/v20.18.0/bin/npm
pnpm: 9.12.3 - /mnt/c/Users/bayles/AppData/Local/pnpm/pnpm
npmPackages:
svelte: ^5.1.4 => 5.1.2
```
### Severity
blocking an upgrade | documentation | low | Critical |
2,622,145,509 | PowerToys | [Peek] Images open blurry | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Peek
### Steps to reproduce
I open an image in Peek.
### ✔️ Expected Behavior
The image should look accurate.
### ❌ Actual Behavior
The image is not in focus. In this case, the text is blurred.
### Other Software
I am attaching a comparison between Peek and FastStone Image Viewer. As well as the original test image.


| Issue-Bug,Needs-Triage | low | Minor |
2,622,158,374 | next.js | `cloneElement` in client component with async server component as children not working | ### Link to the code that reproduces this issue
https://github.com/darthmaim-reproductions/vercel-next.js-72034
### To Reproduce
1. Clone the reproduction
2. `npm i`
3. `npm run dev`
4. Open http://localhost:3000/ and observe error
### Current vs. Expected behavior
#### Current
When using `cloneElement` in a client component, and the children is an async server component, this error is thrown:
> Error: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
> Check the render method of `ClientComponent`.
When the server component is not `async` (or a client component), this just works.
Additionally, adding this line to the client component also makes this work:
```ts
if(children.$$typeof === Symbol.for('react.lazy')) { children = use(children._payload); }
```
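That unwrapping step can be shown with plain objects. This is a sketch only; the element shapes and the `unwrapIfLazy` helper are illustrative stand-ins, not React's actual internals:

```javascript
// Illustrative sketch: plain objects standing in for React elements.
// An async server component child arrives as a "lazy" placeholder whose
// $$typeof is Symbol.for('react.lazy'); unwrapping its payload yields a
// clonable element again.
const REACT_LAZY = Symbol.for('react.lazy');

function unwrapIfLazy(child, use) {
  // `use` stands in for React's `use`; here it just resolves the payload.
  return child && child.$$typeof === REACT_LAZY ? use(child._payload) : child;
}

const plainElement = { $$typeof: Symbol.for('react.element'), type: 'div' };
const lazyElement = { $$typeof: REACT_LAZY, _payload: plainElement };

const use = (payload) => payload;
console.log(unwrapIfLazy(lazyElement, use) === plainElement);  // true
console.log(unwrapIfLazy(plainElement, use) === plainElement); // true
```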
#### Expected
Since `cloneElement` works for client components and non-async server components, I expected it to work for async server components as well.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 21.6.0: Mon Jun 24 00:56:10 PDT 2024; root:xnu-8020.240.18.709.2~1/RELEASE_X86_64
Available memory (MB): 16384
Available CPU cores: 4
Binaries:
Node: 22.8.0
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.2-canary.11 // Latest available version is detected (15.0.2-canary.11).
eslint-config-next: N/A
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
This might be a bug in React rather than in Next.js.
In earlier versions, the children prop was reported as `<Lazy/>` in React DevTools and `cloneElement` worked; now it is shown as `{ $$typeof: Symbol(react.lazy) }` (when not using `cloneElement`, to avoid the error). | bug | low | Critical |
2,622,169,697 | opencv | Remove cuBLAS dependency with cuDNN >= 9.0.0 | ### System Information
OpenCV version: 4.10
OS: Windows
Compiler: MSVC
CUDA version: 12.6
cuDNN version: 9.5.1
### Detailed description
Since version 9.0.0 cuDNN no longer depends on the cuBLAS library but on cuBLASLt instead.
https://docs.nvidia.com/deeplearning/cudnn/latest/release-notes.html#cudnn-9-0-0
Hoping to reduce the number of DLLs that have to be shipped to perform inference using CUDA, I tried removing `cublas64_12.dll` from the distributed files, but execution then fails.
I then tried to get rid of the cuBLAS dependency when compiling OpenCV, but setting `WITH_CUBLAS=OFF` results in
`> DNN: CUDA backend requires cuBLAS. Please resolve dependency or disable`
Is there a way to get rid of that library with the right compilation flags?
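For reference, the configuration attempt looks roughly like this (a sketch; generator, paths, and other options are omitted, and the option names assume OpenCV's standard CMake flags):

```bash
cmake -DWITH_CUDA=ON -DWITH_CUDNN=ON -DOPENCV_DNN_CUDA=ON -DWITH_CUBLAS=OFF <opencv-src>
```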
### Steps to reproduce
Compile OpenCV
### Issue submission checklist
- [x] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: gpu/cuda (contrib),category: dnn | low | Minor |
2,622,185,685 | flutter | [a11y][iOS] TabBar does not work with iOS Voice Control | ### Steps to reproduce
1. Activate Voice Control from the settings app.
2. Open the sample code which uses TabBar.
3. Observe that you cannot navigate between tabs using Voice Control because there is no label for non-selected tabs.
b/345133676
### Expected results
You should be able to navigate between tabs using Voice Control.
### Actual results
There is only a single label for the tab bar, rather than individual labels for each tab.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
/// Flutter code sample for [TabBar].
void main() => runApp(const TabBarApp());
class TabBarApp extends StatelessWidget {
const TabBarApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData(useMaterial3: true),
home: const TabBarExample(),
);
}
}
class TabBarExample extends StatelessWidget {
const TabBarExample({super.key});
@override
Widget build(BuildContext context) {
return DefaultTabController(
initialIndex: 1,
length: 3,
child: Scaffold(
appBar: AppBar(
title: const Text('TabBar Sample'),
bottom: const TabBar(
tabs: <Widget>[
Tab(
icon: Icon(Icons.cloud_outlined),
),
Tab(
icon: Icon(Icons.beach_access_sharp),
),
Tab(
icon: Icon(Icons.brightness_5_sharp),
),
],
),
),
body: const TabBarView(
children: <Widget>[
Center(
child: Text("It's cloudy here"),
),
Center(
child: Text("It's rainy here"),
),
Center(
child: Text("It's sunny here"),
),
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel google3, on macOS 14.7
• Framework revision 2de487308a (0 days ago), 2024-10-29T00:00:00.000
• Engine revision 0c8f0bf4d7
• Dart version 6e55dfe774
```
</details>
| platform-ios,f: material design,a: accessibility,customer: money (g3),has reproducible steps,P1,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | medium | Minor |
2,622,242,784 | go | archive/zip: unrecognized failures | ```
#!watchflakes
default <- pkg == "archive/zip" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732750304004293793)):
FAIL archive/zip [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,242,820 | go | cmd/compile/internal/compare: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/compare" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732752484775820097)):
FAIL cmd/compile/internal/compare [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,242,854 | go | encoding/json: unrecognized failures | ```
#!watchflakes
default <- pkg == "encoding/json" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732758873422249313)):
FAIL encoding/json [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,242,906 | go | cmd/compile/internal/base: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/base" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732755578646736849)):
FAIL cmd/compile/internal/base [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,263,287 | vscode | Terminal profiles smoke test caused timeout | Build: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=301927
Changes: https://github.com/Microsoft/vscode/compare/22b0035...f992298
Terminal tests in this build were all very slow:
```
✔ should create a terminal in the editor area by default (117047ms)
Terminal Input
Auto replies
✔ should automatically reply to a custom entry (47570ms)
Terminal Persistence
detach/attach
✔ should support basic reconnection (48150ms)
- should persist buffer content
Terminal Profiles
✔ should launch the default profile
✔ should set the default profile to a contributed one
✔ should use the default contributed profile on panel open and for splitting (31984ms)
✔ should set the default profile (23498ms)
✔ should use the default profile on panel open and for splitting (24980ms)
✔ createWithProfile command should create a terminal with a profile
✔ createWithProfile command should create a terminal with a contributed profile
✔ createWithProfile command should create a split terminal with a profile (29530ms)
✔ createWithProfile command should create a split terminal with a contributed profile (48906ms)
``` | debt,smoke-test-failure,terminal-profiles | low | Major |
2,622,271,766 | rust | ICE with next-solver: ExistentialMismatch | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
trait Service {
type S;
}
trait Framing {
type F;
}
impl Framing for () {
type F = ();
}
trait HttpService<F: Framing>: Service<S = F::F> {}
type BoxService = Box<dyn HttpService<(), S = ()>>;
fn build_server<F: FnOnce() -> BoxService>(_: F) {}
fn make_server<F: Framing>() -> Box<dyn HttpService<F, S = F::F>> {
unimplemented!()
}
fn main() {
build_server(|| make_server())
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (3f1be1ec7 2024-10-28)
binary: rustc
commit-hash: 3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777
commit-date: 2024-10-28
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
### Error output
```
no errors
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at /rustc/3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777/compiler/rustc_next_trait_solver/src/solve/eval_ctxt/canonical.rs:391:86:
called `Result::unwrap()` on an `Err` value: ExistentialMismatch(ExpectedFound { expected: [Binder { value: Trait(HttpService<()>), bound_vars: [] }, Binder { value: Projection(S = ()), bound_vars: [] }, Binder { value: Projection(S = _), bound_vars: [] }], found: [Binder { value: Trait(HttpService<()>), bound_vars: [] }, Binder { value: Projection(S = ()), bound_vars: [] }, Binder { value: Projection(S = ()), bound_vars: [] }] })
stack backtrace:
0: 0x7f08e12a2a3a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h7767b02c5430d02b
1: 0x7f08e1a0444a - core::fmt::write::hd9cf23088539bf85
2: 0x7f08e2ca0251 - std::io::Write::write_fmt::h516df85bf7181425
3: 0x7f08e12a2892 - std::sys::backtrace::BacktraceLock::print::h5907c3622e037863
4: 0x7f08e12a4d96 - std::panicking::default_hook::{{closure}}::h13cd352e0714d74b
5: 0x7f08e12a4be0 - std::panicking::default_hook::ha087b22e6b135389
6: 0x7f08e031d62f - std[6f01353fa805a722]::panicking::update_hook::<alloc[62edfd2f77ae8093]::boxed::Box<rustc_driver_impl[b57e977af7ed3191]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7f08e12a54a8 - std::panicking::rust_panic_with_hook::ha12d21b564771d78
8: 0x7f08e12a527a - std::panicking::begin_panic_handler::{{closure}}::h29f91f01f78dfedc
9: 0x7f08e12a2ee9 - std::sys::backtrace::__rust_end_short_backtrace::h1e39edc6648a642b
10: 0x7f08e12a4f3c - rust_begin_unwind
11: 0x7f08ddce8a50 - core::panicking::panic_fmt::hea12f2402e677eaa
12: 0x7f08de07b6b6 - core::result::unwrap_failed::hb52ad530f4fe86dd
13: 0x7f08e266f29e - <rustc_next_trait_solver[4bfa710c1e5e135a]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[d2c87e7718ceefbc]::solve::delegate::SolverDelegate, rustc_middle[a71cce3b55612100]::ty::context::TyCtxt>>::unify_query_var_values
14: 0x7f08e26740b5 - <rustc_trait_selection[d2c87e7718ceefbc]::solve::inspect::analyse::InspectCandidate>::instantiate_nested_goals_and_opt_impl_args
15: 0x7f08e29339e8 - <rustc_trait_selection[d2c87e7718ceefbc]::solve::fulfill::BestObligation as rustc_trait_selection[d2c87e7718ceefbc]::solve::inspect::analyse::ProofTreeVisitor>::visit_goal
16: 0x7f08e29340fb - <rustc_trait_selection[d2c87e7718ceefbc]::solve::fulfill::BestObligation as rustc_trait_selection[d2c87e7718ceefbc]::solve::inspect::analyse::ProofTreeVisitor>::visit_goal
17: 0x7f08e265c6dd - <rustc_infer[b005096d9ca2d9eb]::infer::InferCtxt as rustc_trait_selection[d2c87e7718ceefbc]::solve::inspect::analyse::ProofTreeInferCtxtExt>::visit_proof_tree_at_depth::<rustc_trait_selection[d2c87e7718ceefbc]::solve::fulfill::BestObligation>
18: 0x7f08e265c506 - rustc_trait_selection[d2c87e7718ceefbc]::solve::fulfill::find_best_leaf_obligation
19: 0x7f08e265a53e - <rustc_trait_selection[d2c87e7718ceefbc]::traits::FulfillmentError as rustc_infer[b005096d9ca2d9eb]::traits::engine::FromSolverError<rustc_trait_selection[d2c87e7718ceefbc]::solve::fulfill::NextSolverError>>::from_solver_error
20: 0x7f08e06040b7 - <rustc_trait_selection[d2c87e7718ceefbc]::solve::fulfill::FulfillmentCtxt<rustc_trait_selection[d2c87e7718ceefbc]::traits::FulfillmentError> as rustc_infer[b005096d9ca2d9eb]::traits::engine::TraitEngine<rustc_trait_selection[d2c87e7718ceefbc]::traits::FulfillmentError>>::select_where_possible
21: 0x7f08e1ce0137 - <rustc_hir_typeck[cbc9bfea74fd92f]::fn_ctxt::FnCtxt>::structurally_resolve_type
22: 0x7f08e269e7b4 - <rustc_hir_typeck[cbc9bfea74fd92f]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
23: 0x7f08e2698e90 - <rustc_hir_typeck[cbc9bfea74fd92f]::fn_ctxt::FnCtxt>::check_block_with_expected
24: 0x7f08e269f32e - <rustc_hir_typeck[cbc9bfea74fd92f]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
25: 0x7f08e1cf79d0 - rustc_hir_typeck[cbc9bfea74fd92f]::check::check_fn
26: 0x7f08e1ced28d - rustc_hir_typeck[cbc9bfea74fd92f]::typeck
27: 0x7f08e1cecb87 - rustc_query_impl[363319b8d14e6982]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[363319b8d14e6982]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[a71cce3b55612100]::query::erase::Erased<[u8; 8usize]>>
28: 0x7f08e1d9e181 - rustc_query_system[4cfb2d0902ef8f12]::query::plumbing::try_execute_query::<rustc_query_impl[363319b8d14e6982]::DynamicConfig<rustc_query_system[4cfb2d0902ef8f12]::query::caches::VecCache<rustc_span[b0d64f4f8585b504]::def_id::LocalDefId, rustc_middle[a71cce3b55612100]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[363319b8d14e6982]::plumbing::QueryCtxt, false>
29: 0x7f08e1d9c7d5 - rustc_query_impl[363319b8d14e6982]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
30: 0x7f08e1d9c45b - <rustc_middle[a71cce3b55612100]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[bffcb7f74fda0573]::check_crate::{closure#4}>::{closure#0}
31: 0x7f08e1d9a346 - rustc_hir_analysis[bffcb7f74fda0573]::check_crate
32: 0x7f08e2347557 - rustc_interface[26b25201e591a79d]::passes::run_required_analyses
33: 0x7f08e28eec1e - rustc_interface[26b25201e591a79d]::passes::analysis
34: 0x7f08e28eebf1 - rustc_query_impl[363319b8d14e6982]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[363319b8d14e6982]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[a71cce3b55612100]::query::erase::Erased<[u8; 1usize]>>
35: 0x7f08e292b1ee - rustc_query_system[4cfb2d0902ef8f12]::query::plumbing::try_execute_query::<rustc_query_impl[363319b8d14e6982]::DynamicConfig<rustc_query_system[4cfb2d0902ef8f12]::query::caches::SingleCache<rustc_middle[a71cce3b55612100]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[363319b8d14e6982]::plumbing::QueryCtxt, false>
36: 0x7f08e292aecf - rustc_query_impl[363319b8d14e6982]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
37: 0x7f08e27f6273 - rustc_interface[26b25201e591a79d]::interface::run_compiler::<core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>, rustc_driver_impl[b57e977af7ed3191]::run_compiler::{closure#0}>::{closure#1}
38: 0x7f08e2872394 - std[6f01353fa805a722]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[26b25201e591a79d]::util::run_in_thread_with_globals<rustc_interface[26b25201e591a79d]::util::run_in_thread_pool_with_globals<rustc_interface[26b25201e591a79d]::interface::run_compiler<core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>, rustc_driver_impl[b57e977af7ed3191]::run_compiler::{closure#0}>::{closure#1}, core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>>::{closure#0}, core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>>
39: 0x7f08e28727cd - <<std[6f01353fa805a722]::thread::Builder>::spawn_unchecked_<rustc_interface[26b25201e591a79d]::util::run_in_thread_with_globals<rustc_interface[26b25201e591a79d]::util::run_in_thread_pool_with_globals<rustc_interface[26b25201e591a79d]::interface::run_compiler<core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>, rustc_driver_impl[b57e977af7ed3191]::run_compiler::{closure#0}>::{closure#1}, core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>>::{closure#0}, core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[97274f6297637c56]::result::Result<(), rustc_span[b0d64f4f8585b504]::ErrorGuaranteed>>::{closure#1} as core[97274f6297637c56]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
40: 0x7f08e287326b - std::sys::pal::unix::thread::Thread::new::thread_start::hbed294d48a38d6d6
41: 0x7f08e401339d - <unknown>
42: 0x7f08e409849c - <unknown>
43: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/tmp/im/rustc-ice-2024-10-29T20_27_37-1444394.txt` to your bug report
note: compiler flags: -Z next-solver=globally
query stack during panic:
#0 [typeck] type-checking `main`
#1 [analysis] running analysis passes on this crate
end of query stack
```
</p>
</details>
| I-ICE,A-trait-system,T-compiler,C-bug,S-has-mcve,S-bug-has-test,WG-trait-system-refactor | low | Critical |
2,622,274,711 | ui | [bug]: Sidebar seems incompatable with a header navigation | ### Describe the bug
I am building a dashboard and desire to have a header nav bar as well as the sidebar.
Problem:

Ideal:




All examples show the popular use case of a sidebar under a header.
Please let me know if my understanding of this is correct. As I write this, it may be the case that: 1. this pattern is redundant, since the sidebar may need to be used for primary navigation; 2. I may need to incorporate the site logo in the header of the sidebar.
### Affected component/components
Sidebar
### How to reproduce
1. Set up a new next.js application
2. clear out the default page
3. add full width header with a logo on the far left and a login button on the far right
4. implement the sidebar as presented in the [documentation](https://ui.shadcn.com/docs/components/sidebar)
The sidebar overlaps the header and covers the site logo, instead of sitting in the area below the header, on the left side, with the content on the right side, all under the header nav.
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/stackblitz-starters-vqqoj5
### Logs
_No response_
### System Info
```bash
Stackblitz on Firefox and Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | area: examples,component: sidebar | medium | Critical |
2,622,293,057 | deno | Compiled executable failed to find modules from custom npm registry containing subpath | Version: Deno 2.0.3
I have encountered this issue while working with a custom npm registry such as [JFrog npm-registry](https://jfrog.com/help/r/jfrog-artifactory-documentation/npm-registry).
While Deno successfully caches and executes scripts respecting the `NPM_CONFIG_REGISTRY` environment variable, compiling the script into an executable results in an `ERR_MODULE_NOT_FOUND` error for npm modules.
**Steps to reproduce**
The example was taken from the [blog post](https://deno.com/blog/v1.34#deno-compile-supports-npm-packages); I was running Deno 2.0.3 in a blank GitHub Codespace (Linux), with a custom public npm registry mirror:
```
$ export NPM_CONFIG_REGISTRY=http://mirrors.tencent.com/npm/
$ cat main.ts
import { say } from "npm:cowsay@1.5.0";
console.log(say({ text: "Hello from Deno!" }));
$ deno run --allow-read main.ts
__________________
< Hello from Deno! >
------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
$ deno compile --allow-read -o main main.ts
Compile file:///workspaces/codespaces-blank/main.ts to main
$ ./main
error: [ERR_MODULE_NOT_FOUND] Cannot find module 'file:///tmp/deno-compile-main/.deno_compile_node_modules/localhost/cowsay/1.5.0/index.js' imported from 'file:///tmp/deno-compile-main/codespaces-blank/main.ts'
```
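One way the failure mode could arise (purely a hypothesis, not Deno's actual code): if the on-disk layout for compiled npm modules is keyed by the registry hostname alone, a registry URL with a subpath loses information. A sketch, where both helper functions are hypothetical:

```python
# Hypothetical sketch (not Deno's actual implementation): keying the embedded
# node_modules layout by registry hostname alone drops any subpath, so a
# registry served under "/npm/" cannot be told apart from one at the root.
from urllib.parse import urlparse

def naive_folder_key(registry_url: str) -> str:
    # Only the hostname survives; the "/npm/" subpath is lost.
    return urlparse(registry_url).hostname

def subpath_aware_key(registry_url: str) -> str:
    # Keeps the subpath as part of the key.
    parsed = urlparse(registry_url)
    return parsed.hostname + parsed.path.rstrip("/").replace("/", "_")

print(naive_folder_key("http://mirrors.tencent.com/npm/"))   # mirrors.tencent.com
print(subpath_aware_key("http://mirrors.tencent.com/npm/"))  # mirrors.tencent.com_npm
```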
Screenshot:
<img src="https://github.com/user-attachments/assets/7e37c55f-519b-492f-8a37-2eb1df952d42" width=600></img>
Additional info:
I found that using a custom npm registry like `$ export NPM_CONFIG_REGISTRY=http://registry.npm.taobao.org` works without issues. This leads me to suspect that the problem may be related to how `deno compile` handles npm modules located under a subpath (e.g. `http://custom-registry.com/path/to/npm/`). | bug,compile | low | Critical |
2,622,296,837 | flutter | [engine] scrolling performance regression with merged platform/UI thread. | ### Steps to reproduce
1. `flutter channel master`
2. `flutter run --profile`
3. Scroll vigorously up and down
Reproducible on `master` TOT at the moment: 3ed40f003a92487dfa96833e6a1e1bd7f8ceb677 ❌
The `master` commit that caused it: 81e418dd20f784cae63a5eb9bc234b1332f8e925 ❌
Reproducible on `beta` as well: 2e2c358c9b14765c90343af9df11e12c5dfc3e6c ❌
The change that introduced it was merged here: https://github.com/flutter/flutter/pull/154020
Last good `master` commit: a4b0d973fb196ba86da5c6ae4d51db40a78a0926 ✅
Not reproducible on `stable` `3.24.4` ✅
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: SingleChildScrollView(
physics: const BouncingScrollPhysics(parent: AlwaysScrollableScrollPhysics()),
scrollDirection: Axis.vertical,
child: Stack(
children: [
const SizedBox(
height: 1440,
width: 1000,
),
for (var i = 0; i < 100; i++)
Positioned.fill(
top: i * 20,
child: Align(
alignment: Alignment.topCenter,
child: Container(
color: Colors.red,
height: 15,
child: Center(
child: Text(
'$i',
style: const TextStyle(
color: Colors.white,
fontSize: 11,
),
),
),
),
),
),
],
),
),
),
);
}
}
```
</details>
### Performance profiling on master channel
- [X] The issue still persists on the master channel
### Timeline Traces
<details open><summary>Timeline Traces JSON</summary>
[dart_devtools_2024-10-29_21_26_05.195.json](https://github.com/user-attachments/files/17562727/dart_devtools_2024-10-29_21_26_05.195.json)
</details>
### Video demonstration
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/9c66fca2-762e-40fe-bd2b-0f66bb7593f2
https://github.com/user-attachments/assets/b6fa38ef-e136-4834-9428-ef763e5b9432
https://github.com/user-attachments/assets/27eca780-5428-4aeb-bd27-0dd22d6a7947
</details>
### What target platforms are you seeing this bug on?
iOS
### OS/Browser name and version | Device information
`iOS 18.1 (22B83) | iPhone 12 mini`
### Does the problem occur on emulator/simulator as well as on physical devices?
Unknown
### Is the problem only reproducible with Impeller?
No
### Logs
<details open><summary>Logs</summary>
```console
flutter create scrolling
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.27.0-1.0.pre.292, on macOS 15.1 24B82 darwin-arm64, locale en-US)
• Flutter version 3.27.0-1.0.pre.292 on channel master at /Users/nabila/fvm/versions/master
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 3ed40f003a (14 minutes ago), 2024-10-29 15:45:22 -0400
• Engine revision 795b5492f1
• Dart version 3.7.0 (build 3.7.0-78.0.dev)
• DevTools version 2.40.1
```
</details>
| from: performance template,P1,team-engine,triaged-engine | medium | Critical |
2,622,301,324 | excalidraw | [✨] Remove traditional slides sidebar and use frames instead to organize slides on the canvas itself | _note: frames here mean frames from the frames tool._
Slides on Excalidraw have the potential to be so much better than conventional PPT or Google Slides.
## Problem
Imho, the current model of having frames on the canvas and organizing slides on a sidebar is fundamentally flawed and leads to a poor UX. With this model, it is simply impossible to organize slides in the same way on the canvas and on the sidebar.
Currently, the way it is handled is far from ideal:
- Adding a new slide on the sidebar adds a frame quite randomly on the canvas
- Adding a frame on the canvas adds a new slide to the bottom of the sidebar
With this model it is not really possible to have a canvas that reflects the order of the slides, or slides that reflect the way things are organized on the canvas. You could choose another model, e.g. to [put frames in a strict position on the canvas depending on slides order](https://github.com/excalidraw/excalidraw/issues/8130), but then you would lose the beautiful freedom that the canvas offers.
## Current state
To that, we need to take into account that the current model is missing quite a lot of important features for efficient slides management:
- Reorganizing/removing slides can only be done one slide at a time -> maybe not so easy fix
- frames cannot be removed without removing content inside -> maybe not so easy fix
- slides cannot be removed multiple at a time -> maybe not so easy fix
- it is not possible to edit the canvas while on presentation mode -> maybe not so easy fix
- frame titles grow when zooming out on the canvas and can take up a lot of space -> easy fix, just make them a fixed size
- frames cannot be hidden which can create a lot of visual noise when editing the content -> easy fix, could be done with zen mode
- probably other things I've overlooked
## Proposal
What I would do instead in the spirit of Excalidraw would be to entirely remove the slides sidebar, but instead make it so that frames are the only way to manage slides. New frames would come with a pre-defined number and changing the order would be just a matter of a right-click on a certain slide or selected slides group. Adding a new slide could be done at the end or "after" another slide with a right click. Logically, adding a slide or changing a slide's number would automatically change the number of the other slides. | UX/UI,Excalidraw+,E+/presentations | low | Minor |
2,622,303,706 | react | [React 19] Can the server support an AsyncGenerator function in useActionState? | I saw somewhere that the `useActionState` action function supports `yield`, appending each yielded value to the state array. Later I couldn't find that reference anymore.
Business scenario:
An action performs multiple third-party requests, and we need to promptly report the current status to the user.
I tried it today; the code runs, but it reports an error: the server only supports returning a Promise.
## Ideal code
```TypeScript
// actions.ts
'use server'
export async function* manyRequestsAction() {
try {
yield { index: 1, message: 'Step1', error: null }
await new Promise((resolve) => setTimeout(resolve, 1000))
yield { index: 2, message: 'Step2', error: null }
await new Promise((resolve) => setTimeout(resolve, 1000))
yield { index: 3, message: 'Step3', error: null }
await new Promise((resolve) => setTimeout(resolve, 1000))
yield { index: 4, message: 'Step4', error: null }
await new Promise((resolve) => setTimeout(resolve, 1000))
yield { index: 5, message: 'Step5', error: null }
await new Promise((resolve) => setTimeout(resolve, 1000))
yield { index: 6, message: 'Step6', error: null }
await new Promise((resolve) => setTimeout(resolve, 1000))
yield { index: 7, message: 'Step7', error: null }
} catch (e) {
console.error(e)
return { index: undefined, message: undefined, error: (e as Error).message }
}
}
```
```TypeScript
// page.tsx
'use client'
type PageProps = {}
const Page = ({}: PageProps) => {
const [currentStep, setCurrentStep] = useState<{
index: number | null | undefined
message: string | null | undefined
error: string | null | undefined
}>({ index: 0, message: null, error: null })
const [loading, setLoading] = useState<boolean>(true)
const [state, formAction] = useActionState(manyRequestsAction, undefined)
useEffect(() => {
const stepItr = async () => {
const next = await state?.next()
const value = next?.value
const done = next?.done
if (done) {
return
}
setCurrentStep({ index: value?.index, message: value?.message, error: value?.error })
console.log(value)
stepItr()
}
if (state) {
stepItr()
}
}, [state])
return (
<div>
<p>
Current Step Index: {currentStep?.index || 'No Index...'}
</p>
<p>
Current Step Message: {currentStep?.message || 'No Message...'}
</p>
<p>
Current Step Error: {currentStep?.error || 'No Error...'}
</p>
<button formAction={formAction}>Start</button>
</div>
)
}
export default Page
```
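Until generators are supported, a possible fallback (a minimal sketch under my own assumptions; the step count, delays, and `Step` shape are illustrative, and this is not a React API) is a plain Promise-based server action that collects every step server-side and returns them all at once, at the cost of losing the per-step streaming updates:

```typescript
// Hedged sketch: a plain async function (Promise-based, which the server
// does support) that runs all steps and returns the collected results once.
// Intermediate progress is NOT visible to the client while it runs.
type Step = { index: number; message: string; error: string | null };

async function manyRequestsFallback(): Promise<Step[]> {
  const steps: Step[] = [];
  for (let index = 1; index <= 3; index++) {
    // stand-in for a third-party request
    await new Promise((resolve) => setTimeout(resolve, 5));
    steps.push({ index, message: `Step${index}`, error: null });
  }
  return steps;
}
```

This makes the limitation concrete: the Promise model forces an all-or-nothing result, which is exactly why generator support would help here.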
| Resolution: Stale,React 19 | medium | Critical |
2,622,341,612 | svelte | Add support for "rotate" in `flip` | ### Describe the problem
I'm trying to create the "tinder-swipe" effect in my app, and I'm using `flip` to achieve the animation of moving the cards. Everything works really nicely except for the rotation, which it seems `flip` does not support:
```ts
export function flip(node, { from, to }, params = {}) {
var style = getComputedStyle(node);
var zoom = get_zoom(node); // https://drafts.csswg.org/css-viewport/#effective-zoom
var transform = style.transform === 'none' ? '' : style.transform;
var [ox, oy] = style.transformOrigin.split(' ').map(parseFloat);
var dsx = from.width / to.width;
var dsy = from.height / to.height;
var dx = (from.left + dsx * ox - (to.left + ox)) / zoom;
var dy = (from.top + dsy * oy - (to.top + oy)) / zoom;
var { delay = 0, duration = (d) => Math.sqrt(d) * 120, easing = cubicOut } = params;
return {
delay,
duration: typeof duration === 'function' ? duration(Math.sqrt(dx * dx + dy * dy)) : duration,
easing,
css: (t, u) => {
var x = u * dx;
var y = u * dy;
var sx = t + u * dsx;
var sy = t + u * dsy;
return `transform: ${transform} scale(${sx}, ${sy}) translate(${x}px, ${y}px);`; // <--- No rotation here
}
};
}
```
### Describe the proposed solution
Support "rotate" in the `flip` directive
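As a rough sketch of what that could involve (illustrative only, not Svelte source; it assumes a 2D `matrix(a, b, c, d, tx, ty)` transform string like `getComputedStyle` returns), the rotation angle could be read from the captured transforms and interpolated alongside the existing scale/translate:

```typescript
// Illustrative helpers a rotate-aware flip could use (not actual Svelte code).
// A 2D CSS "matrix(a, b, c, d, tx, ty)" encodes rotation as atan2(b, a).
function rotationOf(transform: string): number {
  const match = transform.match(/matrix\(([^)]+)\)/);
  if (!match) return 0; // "none" or unsupported: treat as unrotated
  const [a, b] = match[1].split(",").map(Number);
  return Math.atan2(b, a); // radians
}

// Interpolate between the captured "from" and "to" angles; the result would
// be appended to the css string, e.g. `rotate(${angle}rad)`.
function interpolateRotation(from: number, to: number, t: number): number {
  return to + (1 - t) * (from - to);
}
```

At `t = 0` this returns the `from` angle and at `t = 1` the `to` angle, mirroring how `flip` already interpolates `dx`/`dy`.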
### Importance
nice to have | transition/animation | low | Major |
2,622,347,672 | go | bufio: unrecognized failures | ```
#!watchflakes
default <- pkg == "bufio" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732750304004293793)):
FAIL bufio [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,348,953 | pytorch | torch.compile'ing individual linears for torchtitan debug model + FSDP2 leads to errors | ### 🐛 Describe the bug
When I change torchtitan's torch.compile logic to compile individual linear layers instead of transformer blocks, I see an error:
```
File "/home/vasiliy/.conda/envs/pt_nightly_20241006/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vasiliy/.conda/envs/pt_nightly_20241006/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn
outs = compiled_fn(args)
^^^^^^^^^^^^^^^^^
File "/home/vasiliy/.conda/envs/pt_nightly_20241006/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1611, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vasiliy/.conda/envs/pt_nightly_20241006/lib/python3.11/site-packages/torch/_inductor/utils.py", line 2006, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_vasiliy/p5/cp5mcd3xqsqs6vgrv7ph3kaajndwyhfdokm353uqsizihxemnapu.py", line 35, in call
assert_size_stride(primals_1, (256, 256), (256, 1))
AssertionError: expected size 768==256, stride 256==256 at dim=0
```
There is a full repro here: https://github.com/pytorch/torchtitan/pull/661 - you can check out that PR against torchtitan and run the test plan on a machine with at least 2 GPUs to see the error.
Note that this seems similar to https://github.com/pytorch/pytorch/issues/138715, but this repro is on FSDP2.
### Versions
Pytorch version 2.6.0.dev20241023+cu121
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @ezyang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | oncall: distributed,triaged,module: fsdp,oncall: pt2,module: dynamo,module: guards,pt2d-triage-nov2024 | low | Critical |
2,622,394,386 | godot | [TRACKER] Rendering issues at very long distances (numerical precision) | This is a tracker dedicated to issues experienced when rendering **at very long distances**, i.e. involving very large camera far distance, very large and very far away geometry, lights with very long range, and all kinds of very large numbers in general.
Such scenes are very common in **games set in space**, **open worlds**, **at-scale planets**, and similar applications.
Godot currently offers a rather low level of support for such applications due to the many places **numerical precision and overflow issues** arise.
Sometimes these issues can be worked around with the double precision build, but many also involve numerical issues in the shaders as well, which makes having double precision in the core engine of little help.
*Note: deep-scene rendering usually comes with other related requirements like floating-origin management, quadtree/octree scene partitioning, or resource streaming. These are broader concerns that go beyond rendering and should not be tracked here.*
Feel free to comment on this thread if you've identified or resolved issues not yet mentioned below!
## Issues classification
🌌: **happens at galactic scale (numbers > ~`1e+19`m)**. Most often related to overflows in length calculations and normalizations of 32-bit vectors.
🪐: **happens at planetary scale (numbers > ~`1e+6`m)**. Most often related to numerical precision issues with the near and far planes of 32-bit projection matrices.
🏞️: **happens at walkable distances (numbers > ~`1e+2`m)**. Not deep-scene issues strictly speaking, but still mentioned because they are likely related.
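A quick illustration of why these two thresholds show up (TypeScript's `Math.fround` emulates 32-bit float rounding; the exact numbers are illustrative, not taken from Godot's code):

```typescript
// At planetary scale, float32 spacing exceeds 1 m: around 1e8 the spacing
// between representable values is 8, so 1e8 + 1 rounds back to 1e8.
const planetary = Math.fround(1e8 + 1);

// At galactic scale, squared lengths (as in sqrt(x*x + y*y + z*z)) exceed
// float32's maximum of ~3.4e38 and overflow to Infinity.
const galactic = 2e19;
const lengthSq = Math.fround(galactic * galactic);
```

This is why double-precision engine builds alone don't fix everything: shaders typically still do this arithmetic in 32-bit floats.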
## Issues list
```[tasklist]
### Scene
- [ ] 🪐 Rendering fails when zfar > ~1e+6 times znear (#99986)
- [ ] 🪐 #55070 (#99986)
- [ ] 🪐 Camera far distance is limited to `1,000,000` in editor (#100896)
```
```[tasklist]
### Lighting
- [ ] 🌌 Very far away point lights get culled (#98641)
```
```[tasklist]
### Shadows
- [ ] 🪐 #81877
- [ ] 🪐 Dual paraboloid texture may have [NaN values](https://github.com/godotengine/godot/blob/7e99e939870fc149e42d2a76b764072f550f4b82/servers/rendering/renderer_rd/shaders/effects/cube_to_dp.glsl#L80-L83) for lights with very long range when [blit from cubemap](https://github.com/godotengine/godot/blob/7e99e939870fc149e42d2a76b764072f550f4b82/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp#L2621-L2623).
- [x] 🪐 #92551 (#100319)
- [x] 🏞️ #96361 (#100319)
```
```[tasklist]
### Shading
- [ ] 🌌 `VIEW` is `NaN` on very far away fragments
- [x] 🪐 #86275
```
```[tasklist]
### Geometry
- [x] 🌌 Very large built-in sphere primitives have wrong normals (#98610)
```
```[tasklist]
### Effects
- [ ] 🌌 #99967
- [ ] 🪐 #42390
- [ ] 🪐 Bokeh DOF blurs the whole screen when zfar > ~1e+6 times znear (#99755)
- [ ] 🪐 No SSS when zfar > ~1e+6 times znear (#99755)
- [ ] 🪐 No SSR when zfar > ~1e+6 times znear (#99693)
``` | discussion,topic:rendering,topic:3d | low | Minor |
2,622,419,551 | rust | ICE with next solver: `errors selecting obligation during MIR typeck: [Ambiguity]` |
### Code
```Rust
pub fn get_all_files_in_dir<'a>(
) -> core::pin::Pin<Box<dyn ::core::future::Future<Output = impl IntoIterator<Item = u32>> + 'a>> {
Box::pin(async move {
let x = Vec::new().into_iter();
get_all_files_in_dir().await;
x
})
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (3f1be1ec7 2024-10-28)
binary: rustc
commit-hash: 3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777
commit-date: 2024-10-28
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
### Error output
```
no error
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: errors selecting obligation during MIR typeck: [Ambiguity]
|
= note: delayed at /rustc/3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777/compiler/rustc_trait_selection/src/traits/query/type_op/custom.rs:95:18
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::delayed_bug::<alloc::string::String>
4: <rustc_borrowck::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>
5: <rustc_borrowck::type_check::TypeChecker>::typeck_mir
6: rustc_borrowck::type_check::type_check
7: rustc_borrowck::nll::compute_regions
8: rustc_borrowck::do_mir_borrowck
9: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
10: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
11: rustc_query_impl::query_impl::mir_borrowck::get_query_non_incr::__rust_end_short_backtrace
12: rustc_middle::query::plumbing::query_get_at::<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>>
13: <rustc_borrowck::type_check::TypeChecker>::prove_closure_bounds
14: <rustc_borrowck::type_check::TypeChecker>::typeck_mir
15: rustc_borrowck::type_check::type_check
16: rustc_borrowck::nll::compute_regions
17: rustc_borrowck::do_mir_borrowck
18: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
19: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
20: rustc_query_impl::query_impl::mir_borrowck::get_query_non_incr::__rust_end_short_backtrace
21: rustc_middle::query::plumbing::query_get_at::<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>>
22: rustc_hir_analysis::collect::type_of::type_of_opaque
23: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::type_of_opaque::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
24: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefIdCache<rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
25: rustc_query_impl::query_impl::type_of_opaque::get_query_non_incr::__rust_end_short_backtrace
26: rustc_middle::query::plumbing::query_get_at::<rustc_query_system::query::caches::DefIdCache<rustc_middle::query::erase::Erased<[u8; 8]>>>
27: rustc_hir_analysis::collect::type_of::type_of
28: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::type_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
29: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefIdCache<rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
30: rustc_query_impl::query_impl::type_of::get_query_non_incr::__rust_end_short_backtrace
31: rustc_middle::query::plumbing::query_get_at::<rustc_query_system::query::caches::DefIdCache<rustc_middle::query::erase::Erased<[u8; 8]>>>
32: rustc_hir_analysis::check::check::check_item_type
33: rustc_hir_analysis::check::wfcheck::check_well_formed
34: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
35: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
36: rustc_query_impl::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
37: rustc_middle::query::plumbing::query_ensure_error_guaranteed::<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 1]>>, ()>
38: rustc_hir_analysis::check::wfcheck::check_mod_type_wf
39: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
40: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefaultCache<rustc_span::def_id::LocalModDefId, rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
41: rustc_query_impl::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
42: rustc_hir_analysis::check_crate
43: rustc_interface::passes::run_required_analyses
44: rustc_interface::passes::analysis
45: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
46: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
47: rustc_query_impl::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
48: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
49: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
50: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
51: std::sys::pal::unix::thread::Thread::new::thread_start
52: <unknown>
53: <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/tmp/im/o/rustc-ice-2024-10-29T22_05_37-3264744.txt` to your bug report
note: compiler flags: -Z next-solver=globally --crate-type lib
query stack during panic:
end of query stack
```
</p>
</details>
| I-ICE,T-compiler,C-bug,S-bug-has-test,WG-trait-system-refactor | low | Critical |
2,622,430,575 | vscode | Give more control over syncing rapidly updating documents with extensions | From #230326
I'd like a way to better control extension synchronization for text models that are rapidly updated
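Concretely, the behavior I'm after is coalescing (purely illustrative sketch; none of these names are a real VS Code API): many rapid edits collapse into one synthetic change that carries only the final state, so extensions never observe the intermediate versions:

```typescript
// Illustrative only, not a real VS Code API. Rapid changes are recorded,
// and a consumer only ever sees the latest state when it flushes.
class ChangeCoalescer<T> {
  private latest: T | undefined;
  private dirty = false;

  record(change: T): void {
    this.latest = change; // intermediate states are intentionally dropped
    this.dirty = true;
  }

  flush(): T | undefined {
    if (!this.dirty) return undefined;
    this.dirty = false;
    return this.latest;
  }
}
```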
My original motivating use case is for chat code blocks while responses are streaming in. If we update these during streaming, extensions will see the document being updated many times a second. The document may be in an incomplete state during this and it may also cause the extension to do extra work that is immediately discarded by the next sync | feature-request,api,inline-chat | low | Minor |
2,622,433,644 | go | make.bash: unrecognized failures | ```
#!watchflakes
default <- pkg == "make.bash" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732713459009767409)):
Building Go cmd/dist using /opt/golang/swarm/.swarming/w/ir/cache/tools/go_bootstrap. (go1.22.6 solaris/amd64)
Building Go toolchain1 using /opt/golang/swarm/.swarming/w/ir/cache/tools/go_bootstrap.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for solaris/amd64.
HASH[moduleIndex]
HASH[moduleIndex]: "devel 0c934b5645c3220de21a5733c60c81e46d06d4e3"
HASH[moduleIndex]: "modroot /opt/golang/swarm/.swarming/w/ir/x/w/goroot/src/cmd\n"
HASH[moduleIndex]: "package devel 0c934b5645c3220de21a5733c60c81e46d06d4e3 X:nocoverageredesign,noaliastypeparams go index v2 /opt/golang/swarm/.swarming/w/ir/x/w/goroot/src/cmd/link\n"
...
HASH[moduleIndex]: "file testing_windows.go 2024-10-29 22:08:04.212868005 +0100 CET 1952\n"
HASH[moduleIndex]: "file testing_windows_test.go 2024-10-29 22:08:04.212940862 +0100 CET 490\n"
HASH[moduleIndex]: 0d196e378d2537bf0bcc59e6fb34e11d15a91f802ebccfd80fb53e7c50b766d0
cmd/link true
go tool dist: unexpected stale targets reported by /opt/golang/swarm/.swarming/w/ir/x/w/goroot/pkg/tool/solaris_amd64/go_bootstrap list -gcflags="" -ldflags="" for [cmd/asm cmd/cgo cmd/compile cmd/link cmd/preprofile] (consider rerunning with GOMAXPROCS=1 GODEBUG=gocachehash=1):
STALE cmd/asm: stale dependency: internal/goarch
STALE cmd/cgo: stale dependency: internal/goarch
STALE cmd/compile: stale dependency: internal/goarch
STALE cmd/link: stale dependency: internal/goarch
STALE cmd/preprofile: stale dependency: internal/goarch
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,622,471,820 | flutter | [web] Flutter web can't handle `0xFE0F` (Unicode Variation Selectors) | While testing some emoji rendering (using `package:emoji`) I noticed that in some frames we fire a "Could not find a set of Noto fonts to display all missing characters" warning:
<img width="1374" alt="Screenshot 2024-10-28 at 5 48 10 PM" src="https://github.com/user-attachments/assets/9f10c182-3030-46bd-962c-405764be05e3">
The problem is that some emoji contain a `0xFE0F Variation Selector`[^1] that Flutter web is not handling correctly:
<img width="1488" alt="Screenshot 2024-10-28 at 5 57 12 PM" src="https://github.com/user-attachments/assets/b6d86222-c9c1-49b3-a19d-e43468940e35">
The "keycap" emoji series, for example: #️⃣ (`0x0023, 0xFE0F, 0x20E3`) look completely broken because they contain `0xFE0F` in the middle of the sequence:
| Actual | Expected |
|--------|----------|
| <img width="183" alt="Screenshot 2024-10-29 at 4 32 03 PM" src="https://github.com/user-attachments/assets/88ad0c04-ccf0-4191-907d-3ba9c0365968"> | <img width="197" alt="Screenshot 2024-10-29 at 4 33 39 PM" src="https://github.com/user-attachments/assets/715434e6-9782-4245-9141-a6e63b34558b"> |
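A quick illustration of why the keycap breaks (TypeScript here just for demonstration, not Flutter code): the sequence is one grapheme made of three code points with `U+FE0F` in the middle, so any renderer that splits the run at the variation selector tears the cluster apart:

```typescript
// The keycap "#️⃣" is U+0023, U+FE0F, U+20E3: one grapheme, three code points.
const keycap = "\u{0023}\u{FE0F}\u{20E3}";
// Spreading a string iterates by code point, not by UTF-16 code unit.
const codePoints = [...keycap].map((c) => c.codePointAt(0));
// codePoints is [0x23, 0xfe0f, 0x20e3]
```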
### Impact
As of October 2024 (Emoji v16.0), there are ~220 emoji in the basic set[^2], and ~1000 in the ZWJ sequences[^3] that contain `0xFE0F` and are affected by this bug. Those that contain `0xFE0F` in the middle of their sequence will render as separate emoji instead of a single, combined one.
This is probably affecting other Unicode symbols, because the Variation Selectors are used across other parts of Unicode. From the wikipedia link:
> As of Unicode 13.0:
>
> * [CJK compatibility ideograph](https://en.wikipedia.org/wiki/CJK_Compatibility_Ideographs) variation sequences contain VS1–VS3 (U+FE00–U+FE02)
> * [CJK Unified Ideographs Extension A](https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_Extension_A) and [B](https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_Extension_B) variation sequences contain VS1 (U+FE00) and VS2 (U+FE01)
> * **Emoji variation sequences contain VS16 (U+FE0F) for emoji-style (with color) <sup><sub>(⬅ this bug!)</sub></sup>** or VS15 (U+FE0E) for text style (monochrome)
> * [Basic Latin](https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)), [Halfwidth and Fullwidth Forms](https://en.wikipedia.org/wiki/Halfwidth_and_fullwidth_forms), [Manichaean](https://en.wikipedia.org/wiki/Manichaean_(Unicode_block)), [Myanmar](https://en.wikipedia.org/wiki/Myanmar_(Unicode_block)), [Myanmar Extended-A](https://en.wikipedia.org/wiki/Myanmar_Extended-A), [Phags-pa](https://en.wikipedia.org/wiki/Phags-pa_(Unicode_block)), and mathematical variation sequences contain only VS1 (U+FE00)
> * [Egyptian Hieroglyphs](https://en.wikipedia.org/wiki/Egyptian_Hieroglyphs_(Unicode_block)#Standardized_variants) variation sequences VS1–VS4 and VS7 (U+FE00–FE03, and FE06) are used to rotate specific signs
> * VS5, VS6, and VS8–VS14 (U+FE04, FE05, and FE07–FE0D) are not used for any variation sequences
[^1]: https://en.wikipedia.org/wiki/Variation_Selectors_(Unicode_block)
[^2]: https://unicode.org/Public/emoji/latest/emoji-sequences.txt
[^3]: https://unicode.org/Public/emoji/latest/emoji-zwj-sequences.txt
### Reproduction
Add any emoji that contains `0xFE0F`, for example `keycap: \x{23}`:
```dart
Text(
String.fromCharCodes(<int>[0x0023, 0xFE0F, 0x20E3]), // keycap: \x{23}
),
```
Observe that the emoji looks broken, and there's the following warning on the JS console:
<details>
<summary>⚠️ Could not find a set of Noto fonts to display all missing characters. Please add a font asset for the missing characters. See: https://flutter.dev/docs/cookbook/design/fonts</summary>
```
...
findFontsForMissingCodePoints @ dart_sdk.js:162047
[_ensureFallbackFonts] @ dart_sdk.js:161989
...
addMissingCodePoints @ dart_sdk.js:161960
ensureFontsSupportText @ dart_sdk.js:161953
addText @ dart_sdk.js:159365
build @ placeholder_span.dart.js:1782
[_createParagraph] @ placeholder_span.dart.js:3362
...
```
</details>
---
<details>
<summary>Full repro <tt>main.dart</tt></summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
title: 'Flutter Demo',
home: EmojiRepro(),
);
}
}
class EmojiRepro extends StatelessWidget {
const EmojiRepro({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
// Causes a warning in the console, and emoji looks broken!
Text(
String.fromCharCodes(<int>[0x0023, 0xFE0F, 0x20E3]), // keycap: \x{23}
style: Theme.of(context).textTheme.displayLarge,
),
],
),
),
);
}
}
```
</details>
(Initially seen in #157763)
| engine,a: typography,platform-web,e: web_canvaskit,has reproducible steps,P2,e: web_skwasm,team-web,triaged-web | low | Critical |
2,622,473,158 | rust | Allow `Waker::from` to accept trait object `Arc<dyn Wake>` | I don't know if this issue is a duplicate.
Consider this [code snippet](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=11021135988246bbd948960403c9d7f4)
```rust
use std::{sync::Arc, task::{Wake, Waker}};
trait Foo: Send + Sync {}
impl Wake for dyn Foo {
fn wake(self: Arc<Self>) {
todo!()
}
}
struct Object;
impl Foo for Object {}
fn main() {
let foo = Arc::new(Object) as Arc<dyn Foo>;
Waker::from(foo);
}
```
Currently this code doesn't work, because `Waker::from` cannot be used with `Arc<dyn Wake>`:
https://github.com/rust-lang/rust/blob/1e4f10ba6476e48a42a79b9f846a2d9366525b9e/library/alloc/src/task.rs#L109
It would be ideal if `Waker::from` could support `W: ?Sized` objects by relaxing the `Sized` bound on `W`. | T-libs-api,C-feature-request | low | Critical |
2,622,498,020 | pytorch | TORCH_COMPILE_CPROFILE=1 doesn't work on python 3.12 | ### 🐛 Describe the bug
When I run
```
(cd ../torchrec && TORCH_COMPILE_CPROFILE=1 python torchrec/distributed/tests/test_pt2_multiprocess.py --num-features 100)
```
It throws
```
[rank0]: torch._dynamo.exc.InternalTorchDynamoError: ValueError: Another profiling tool is already active
```
But when I downgrade to Python 3.10 it all works:
```
[rank0]:W1029 15:59:51.617000 1691937 torch/_dynamo/convert_frame.py:405] [0/0] Generated SVG from profile at /tmp/_compile_inner_0_0.svg
```
### Versions
```
(pytorch) [16:12] devgpu006:/home/bobren python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git289eb60
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_zion_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 77%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+git289eb60
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0a0+496b1ac
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.0 pypi_0 pypi
[conda] mkl-static 2025.0.0 pypi_0 pypi
[conda] numpy 2.1.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+git289eb60 dev_0 <develop>
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 1.1.0a0+496b1ac pypi_0 pypi
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,622,533,676 | node | Buffer is not transferable in Node 21/22 | ### Version
v21.7.3, v22.11.0
### Platform
```text
Linux nweiz1 6.9.10-1rodete5-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.9.10-1rodete5 (2024-09-04) x86_64 GNU/Linux
```
### Subsystem
worker_threads
### What steps will reproduce the bug?
```js
import {MessageChannel, Worker} from 'worker_threads';
const channel = new MessageChannel();
const buffer = Buffer.from('some text');
channel.port1.postMessage(buffer, [buffer.buffer]);
channel.port2.on('message', (e) => {
console.log(Buffer.from(e).toString());
channel.port1.close();
channel.port2.close();
});
```
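A workaround sketch, under the assumption (mine, not confirmed) that the pooled backing store created by `Buffer.from` is what trips the transfer: copy the bytes into a standalone `ArrayBuffer` and transfer that instead of `buffer.buffer`. `structuredClone` with a transfer list exercises the same serializer as `postMessage`, so it is a quick way to check transferability without spinning up ports.

```javascript
// Copy the Buffer's bytes into a standalone (non-pooled) ArrayBuffer.
const buffer = Buffer.from('some text');
const ab = new ArrayBuffer(buffer.byteLength);
new Uint8Array(ab).set(buffer);

// Transfer the copy; the original `ab` is detached afterwards.
const clone = structuredClone(ab, { transfer: [ab] });
const roundTripped = Buffer.from(clone).toString();
console.log(roundTripped);
```

The copy obviously defeats the zero-copy point of transferring, so this is only a stopgap for code that must run on v21/v22.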
This worked in v20 and fails in v21 and v22.
### How often does it reproduce? Is there a required condition?
It always reproduces.
### What is the expected behavior? Why is that the expected behavior?
The buffer should be transferred through the message channel.
### What do you see instead?
```
node:internal/per_context/domexception:53
ErrorCaptureStackTrace(this);
^
DOMException [DataCloneError]: Cannot transfer object of unsupported type.
at new DOMException (node:internal/per_context/domexception:53:5)
at file:///.../test.mjs:5:15
at ModuleJob.run (node:internal/modules/esm/module_job:268:25)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:543:26)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)
Node.js v22.11.0
```
### Additional information
I didn't see anything in the v21 or v22 changelogs about buffers becoming untransferable, and transferring it worked in v20, so I'm assuming this is unintentional. If it's intentional, I would have expected at least a deprecation message indicating that this was going to start breaking. | buffer,web-standards | medium | Critical |
2,622,535,164 | material-ui | [material-ui] Difficult to find correct info on theming with Pigment CSS in Vite | ### Related page
https://mui.com/material-ui/experimental-api/pigment-css/#themes
### Kind of issue
Missing information
### Issue description
The docs are great, but the Theming section doesn't provide an example for Vite. I've looked around, including at the [examples](https://github.com/mui/material-ui/tree/master/examples/material-ui-pigment-css-vite-ts) and the quick start [video](https://www.youtube.com/watch?v=UVeDpUey5Es), but these resources either assume the project is using Next.js or contain code/information that conflicts with the [docs themselves](https://github.com/mui/pigment-css?tab=readme-ov-file#start-with-vite). The docs are also insufficient: the pigmentConfig they show doesn't actually work as written (e.g. `pigment()` expects 1 argument but got 0), and the [example further down](https://github.com/mui/pigment-css?tab=readme-ov-file#vite) doesn't supply enough code to actually run.
Anyway, I finally found the solution I needed buried in the [mui migrating to pigmentcss guide](https://mui.com/material-ui/migration/migrating-to-pigment-css/#configuring-the-theme). I didn't realize this was the correct implementation, though, because of all the conflicting information.
Hopefully this can be remedied either by linking to the migration guide or by refreshing the other documentation to match it.
Thanks in advance for any help!
### Context
I'm trying to upgrade my Vite projects to MUI v6 and pigment css. I want the Vite config to run properly and also give me access to my custom theme (or css variables if they are intended to replace the old way of theming, which I'm not 100% clear on).
**Search keywords**: vite, theming, cssvariables | docs,support: docs-feedback,package: pigment-css | low | Minor |
2,622,557,124 | ui | [bug]: Pagination - The requested module /src/components/ui/button.tsx does not provide an export named ButtonProps | ### Describe the bug
Pagination throws an error when attempting to import `ButtonProps` from `button.tsx` in Astro. The error occurs because `ButtonProps` is imported directly as:
```js
import { ButtonProps, buttonVariants } from "@/components/ui/button";
```
While this may work in Next.js or vanilla React (not tested), it throws an error in Astro with React.
To resolve this, update the import statement to:
```js
import type { ButtonProps } from "@/components/ui/button";
import { buttonVariants } from "@/components/ui/button";
```
### Affected component/components
Pagination
### How to reproduce
1. Create a new Astro app.
2. Add Shadcn to the Astro app.
3. Run `npx shadcn@latest add button`.
4. Run `npx shadcn@latest add pagination` (replacing the button is optional).
5. Import Pagination into a React component.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11, Node v23.1.0, Astro 4.16.7, React 18.3.1, Typescript 5.6.3
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,622,561,604 | deno | PEM Base64 Error while using mineflayer NPM Package | Version: Deno 2.0.3
Hello.
I am interested in using Deno to build a Minecraft player bot. I used the minelayer package and the Deno runtime. I wrote simple code that should log in to the player using Microsoft authentication.
I am using Deno running on aarch64 Linux.
I am also running mineflayer version 4.23.0.
I am currently attempting to run this code:
```
import mineflayer from "mineflayer";
console.log("Hello, world!");
const bot = mineflayer.createBot({
host: "[remote ip]",
port: 25565,
username: "[username]",
auth: "microsoft",
version: "1.20.4",
});
bot.on("login", () => {
bot.chat("Hello, world!");
});
```
After authenticating with a Microsoft account, I get this strange error:
```
error: Uncaught (in promise) Error: ASN.1 error: PEM error: PEM Base64 error: invalid Base64 encoding
at Object.publicEncrypt (ext:deno_node/internal/crypto/cipher.ts:221:10)
at sendEncryptionKeyResponse (file:///home/[username]/.cache/deno/npm/registry.npmjs.org/minecraft-protocol/1.50.0/src/client/encrypt.js:49:52)
at onJoinServerResponse (file:///home/[username]/.cache/deno/npm/registry.npmjs.org/minecraft-protocol/1.50.0/src/client/encrypt.js:36:11)
at file:///home/[username]/.cache/deno/npm/registry.npmjs.org/yggdrasil/1.7.0/src/utils.js:73:15
at Object.runMicrotasks (ext:core/01_core.js:672:26)
at processTicksAndRejections (ext:deno_node/_next_tick.ts:57:10)
at runNextTicks (ext:deno_node/_next_tick.ts:74:3)
at eventLoopTick (ext:core/01_core.js:182:21)
```
This code works as intended using the latest version of Node.js (23.0.0-1).
Steps to reproduce:
1. Install the mineflayer package: `deno add npm:mineflayer`
2. Write the above code.
3. Run the script and authenticate: `deno run main.ts`
4. Observe the error. | bug,crypto | low | Critical |
2,622,574,426 | pytorch | Please implement batching rule for torch.nn.functional.multi_margin_loss | ### 🚀 The feature, motivation and pitch
I'm working on a metric to measure the efficiency of a neural network. My approach requires computing per-example gradients using multi_margin_loss.
As with evaluating the test accuracy of a model, the more examples used, the better the estimate. I would like to measure the efficiency of many models using their entire validation sets, so a speed boost would save a lot of time.
### Alternatives
In fact, my original idea is really simple: the loss is just the output, negated at the label index:
```
m = torch.ones_like(output)
m[label] = -1
loss = output * m + 2
```
I've tried every way I know (indexing, scatter, one_hot, arange) to do this: `m[label] = -1`. However, I always run into problems with indexing a BatchedTensor.
If vmap applies a function over a batch of data, I think a per-example function shouldn't have to deal with a BatchedTensor in the first place. I'm new to all of this, so I don't know if I made a mistake or what is needed in this case: maybe the ability to index, or a batching rule for scatter?
I don't know which is the solution, but I feel like solving this would give more flexibility than implementing a batching rule for any other function.
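One vmap-friendly variant of the snippet above is to build the sign mask out-of-place, so every op involved (eq, dtype cast, arithmetic) has a batching rule. This is a sketch, not from a working per-sample-gradient setup; `per_example_loss` and the shapes are illustrative:

```python
import torch

def per_example_loss(output, label):
    # Out-of-place sign mask: +1 everywhere, -1 at the label index.
    # eq / to / arithmetic all have vmap batching rules, unlike the
    # in-place write `m[label] = -1`.
    idx = torch.arange(output.shape[-1])
    m = 1.0 - 2.0 * (idx == label).to(output.dtype)
    return output * m + 2

outputs = torch.randn(8, 5)          # batch of 8 examples, 5 classes
labels = torch.randint(0, 5, (8,))
losses = torch.vmap(per_example_loss)(outputs, labels)
```

The same function should then compose with `torch.func.grad` for per-example gradients, though this avoids rather than fixes the missing batching rules.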
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345 | triaged,actionable,module: vmap,module: functorch | low | Minor |
2,622,582,521 | pytorch | Tensor.index_reduce produces incorrect result | ### 🐛 Describe the bug
Code to reproduce:
```python
import torch
device = torch.device('cpu')
dtype = torch.bfloat16
n = 512
x = torch.tensor([0, 1], dtype=dtype, device=device).repeat(n // 2)
y = torch.zeros(n, device=device, dtype=torch.int32)
print("x[:10] =", x[:10])
print('index_reduce mean :', torch.zeros(1, dtype=dtype, device=device).index_reduce(0, y, x, 'mean').item())
print('mean :', x.mean().item())
```
Output:
```
x[:10] = tensor([0., 1., 0., 1., 0., 1., 0., 1., 0., 1.], dtype=torch.bfloat16)
index_reduce mean : 1.0
mean : 0.5
```
The correct mean of [0, 1, 0, 1, ..., 0, 1] is 0.5, so the result of `index_reduce` is wrong.
If `n` is 256, the output will be correct.
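As a stopgap, and assuming the discrepancy is precision-related (my guess, not a confirmed diagnosis), accumulating in float32 and casting back gives the expected value:

```python
import torch

n = 512
x = torch.tensor([0, 1], dtype=torch.bfloat16).repeat(n // 2)
y = torch.zeros(n, dtype=torch.int64)

# Accumulate the mean in float32, then cast back down. include_self=False
# keeps the zero-initialized destination element out of the mean.
acc = torch.zeros(1, dtype=torch.float32).index_reduce(
    0, y, x.float(), 'mean', include_self=False
)
result = acc.to(torch.bfloat16)
```

(The original repro uses the default `include_self=True`, so its "correct" value is actually 256/513, which still rounds to 0.5 in bfloat16, nowhere near the 1.0 it returns.)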
### Versions
```
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.0 | packaged by Anaconda, Inc. | (main, Oct 2 2023, 17:29:18) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-112-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 176
On-line CPU(s) list: 0-175
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468V
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 44
Socket(s): 2
Stepping: 8
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.1 MiB (88 instances)
L1i cache: 2.8 MiB (88 instances)
L2 cache: 176 MiB (88 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-87
NUMA node1 CPU(s): 88-175
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] triton==3.1.0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | high priority,triaged,module: advanced indexing | low | Critical |
2,622,602,495 | node | FTBFS, version 22.11, gcc 11.2 | On a *Slackware64-15.0* machine, attempting to compile NodeJS **22.11** gets a ways into the compile and then fails. Compiling previous stable releases, e.g. **20.18** and below, works just fine.
On Slackware 15 (the most recent "stable" release, albeit dated), we have `gcc` version **11.2.0**, and `glibc` version **2.33**.
(On a *Slackware64-current* machine, with `gcc` version **14.2.0** and `glibc` version **2.40**, the compile of NodeJS **22.11** works just fine. Alas, we can't use the binary compiled here on the older system due to missing glibc symbols.)
I am not a **c++**/*g++* expert, and am unsure how to interpret these specific compiler errors:
```
../deps/v8/src/compiler/wasm-compiler.cc: In lambda function:
../deps/v8/src/compiler/wasm-compiler.cc:8620:59: error: too many initializers for ‘v8::internal::wasm::WrapperCompilationInfo::<unnamed union>’
8620 | .import_info = {kind, expected_arity, suspend}},
| ^
../deps/v8/src/compiler/wasm-compiler.cc: In function ‘v8::internal::wasm::WasmCompilationResult v8::internal::compiler::CompileWasmImportCallWrapper(v8::internal::wasm::CompilationEnv*, v8::internal::wasm::ImportCallKind, const FunctionSig*, bool, int, v8::internal::wasm::Suspend)’:
../deps/v8/src/compiler/wasm-compiler.cc:8658:76: error: use of ‘v8::internal::compiler::CompileWasmImportCallWrapper(v8::internal::wasm::CompilationEnv*, v8::internal::wasm::ImportCallKind, const FunctionSig*, bool, int, v8::internal::wasm::Suspend)::<lambda()>’ before deduction of ‘auto’
8658 | auto result = v8_flags.turboshaft_wasm_wrappers ? compile_with_turboshaft()
| ~~~~~~~~~~~~~~~~~~~~~~~^~
../deps/v8/src/compiler/wasm-compiler.cc: In lambda function:
../deps/v8/src/compiler/wasm-compiler.cc:8782:63: error: too many initializers for ‘v8::internal::wasm::WrapperCompilationInfo::<unnamed union>’
8782 | .import_info = {kind, expected_arity, suspend}},
| ^
g++ -o /usr/local/tmp/slackbuild/nodejs/node-v22.11.0/out/Release/obj.target/v8_base_without_compiler/deps/v8/src/baseline/baseline.o ../deps/v8/src/baseline/baseline.cc '-D_GLIBCXX_USE_CXX11_ABI=1' '-DNODE_OPENSSL_CONF_NAME=nodejs_conf' '-DICU_NO_USER_DATA_OVERRIDE' '-DV8_GYP_BUILD' '-DV8_TYPED_ARRAY_MAX_SIZE_IN_HEAP=64' '-D__STDC_FORMAT_MACROS' '-DV8_TARGET_ARCH_X64' '-DV8_HAVE_TARGET_OS' '-DV8_TARGET_OS_LINUX' '-DV8_EMBEDDER_STRING="-node.21"' '-DENABLE_DISASSEMBLER' '-DV8_PROMISE_INTERNAL_FIELD_COUNT=1' '-DV8_ENABLE_PRIVATE_MAPPING_FORK_OPTIMIZATION' '-DV8_SHORT_BUILTIN_CALLS' '-DOBJECT_PRINT' '-DV8_INTL_SUPPORT' '-DV8_ATOMIC_OBJECT_FIELD_WRITES' '-DV8_ENABLE_LAZY_SOURCE_POSITIONS' '-DV8_USE_SIPHASH' '-DV8_SHARED_RO_HEAP' '-DNDEBUG' '-DV8_WIN64_UNWINDING_INFO' '-DV8_ENABLE_REGEXP_INTERPRETER_THREADED_DISPATCH' '-DV8_USE_ZLIB' '-DV8_ENABLE_SPARKPLUG' '-DV8_ENABLE_TURBOFAN' '-DV8_ENABLE_WEBASSEMBLY' '-DV8_ENABLE_JAVASCRIPT_PROMISE_HOOKS' '-DV8_ENABLE_CONTINUATION_PRESERVED_EMBEDDER_DATA' '-DV8_ALLOCATION_FOLDING' '-DV8_ALLOCATION_SITE_TRACKING' '-DV8_ADVANCED_BIGINT_ALGORITHMS' '-DICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_STATIC' '-DUCONFIG_NO_SERVICE=1' '-DU_ENABLE_DYLOAD=0' '-DU_STATIC_IMPLEMENTATION=1' '-DU_HAVE_STD_STRING=1' '-DUCONFIG_NO_BREAK_ITERATION=0' -I../deps/v8 -I../deps/v8/include -I/usr/local/tmp/slackbuild/nodejs/node-v22.11.0/out/Release/obj/gen/inspector-generated-output-root -I../deps/v8/third_party/inspector_protocol -I/usr/local/tmp/slackbuild/nodejs/node-v22.11.0/out/Release/obj/gen -I/usr/local/tmp/slackbuild/nodejs/node-v22.11.0/out/Release/obj/gen/generate-bytecode-output-root -I../deps/icu-small/source/i18n -I../deps/icu-small/source/common -I../deps/v8/third_party/zlib -I../deps/v8/third_party/zlib/google -I../deps/v8/third_party/abseil-cpp -I../deps/v8/third_party/fp16/src/include -pthread -Wno-unused-parameter -Wno-strict-overflow -Wno-return-type -Wno-int-in-bool-context -Wno-deprecated -Wno-stringop-overflow -Wno-stringop-overread 
-Wno-restrict -Wno-array-bounds -Wno-nonnull -Wno-dangling-pointer -flax-vector-conversions -m64 -m64 -O3 -fno-omit-frame-pointer -fdata-sections -ffunction-sections -O3 -fno-rtti -fno-exceptions -fno-strict-aliasing -std=gnu++20 -Wno-invalid-offsetof -MMD -MF /usr/local/tmp/slackbuild/nodejs/node-v22.11.0/out/Release/.deps//usr/local/tmp/slackbuild/nodejs/node-v22.11.0/out/Release/obj.target/v8_base_without_compiler/deps/v8/src/baseline/baseline.o.d.raw -g -O2 -fPIC -march=opteron -c
```
| build | low | Critical |
2,622,631,415 | three.js | Demo of reverse depth buffer | ### Description
There are no demos that exercise Cory's new reversed Z functionality.
### Solution
One idea is forking or enhancing the existing [Logarithmic Depth Buffer](https://threejs.org/examples/#webgl_camera_logarithmicdepthbuffer) demo.
### Alternatives
The logarithmic depth buffer demo is super cool but it might be more useful to have a demo that exercises reversed Z in combination with other features, like shadow mapping, SSAO, and raycaster picking. This would show that all the pieces still play well together.
### Additional context
https://github.com/mrdoob/three.js/pull/29445
https://github.com/mrdoob/three.js/pull/29579 | Enhancement | low | Minor |
2,622,656,848 | pytorch | ncclInternalError: Internal check failed | ### 🐛 Describe the bug
I hit an error when using torchrun for 4-GPU training with the 'nccl' backend (it runs perfectly when I use 'gloo'). The environment is Python 3.9 + PyTorch 2.3.0 + CUDA 12.1. We tried to use uftrace to capture the DLRM code on 4 GPUs launched by torchrun; the command is as follows:
**torchrun --nproc_per_node=4 ./multi-uftrace.py**
The multi-uftrace.py file content is as follows:
```
import subprocess
try:
result = subprocess.run([
'/mnt/yuanningbai/local/uftrace/bin/uftrace','-e','record',
'/mnt/yuanningbai/dlrm/dlrm_s_pytorch.py', '--mini-batch-size=4','--test-mini-batch-size=16384','--test-num-workers=0',
'--num-batches=1','--data-generation=random','--arch-mlp-bot=512-512-64','--arch-sparse-feature-size=64','--arch-embedding-size=1000000-1000000-1000000-1000000-1000000-1000000-1000000-1000000','--num-indices-per-lookup=100',
'--arch-interaction-op=dot','--print-freq=1','--print-time','--use-gpu','--inference-only','--dist-backend=nccl'],
check=True,capture_output=True, text=True)#
except subprocess.CalledProcessError as e:
print("error code :", e.returncode)
print("error info :", e.output)
```
## The error message is as follows:
```
W1029 16:52:13.175227 140626680026112 torch/distributed/run.py:757]
W1029 16:52:13.175227 140626680026112 torch/distributed/run.py:757] *****************************************
W1029 16:52:13.175227 140626680026112 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1029 16:52:13.175227 140626680026112 torch/distributed/run.py:757] *****************************************
error code : 1
error info : world size: 4, current rank: 1, local rank: 1
error code : 1
error info : world size: 4, current rank: 3, local rank: 3
error code : 1
error info : Running on 4 ranks using nccl backend
fail to enable all_to_all_single primitive: NCCL error in: /mnt/yuanningbai/pytorch-2.3.0/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1970, internal error - please report this issue to the NCCL developers, NCCL version 2.20.5
ncclInternalError: Internal check failed.
Last error:
Error : ring 8 does not loop back to start (1 != 0)
world size: 4, current rank: 0, local rank: 0
Using 1 GPU(s)...
-*-*-*-*-*-*nn.EmbeddingBag-*-*-*-*-*-*
-*-*-*-*-*-*nn.EmbeddingBag-*-*-*-*-*-*
error code : 1
error info : world size: 4, current rank: 2, local rank: 2
```
In order to capture PyTorch's underlying function calls, we compiled PyTorch as the pg build. The above error occurs with 4 GPUs but not with 2 GPUs. Compiling the develop build instead runs correctly. Is there any way to prevent such errors when running the pg build on 4 GPUs?
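The ring error ("ring 8 does not loop back to start") comes out of NCCL's topology/graph search, so NCCL's own logs usually show how the failing ring was constructed. These are the standard debug knobs; this is a diagnostic suggestion, not a confirmed fix:

```shell
# Assumption: standard NCCL debug environment variables; rerun the
# failing command with these set to see how rings are being built.
export NCCL_DEBUG=INFO              # log topology and ring construction
export NCCL_DEBUG_SUBSYS=INIT,GRAPH # focus on init / graph-search output
# then rerun, e.g.:
# torchrun --nproc_per_node=4 ./multi-uftrace.py
```

Comparing the INIT/GRAPH output between the pg and develop builds should show where the two topology searches diverge.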
### Versions
GPU: 4 x A100 80G GPU
Driver Version :530.30.02
CUDA Version : 12.1
OS version :Ubuntu 22.04
python :3.9
pytorch :v2.3.0
nccl: v2.20.5
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,module: nccl | low | Critical |
2,622,660,049 | pytorch | export onnx error with sfft | ### 🐛 Describe the bug
Code to reproduce:
``` python
import torch
from torch import nn
import torchaudio
DNN_DATA_TYPE = torch.float32
class DataCov(nn.Module):
def __init__(self):
super(DataCov, self).__init__()
self.transform = nn.Sequential(
torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
)
def forward(self, x1):
return self.transform(x1)
def load_data_cov():
module = DataCov().to(DNN_DATA_TYPE).to('cpu')
module.eval()
return module
def export_data_cov(path='data_cov.onnx', batch_size=1):
data_cov = load_data_cov()
x = torch.randn((batch_size, 1, 12 * 48000), dtype=DNN_DATA_TYPE, device='cpu')
y = data_cov(x)
input_names = ["x"]
output_names = ["output"]
output_path = path
torch.onnx.export(
data_cov,
x,
output_path,
export_params=True,
verbose=True,
training=torch.onnx.TrainingMode.EVAL,
input_names=input_names,
output_names=output_names
)
export_data_cov()
```
Running the export fails with:
```
torch.onnx.errors.SymbolicValueError: STFT does not currently support complex types [Caused by the value '73 defined in (%73 : Float(*, *, strides=[577536, 1], requires_grad=0, device=cpu) = onnx::Reshape[allowzero=0](%63, %72), scope: module.DataCov::/torch.nn.modules.container.Sequential::transform/torchaudio.transforms._transforms.MelSpectrogram::transform.0/torchaudio.transforms._transforms.Spectrogram::spectrogram # C:\Users\dell\miniconda3\envs\ai_mos_export_onnx\lib\site-packages\torch\functional.py:703:0
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Reshape'.]
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 专业版 (10.0.19045 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: N/A
Python version: 3.10.15 | packaged by Anaconda, Inc. | (main, Oct 3 2024, 07:22:19) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i5-12400
Manufacturer: GenuineIntel
Family: 205
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2500
MaxClockSpeed: 2500
L2CacheSize: 7680
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.5.1+cpu pypi_0 pypi
[conda] torchaudio 2.5.1+cpu pypi_0 pypi
[conda] torchvision 0.20.1+cpu pypi_0 pypi
| module: onnx,triaged | low | Critical |
2,622,686,734 | tensorflow | [TF-2.18] Protoc-related Segmentation Fault on GH200 when Building from Source | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
TF 2.18
### Custom code
No
### OS platform and distribution
Rocky Linux 9.3
### Mobile device
_No response_
### Python version
3.9
### Bazel version
6.5.0
### GCC/compiler version
17.0.6 (LLVM)
### CUDA/cuDNN version
CUDA-12.4.1/cuDNN-8.9.6
### GPU model and memory
Grace Hopper GH200
### Current behavior?
Since there is no pip package for GH200, and Amazon only provides a CPU wheel for `aarch64`, we are building `tf-2.18` from source.
```
$ git branch
master
* r2.18
```
Both LLVM and Bazel versions are according to recommendation
```
$ clang --version
clang version 17.0.6
Target: aarch64-unknown-linux-gnu
Thread model: posix
```
```
$ bazel --version
bazel 6.5.0- (@non-git)
```
The Bazel build fails with a protoc-related segmentation fault.
### Standalone code to reproduce the issue
```shell
./configure
Please specify the location of python. [Default is /scratch/optpar01/.conda/tf2_src/bin/python3]:
Found possible Python library paths:
/scratch/optpar01/.conda/tf2_src/lib/python3.9/site-packages
Please input the desired Python library path to use. Default is [/scratch/optpar01/.conda/tf2_src/lib/python3.9/site-packages]
Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the hermetic CUDA version you want to use or leave empty to use the default version. 12.4.1
Please specify the hermetic cuDNN version you want to use or leave empty to use the default version. 8.9.6
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as "x.y" or "compute_xy" to include both virtual and binary GPU code, or as "sm_xy" to only include the binary code.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]: 7.0,8.0,9.0
Please specify the local CUDA path you want to use or leave empty to use the default version.
Please specify the local CUDNN path you want to use or leave empty to use the default version.
Please specify the local NCCL path you want to use or leave empty to use the default version.
Do you want to use clang as CUDA compiler? [Y/n]:
Clang will be used as CUDA compiler.
Please specify clang path that to be used as host compiler. [Default is /scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/llvm-17.0.6-52ysnwfyyhc6tchulc5wlen7h6yc3fju/bin/clang]:
You have Clang 17.0.6 installed.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: --copt=-Wno-error=unused-command-line-argument
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
```
### Relevant log output
```shell
$ bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda --config=cuda_wheel --verbose_failures
ERROR: /scratch/optpar01/.cache/_bazel_optpar01/1b4d76cef27e0cbebd5791dfb99ed3a8/external/local_tsl/tsl/profiler/protobuf/BUILD:36:17: Action external/local_tsl/tsl/profiler/protobuf/profiler_service.grpc.pb.h failed: (Segmentation fault): protoc failed: error executing command (from target @local_tsl//tsl/profiler/protobuf:_profiler_service_cc_grpc_proto_grpc_codegen)
(cd /scratch/optpar01/.cache/_bazel_optpar01/1b4d76cef27e0cbebd5791dfb99ed3a8/execroot/org_tensorflow && \
exec env - \
CLANG_CUDA_COMPILER_PATH=/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/llvm-17.0.6-52ysnwfyyhc6tchulc5wlen7h6yc3fju/bin/clang-17 \
LD_LIBRARY_PATH=/apps/ARM_node/ARM_applications/python/3.12.4/lib:/apps/ARM_node/ARM_applications/python/3.12.4/lib/python3.12:/usr/lib64:/usr/lib64/slurm:/usr/local/lib \
PATH=/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/bazel-6.5.0-es6l5xjprjwauy22iuwlxoj4uztdcxsu/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/zip-3.0-pxcrfxbv4w274tczkoz7t4a54rv3r5de/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/clang-17.0.6/openjdk-11.0.23_9-s6cgcclfuzjttl5cdh3evsp3rrmuuvo6/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/bazel-6.5.0-es6l5xjprjwauy22iuwlxoj4uztdcxsu/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/zip-3.0-pxcrfxbv4w274tczkoz7t4a54rv3r5de/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/bzip2-1.0.8-jvd2b77w6emrwhj2g7mquhr7oteeb2uk/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/clang-17.0.6/openjdk-11.0.23_9-s6cgcclfuzjttl5cdh3evsp3rrmuuvo6/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/llvm-17.0.6-52ysnwfyyhc6tchulc5wlen7h6yc3fju/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/unzip-6.0-fpkt734eot7dakwydys5mgo5yomxwac3/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/curl-8.7.1-ca7uqkpgoorf7yzvajfyab2dfty7eitr/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/llvm-17.0.6-52ysnwfyyhc6tchulc5wlen7h6yc3fju/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/swig-4.1.1-gpwxntk2voqulme22h55ngp5ztgzyg6x/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/pcre2-10.43-bzcntkq3ul5umao7fcke5w4vrlyssbk7/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/lua-5.3.6-lec5qldurljnfjg4jo2fvsekfughzwru/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/unzip-6.0-fpkt734eot7dakwydys5mgo5yomxwac3/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/readline-8.2-47up4u4jjwf4io45zeioeyrh3wwwg4tg/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/curl-8.7.1-ca7uqkpgo
orf7yzvajfyab2dfty7eitr/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/openssl-3.3.1-7uxchckf5qrcojfgntn77hvtoxjjkrq6/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/nghttp2-1.62.0-x5y5gbq2qybl4iiqvjv7bflc2nlx35zu/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/hwloc-2.9.3-onlxonz6ibms2b6fxhk7wqcdpo5l5myn/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/ncurses-6.5-fvbf5zfdj2bcwylse6cv6mztrwadhgyv/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-13.3.0/binutils-2.43.1-t3kxibwizkzbb7e3boxm3qkpngjuzzns/bin:/scratch/optpar01/spack/opt/spack/linux-rocky9-neoverse_v2/gcc-11.4.1/zstd-1.5.6-miqeogpdx7z7lcovzh3bsfwr2tzihxw2/bin:/scratch/optpar01/.conda/tf2_src/bin:/apps/ARM_node/ARM_applications/Miniconda/24.5.0/condabin:/apps/ARM_node/ARM_applications/python/3.12.4/bin:/home01/optpar01/.cargo/bin:/scratch/optpar01/spack/bin:/home01/optpar01/apps/build/gv/3.7.4/bin:/apps/applications/htop/3.0.5:/apps/applications/nvtop/1.1.0/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home01/optpar01/bin \
PYTHON_BIN_PATH=/scratch/optpar01/.conda/tf2_src/bin/python3 \
PYTHON_LIB_PATH=/scratch/optpar01/.conda/tf2_src/lib/python3.9/site-packages \
SPACK_LOADED_HASHES=es6l5xjprjwauy22iuwlxoj4uztdcxsu:52ysnwfyyhc6tchulc5wlen7h6yc3fju \
SPACK_PYTHON=/usr/bin/python3 \
SPACK_ROOT=/scratch/optpar01/spack \
TF2_BEHAVIOR=1 \
bazel-out/aarch64-opt-exec-50AE0418/bin/external/com_google_protobuf/protoc '--plugin=protoc-gen-PLUGIN=bazel-out/aarch64-opt-exec-50AE0418/bin/external/com_github_grpc_grpc/src/compiler/grpc_cpp_plugin' '--PLUGIN_out=services_namespace=grpc,generate_mock_code=true:bazel-out/aarch64-opt/bin/external/local_tsl' '--proto_path=external/local_tsl' '--proto_path=external/local_tsl' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/any_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/api_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/source_context_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/type_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/compiler_plugin_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/descriptor_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/duration_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/empty_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/field_mask_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/struct_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/timestamp_proto' '--proto_path=bazel-out/aarch64-opt/bin/external/com_google_protobuf/_virtual_imports/wrappers_proto' '--proto_path=external/local_tsl' '--proto_path=bazel-out/aarch64-opt/bin/external/local_tsl/external/local_tsl' external/local_tsl/tsl/profiler/protobuf/profiler_service.proto)
```
| stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,TF 2.18 | low | Critical |
2,622,704,028 | PowerToys | Prevent White Flash When Opening Windows in Dark Mode | ### Description of the new feature / enhancement
Would it be possible for PowerToys to eliminate the white flash when opening a window in Dark Mode? For example, when launching Chrome, there’s always a brief white flash before the content appears. Perhaps PowerToys could replace this flash with a black or dark splash screen instead.
Alternatively, the feature could delay displaying a window until it’s fully rendered, similar to how macOS manages window loading. This enhancement would provide a smoother, fully dark-mode experience.
Here is a Reddit post about it, with video:
https://www.reddit.com/r/Windows11/comments/12xbtrb/screen_flashes_white_when_opening_a_window_on/
### Scenario when this would be used?
Everywhere.
### Supporting information
_No response_ | Idea-New PowerToy,Product-Tweak UI Design | low | Major |
2,622,731,593 | terminal | Ctrl or shift click the new tab button does not use the parent directory as working directory | ### Windows Terminal version
1.21.2911.0
### Windows build number
10.0.19045.5011
### Other Software
_No response_
### Steps to reproduce
1. Set CMD as the default profile (or PowerShell; the steps below assume CMD).
2. Set your CMD profile's "Starting directory" to "Use parent process directory".
3. Open a new Windows Terminal window at a non-default directory, say `C:\files\`.
4. In this window, click the drop-down button (V) next to the new-tab button (+), and Shift- or Ctrl-click the CMD option to create a new window or a new administrator window.
5. Observe that the new window correctly uses `C:\files\` as its current working directory, as expected.
6. Back in the original window, Shift- or Ctrl-click the new-tab button (+) directly.
### Expected Behavior
It should create a new window or a new administrator window with the default profile (CMD), using `C:\files\` as the starting directory, just like steps 4-5.
### Actual Behavior
It creates a window using `C:\WINDOWS\system32` as the working directory, not the parent process directory. | Issue-Bug,Product-Terminal,Needs-Tag-Fix | low | Minor |
2,622,752,681 | three.js | GLTFLoader: Conflicting mesh/primitive/geometry mappings | ### Description
In glTF's data model, we have:
```yaml
- node: GLTF.Node
  - mesh: GLTF.Mesh
    - prim: GLTF.MeshPrimitive
      - attribute: Record<string, GLTF.Accessor>
      - material: GLTF.Material
    - prim: GLTF.MeshPrimitive
      - attribute: Record<string, GLTF.Accessor>
      - material: GLTF.Material
...
```
Note that there is no distinct concept of a "geometry" here. Instead, we look for attributes (collections of named accessors) that happen to contain the same accessors, and cache them...
https://github.com/mrdoob/three.js/blob/09c38ab406fc42c8207559df983fb25766b591f6/examples/jsm/loaders/GLTFLoader.js#L2450-L2456
... so that if other primitives use the same attributes, they refer to the same BufferGeometry and we avoid a duplicate upload. If any attributes differ, the whole BufferGeometry must be duplicated (see #17089).
If (like the example above) there are multiple primitives in the mesh, we get this in three.js...
```yaml
- node: THREE.Object3D
  - mesh: THREE.Group
    - prim: THREE.Mesh<BufferGeometry, Material>
    - prim: THREE.Mesh<BufferGeometry, Material>
```
... and if there were only one primitive in the mesh, we'd drop the THREE.Group and try to "merge" the mesh and primitive concepts, which inherently could lose names or .userData.
***
I noticed today that:
1. glTF mesh primitives may have .extras/userData
2. GLTFLoader assigns a primitive's .extras/userData to a BufferGeometry
3. If the geometry is cached, a primitive may get geometry with the wrong .extras/userData
The userData caching issue isn't urgent; I'm not aware that it's affecting users.
But relatedly (reported in #29753) if a glTF mesh has only one primitive, then GLTFLoader will collapse the primitive and the mesh into one THREE.Mesh object, and the mesh name appears nowhere in the resulting scene.
We could fix the .userData issue just by including .extras/userData in the cache key. This may duplicate geometry and raise VRAM cost in rare cases.
To fix that *and* the missing mesh name issue, we would probably want to avoid 'flattening' the scene graph: when a mesh has only one primitive, still return a "Group>Mesh", not just a "Mesh", corresponding to the glTF "Mesh>Prim" pair. Then assign the primitive's .extras/userData to the Mesh, not the BufferGeometry. Arguably this makes more sense than assigning .extras/userData to the Geometry, because a glTF primitive has a material and is uniquely mappable to a three.js Mesh, whereas we want to aggressively cache geometries for performance.
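As a rough illustration of the cache-key idea (a minimal sketch in Python rather than GLTFLoader's actual JavaScript; the `geometry_cache_key` helper is hypothetical, not part of three.js), including a primitive's serialized `extras` in the key means two primitives with identical accessors but different extras no longer share one cached geometry:

```python
import json

def geometry_cache_key(primitive: dict) -> str:
    """Build a cache key from a glTF primitive's attribute accessor indices,
    plus its extras, so primitives with differing extras get distinct geometries."""
    # Sort attribute names so the key is independent of dict ordering.
    parts = [f"{name}:{accessor}"
             for name, accessor in sorted(primitive.get("attributes", {}).items())]
    if "indices" in primitive:
        parts.append(f"indices:{primitive['indices']}")
    # Serialize extras deterministically and fold them into the key.
    parts.append("extras:" + json.dumps(primitive.get("extras", {}), sort_keys=True))
    return ";".join(parts)

prim_a = {"attributes": {"POSITION": 0, "COLOR_0": 1}, "extras": {"data": "PrimA"}}
prim_b = {"attributes": {"POSITION": 0, "COLOR_0": 1}, "extras": {"data": "PrimB"}}
print(geometry_cache_key(prim_a) == geometry_cache_key(prim_b))  # → False
```

With the extras left out of the key, these two primitives would hash identically, which is exactly how the second primitive ends up with the first one's userData.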
### Reproduction steps
1. Load `prim_extras_test.gltf` (attached)
[prim_extras_test.zip](https://github.com/user-attachments/files/17565401/prim_extras_test.zip)
2. Observe that .extras in the glTF file are unique per primitive
```jsonc
"meshes": [
  {
    "name": "MeshA",
    "primitives": [
      {
        "attributes": {
          "POSITION": 0,
          "COLOR_0": 1
        },
        "mode": 0,
        "extras": { "data": "PrimA" }
      }
    ]
  },
  {
    "name": "MeshB",
    "primitives": [
      {
        "attributes": {
          "POSITION": 0,
          "COLOR_0": 1
        },
        "mode": 0,
        "extras": { "data": "PrimB" }
      }
    ]
  }
],
```
3. Observe that geometry in the resulting scene is reused for both meshes, so the second .userData goes missing, and that the mesh names occur nowhere in the scene graph (only the parent node's name is found).
```jsonc
mesh.name: NodeA
mesh.userData: {"name":"NodeA"}
mesh.geometry.userData: {"data":"PrimA"}
mesh.name: NodeB
mesh.userData: {"name":"NodeB"}
mesh.geometry.userData: {"data":"PrimA"}
```
The mesh's name is lost because we've flattened the scene graph slightly: if a mesh has more than one primitive, the mesh corresponds to a Group, if the mesh has only one primitive, we skip the Group. I think this might be too complex.
### Code
The model used to test this issue was generated with the glTF Transform script below.
<details>
<summary>script.js</summary>
```javascript
import { NodeIO, Document, Primitive } from '@gltf-transform/core';
const document = new Document();
const buffer = document.createBuffer();
const primA = createPointsPrim(document, buffer).setExtras({ data: 'PrimA' });
const primB = primA.clone().setExtras({ data: 'PrimB' });
const meshA = document.createMesh('MeshA').addPrimitive(primA);
const meshB = document.createMesh('MeshB').addPrimitive(primB);
const nodeA = document.createNode('NodeA').setMesh(meshA).setTranslation([0, 0, 0]);
const nodeB = document.createNode('NodeB').setMesh(meshB).setTranslation([0, 0, 1]);
const scene = document.createScene().addChild(nodeA).addChild(nodeB);
document.getRoot().setDefaultScene(scene);
const io = new NodeIO();
await io.write('./prim_extras_test.gltf', document);
function createPointsPrim(document, buffer) {
  const position = document
    .createAccessor()
    .setType('VEC3')
    .setBuffer(buffer)
    .setArray(
      // prettier-ignore
      new Float32Array([
        0, 0, 0, // ax,ay,az
        0, 0, 1, // bx,by,bz
        0, 1, 0, // ...
        1, 0, 0,
      ]),
    );

  const color = document
    .createAccessor()
    .setType('VEC4')
    .setBuffer(buffer)
    .setNormalized(true)
    .setArray(
      // prettier-ignore
      new Uint8Array([
        0, 0, 0, 255,
        0, 0, 255, 255,
        0, 255, 0, 255,
        255, 0, 0, 255,
      ]),
    );

  return document
    .createPrimitive()
    .setMode(Primitive.Mode.POINTS)
    .setAttribute('POSITION', position)
    .setAttribute('COLOR_0', color);
}
```
</details>
### Live example
Open the model attached above in https://threejs.org/editor/.
### Screenshots
_No response_
### Version
r168
### Device
Desktop, Mobile, Headset
### Browser
Chrome, Firefox, Safari, Edge
### OS
Windows, MacOS, Linux, ChromeOS, Android, iOS | Loaders | low | Major |
2,622,788,785 | pytorch | Ban relative imports in test/ | ### 🐛 Describe the bug
If you want to make some library code for tests to share, put it in torch/
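As a rough illustration of how such a ban could be enforced mechanically (a hypothetical lint sketch, not existing PyTorch tooling), relative imports are detectable from the AST: `ast.ImportFrom` nodes with `level > 0` correspond to leading-dot imports.

```python
import ast

def find_relative_imports(source: str):
    """Return (lineno, dotted-name) pairs for every relative import in the source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        # ImportFrom.level counts the leading dots; 0 means an absolute import.
        if isinstance(node, ast.ImportFrom) and node.level > 0:
            hits.append((node.lineno, "." * node.level + (node.module or "")))
    return hits

sample = "from .common import helper\nimport torch\nfrom torch.testing import assert_close\n"
print(find_relative_imports(sample))  # → [(1, '.common')]
```

A CI check could run this over every file under `test/` and fail when the list is non-empty.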
### Versions
main
cc @mruberry @ZainRizvi | module: tests,triaged,better-engineering,module: testing | low | Critical |
2,622,809,396 | pytorch | Error compiling the torch.library.custom_op with input mutations with set_ | ### 🐛 Describe the bug
When we try to wrap a simple `set_` call in a custom_op, we find that it cannot be compiled.
The following simple case failed with error:
```
import torch
from torch import nn


@torch.library.custom_op("mylib::set_data", mutates_args=["param"])
def set_data(param: torch.Tensor, new_data: torch.Tensor) -> None:
    param.set_(new_data)  # or param.data = new_data.data


@torch.library.register_fake("mylib::set_data")
def set_data_fake(param: torch.Tensor, new_data: torch.Tensor) -> None:
    param.set_(new_data)
    return None


class SomeModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.param = nn.Parameter(torch.empty(0))

    def forward(self, x):
        x = x * 3
        y = self.param
        set_data(y, x)
        return y / 3


module = SomeModule()
module = torch.compile(module)
x = torch.randn(3, 3)
with torch.no_grad():
    y = module(x)
print(y)
```
Executing the above program raises the following exception:
```
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1182, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 612, in _functionalized_f_helper
inpt_old.copy_(inpt_new)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1230, in __torch_function__
return func(*args, **kwargs)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py", line 535, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_meta_registrations.py", line 397, in meta_copy_
aten.expand_copy.default(intermediate, self.size())
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2214, in _fn
result = fn(*args, out=out, **kwargs)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
result = fn(*args, **kwargs)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_ops.py", line 1116, in __call__
return self._op(*args, **(kwargs or {}))
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2970, in expand
torch._check(
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/__init__.py", line 1565, in _check
_check_with(RuntimeError, cond, message)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/__init__.py", line 1547, in _check_with
raise error_type(message_evaluated)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: expand: the requested shape has too few dimensions!
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
However, if I change "self.param = nn.Parameter(torch.empty(0))" to "self.param = nn.Parameter(torch.empty(3, 3))", it works (but this is not what we need).
Some other variations show different problems. The following example shows the wrong result:
```
import torch
from torch import nn


@torch.library.custom_op("mylib::set_data", mutates_args=["param"])
def set_data(param: torch.Tensor, new_data: torch.Tensor) -> None:
    param.set_(new_data)


@torch.library.register_fake("mylib::set_data")
def set_data_fake(param: torch.Tensor, new_data: torch.Tensor) -> None:
    param.set_(new_data)
    return None


class SomeModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.empty(0)

    def forward(self, x):
        x = x * 3
        # using a tensor and reassigning makes it pass the compile but shows the wrong result.
        self.param = torch.empty(0)
        y = self.param
        set_data(y, x)
        return y / 3


module = SomeModule()
module = torch.compile(module)
x = torch.randn(3, 3)
with torch.no_grad():
    y = module(x)
print(y)
```
Executing the above program shows the wrong result:
`tensor([])`
The only version that works is the following (calling `set_` on a local `torch.empty(0)` tensor):
```
import torch
from torch import nn


@torch.library.custom_op("mylib::set_data", mutates_args=["param"])
def set_data(param: torch.Tensor, new_data: torch.Tensor) -> None:
    param.set_(new_data)


@torch.library.register_fake("mylib::set_data")
def set_data_fake(param: torch.Tensor, new_data: torch.Tensor) -> None:
    param.set_(new_data)
    return None


class SomeModule(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = x * 3
        y = torch.empty(0)
        set_data(y, x)
        return y / 3


module = SomeModule()
module = torch.compile(module)
x = torch.randn(3, 3)
with torch.no_grad():
    y = module(x)
print(y)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20240914+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 4190.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.6.0.dev20240914+cpu
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,622,870,179 | flutter | Turn off NSAsserts in release mode on iOS and macOS | In a typical iOS and macOS project `NSAssert`s are off in release mode. Here's the `ENABLE_NS_ASSERTIONS` build setting in a newly created project:

> Controls whether assertion logic provided by NSAssert is included in the preprocessed source code or is elided during preprocessing. Disabling assertions can improve code performance.
https://github.com/flutter/buildroot/pull/860 attempted to turn off asserts in release mode, however there were some [analyzer errors](https://github.com/flutter/engine/pull/53005#issuecomment-2140975126) when it was rolled into the engine, so the buildroot commit was [reverted](https://github.com/flutter/buildroot/pull/864).
clang tidy:
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8746478531016733745/+/u/test:_test:_lint_host_debug/stdout
Fix the analyzer errors, then reland https://github.com/flutter/buildroot/pull/860 to turn off `ENABLE_NS_ASSERTIONS` in Release mode.
Examples of NSAssert crashes that probably shouldn't be happening in https://github.com/flutter/flutter/issues/148279 | engine,P2,team-ios,triaged-ios | low | Critical |
2,622,871,840 | godot | Modifying material properties on external material do not update view in-game | ### Tested versions
v4.4.dev.custom_build [8004c7524]
### System information
macOS
### Issue description
Material property setters that don't call notify_property_list_changed will not see the change reflected in the viewport.
### Steps to reproduce
Change a material property like cull mode.
### Minimal reproduction project (MRP)
Create a material and try modifying cull mode. Notice nothing appears to change. | bug,topic:core | low | Minor |
2,622,914,041 | three.js | Support shadow mapping with reverse Z | ### Description
To repro, modify `examples/webgl_shadowmap.html` by passing `reverseDepthBuffer: true` into the `WebGLRenderer` constructor.
To get anything to render at all, you will also need to add `renderer.getContext().clearDepth(0)` (see my comment in #29579).
After these modifications, the demo looks okay except that there are no shadows.
### Solution
Ideally shadow mapping is supported without requiring clients to write any extra code, since `DepthBuffer` in `WebGLState` already flips the depth function & depth clear value at a low level. I have not yet been able to find the missing piece.
### Alternatives
Alternatively, maybe the onus is on the applications to handle this in the onBeforeShadow hook?
### Additional context
_No response_ | Enhancement | low | Minor |
2,622,925,547 | next.js | router.asPath is wrong on vercel | ### Link to the code that reproduces this issue
https://github.com/mrbirddev/fontsensei/commit/5a04164bc78771cad30fee3408ad38f93fd43e2b
### To Reproduce
1. Deploy this commit on Vercel:
https://github.com/mrbirddev/fontsensei/commit/5a04164bc78771cad30fee3408ad38f93fd43e2b
The resulting Vercel URL:
https://fontsensei-4tsd0b9mj-mr-birds-projects.vercel.app
2. Visit the path `/ja/tags/groovy`
3. Check the rendering results of these lines
https://github.com/mrbirddev/fontsensei/blob/5a04164bc78771cad30fee3408ad38f93fd43e2b/src/browser/i18n/ChooseLocaleModal.tsx#L26-L29
### Current vs. Expected behavior
#### Expected
It works fine locally after `yarn build` & `yarn start`. SSR on localhost
```
<a class="link link-ghost" href="/tag/groovy">English</a>
<a class="link link-ghost" href="/es/tag/groovy">Español</a>
<a class="link link-ghost" href="/pt-br/tag/groovy">Português do Brasil</a>
<a class="link link-ghost" href="/de/tag/groovy">Deutsch</a>
...
```
#### Not expected
SSR on vercel.
```
<a class="link link-ghost" href="/ja/tag/groovy?nxtPslugList=groovy">English</a>
<a class="link link-ghost" href="/es/ja/tag/groovy?nxtPslugList=groovy">Español</a>
<a class="link link-ghost" href="/pt-br/ja/tag/groovy?nxtPslugList=groovy">Português do Brasil</a>
<a class="link link-ghost" href="/de/ja/tag/groovy?nxtPslugList=groovy">Deutsch</a>
...
```
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:42:42 PDT 2023; root:xnu-10002.1.13~1/RELEASE_X86_64
Available memory (MB): 65536
Available CPU cores: 16
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: 1.22.21
pnpm: N/A
Relevant Packages:
next: 15.0.3-canary.1
eslint-config-next: N/A
react: 18.2.0
react-dom: 18.2.0
typescript: 5.3.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Internationalization (i18n), Navigation
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
_No response_ | bug,Navigation,Internationalization (i18n) | low | Minor |
2,622,957,552 | excalidraw | excalidrawAPI doesn't execute on update, which breaks HMR | Because of how useRef works in React, we need to call the function on each componentDidUpdate to set the API back. This is needed for HMR.
I can send an MR if you agree.
One possible solution:
```js
componentDidUpdate(prevProps: AppProps, prevState: AppState) {
  this.props.excalidrawAPI?.(this.api)
  ....
``` | support | low | Minor |
2,622,995,221 | material-ui | [icons-material] Importing from `@mui/icons-material` throws error in Remix/Vite | ### Search keywords
module, cjs, mjs, icons, vite, remix
### Latest version
- [x] I have tested the latest version
### Steps to reproduce
1. Open [reproduction](https://stackblitz.com/edit/remix-run-remix-dvi4ts?file=app%2Froutes%2F_index.tsx)
2. Notice error
### Current behavior
Importing an icon from `@mui/icons-material` throws the following warning and error.
`Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.`
<img width="1020" alt="Screenshot 2024-10-30 at 3 43 40 PM" src="https://github.com/user-attachments/assets/19b7a1fc-bc98-4e85-b27b-a43b32ec1ccd">
`SyntaxError: Cannot use import statement outside a module`

### Expected behavior
Importing an icon from `@mui/icons-material` shouldn't throw an error.
### Context
We're trying to use `@mui/icons-material` in a Remix project.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.5
Binaries:
Node: 22.2.0 - ~/.nvm/versions/node/v22.2.0/bin/node
npm: 10.7.0 - ~/.nvm/versions/node/v22.2.0/bin/npm
pnpm: Not Found
Browsers:
Chrome: 130.0.6723.91
Edge: Not Found
Safari: 17.5
npmPackages:
@mui/core-downloads-tracker: 6.1.5
@mui/icons-material: ^6.1.5 => 6.1.5
@mui/material: 6.1.5
@mui/private-theming: 6.1.5
@mui/styled-engine: 6.1.5
@mui/system: 6.1.5
@mui/types: 7.2.18
@mui/utils: 6.1.5
@types/react: ^18.2.20 => 18.3.10
react: ^18.2.0 => 18.3.1
react-dom: ^18.2.0 => 18.3.1
typescript: ^5.1.6 => 5.6.2
```
</details>
| bug 🐛,package: icons | low | Critical |
2,623,041,036 | PowerToys | Shortcuts on Dashboard screen change based on the current keyboard layout | ### Microsoft PowerToys version
0.85.1
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
I'll be using the Thai keyboard layout (Kedmanee, to be exact) to reproduce the issue. I'm not 100% sure whether other non-English layouts exhibit the same behavior.
1. Switch to Thai keyboard layout on Windows.
2. Open the Dashboard screen by either
2.1 Open PowerToys Settings from the PowerToys icon inside Windows' notification area, or
2.2 From any page on PowerToys Settings, switch to Dashboard screen.
### ✔️ Expected Behavior
Shortcut keys should remain the same regardless of the current keyboard layout.
For example, "Always On Top" should display "⊞ Ctrl T" or "Screen Ruler" should be "⊞ Ctrl Shift M".

### ❌ Actual Behavior
Shortcut keys (except the modifiers) are displayed differently when a different keyboard layout is active.
"Always on Top" becomes "⊞ Ctrl ธ" or "Screen Ruler" becomes "⊞ Shift ?".

### Other Software
_No response_ | Issue-Bug,Product-Settings,Priority-1 | low | Minor |
2,623,114,663 | vscode | Error: Trying to add a disposable to a DisposableStore that has already been disposed of. The added object will be leaked! | Running vscode-test on 1.95.0 occurred this issue, working well on 1.94.2.
```
Error: Trying to add a disposable to a DisposableStore that has already been disposed of. The added object will be leaked!
at H3.add (file:///home/vsts/work/1/s/.vscode-test/vscode-linux-x64-1.95.0/resources/app/out/vs/workbench/api/node/extensionHostProcess.js:24:1439)
at yw.n (file:///home/vsts/work/1/s/.vscode-test/vscode-linux-x64-1.95.0/resources/app/out/vs/workbench/api/node/extensionHostProcess.js:120:1748)
```
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.0
- OS Version: macOS, Windows, Linux
Steps to Reproduce:
1. Execute unit tests via test-electron in the extension host (EH). | debt | low | Critical |
2,623,126,831 | next.js | OPTIONS request stuck when using edge runtime | ### Link to the code that reproduces this issue
https://github.com/yuluyi/edge-runtime-stuck-on-options
### To Reproduce
```
pnpm run dev
curl 'http://localhost:3000' -X 'OPTIONS'
```
### Current vs. Expected behavior
The OPTIONS request should return a response instead of hanging.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:37:13 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T8112
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.2.0
npm: 9.6.6
Yarn: N/A
pnpm: 9.5.0
Relevant Packages:
next: 15.0.2-canary.9 // There is a newer canary version (15.0.3-canary.1) available, please upgrade!
eslint-config-next: 15.0.1
react: 19.0.0-rc-69d4b800-20241021
react-dom: 19.0.0-rc-69d4b800-20241021
typescript: 5.6.3
Next.js Config:
output: N/A
⚠ There is a newer canary version (15.0.3-canary.1) available, please upgrade!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Navigation, Pages Router, Parallel & Intercepting Routes, Partial Prerendering (PPR), Performance, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed), Other (Deployed)
### Additional context
The OPTIONS request gets stuck when using the edge runtime. I occasionally found timeout logs for my project deployed on Vercel, all caused by an OPTIONS request to my root page. After some experimenting, I found that even when I run `next dev`, I can still reproduce the issue, as long as I add
```
export const runtime = 'edge'
```
to my page.tsx.
If I use node runtime, everything is fine. | bug,Navigation,Performance,Runtime,Pages Router,Parallel & Intercepting Routes,Partial Prerendering (PPR) | low | Major |
2,623,143,668 | godot | Support the IOR attribute of glass material | ### Tested versions
4.0.4
### System information
macos 14.4 (23E214) godot 4.0.4
### Issue description

Setting the ior property of a Three.js material changes the refractive index of the transparent material. The effect is as follows:

However, with Godot's StandardMaterial3D, after setting Refraction, the previously transparent material becomes opaque regardless of the scale value, and the objects behind it are not refracted through the sphere, as shown in the following figure:


Referring to the refraction property of StandardMaterial3D, I wrote the following Godot shader code:



The effect shown above is very different from IOR. Is there a plan to support IOR, or do you have any suggestions for solving this problem?
### Steps to reproduce
none
### Minimal reproduction project (MRP)
[Test0520.zip](https://github.com/user-attachments/files/17567659/Test0520.zip)
| topic:rendering,needs testing,topic:3d | low | Minor |
2,623,184,064 | rust | compiletest: `//@ needs-dynamic-linking` should rule out musl since it supports neither dylibs nor cdylibs | Noticed in #130860. Should double-check the detection mechanism to determine *why* we thought `musl` targets supported dynamic linking in the first place. | T-bootstrap,E-medium,C-bug,A-compiletest,E-needs-investigation | low | Minor |
2,623,187,282 | kubernetes | Implement caching for resolved OIDC distributed claims | ### What would you like to be added?
The OIDC distributed claims feature was implemented (see [PR #63213](https://github.com/kubernetes/kubernetes/pull/63213)), but caching for resolved claims was left as a TODO (see [code reference](https://github.com/kubernetes/kubernetes/blob/daef8c2419a638d3925e146d0f5a6b217ea69b74/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L626)).
I am interested in implementing this caching feature but would like to discuss the approach before proceeding.
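As a starting point for discussion, one possible shape is a small TTL cache keyed by the distributed-claim endpoint plus access token. The sketch below is purely illustrative (Python rather than Go, and the class/key names are not the actual kube-apiserver API):

```python
import time

class TTLCache:
    """Minimal TTL cache sketch for resolved distributed-claim responses.

    Illustrative only; the real authenticator would key on the claim
    endpoint plus access token and bound the cache size.
    """
    def __init__(self, ttl_seconds, now=time.monotonic):
        self.ttl = ttl_seconds
        self.now = now  # injectable clock, useful for testing
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.now() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, self.now() + self.ttl)
```

Since tokens are evaluated on every request, even a short TTL (well under the claim JWT's own expiry) would avoid re-fetching the claim endpoint on every call.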
### Why is this needed?
Tokens are evaluated on every HTTP request, which makes the process slow | kind/feature,sig/auth,needs-triage | low | Major |
2,623,191,598 | pytorch | Torch Inductor should have a way for new backend to provide build options | ### 🚀 The feature, motivation and pitch
Currently, to compile the code generated by a backend, build options are collected by CppTorchDeviceOptions, and there are two issues here. First, it covers cpu, cuda, and xpu for now, and the code for the different backends is put together with branches. This is an anti-pattern; each backend should have its own implementation/subclass.
Second, new backends always need to customize the build options, e.g., backend-specific include dirs and lib dirs. There should be a way for each backend to provide such information; e.g., in DeviceCodegen, we could add a new field like build_options.
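To make the second point concrete, here is a minimal sketch of what a per-backend build-options hook could look like. The class and method names below are hypothetical, not existing Inductor APIs:

```python
class BackendBuildOptions:
    """Hypothetical base class: each backend registers its own subclass
    instead of adding branches to CppTorchDeviceOptions."""
    def include_dirs(self):
        return []

    def lib_dirs(self):
        return []

    def libraries(self):
        return []

class MyAcceleratorBuildOptions(BackendBuildOptions):
    # Backend-specific paths; purely illustrative values.
    def include_dirs(self):
        return ["/opt/myaccel/include"]

    def lib_dirs(self):
        return ["/opt/myaccel/lib"]

    def libraries(self):
        return ["myaccel_rt"]

def collect_build_flags(options):
    """Turn the backend-provided dirs/libs into compiler flags."""
    flags = [f"-I{d}" for d in options.include_dirs()]
    flags += [f"-L{d}" for d in options.lib_dirs()]
    flags += [f"-l{lib}" for lib in options.libraries()]
    return flags
```

With something like this, registering a new backend would only require providing a subclass, with no edits to the shared compile path.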
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Minor |
2,623,210,823 | pytorch | TensorBoard images loading error | ### 🐛 Describe the bug
The first time I used TensorBoard, I generated y=x and the image was fine. But after I modified the original function to generate y=2x, y=3x, y=x^2, etc., and then regenerated y=x, there was a problem: the image was not displayed correctly, and y=x showed only part of the curve, as in the image below.

Sometimes, after further modifying the original function, no image is produced at all, as shown below.

(The port shows 6007 because this image was the result of another test where I used 6007 as the port to display)
My code is as follows:
```
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter("logs")
# writer.add_image()
for i in range(100):
writer.add_scalar("y=x", i, i)
writer.close()
```
Then by typing in pycharm's terminal
```
tensorboard --logdir=logs
```
And then accessing port 6006 to get the result.
I would like to know whether this problem affects only me or other users as well. I have searched the issues and have not found a solution (maybe I missed it). If someone has raised a similar issue before, I apologize for taking up everyone's time and would appreciate being pointed toward a solution. Thank you very much!
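For what it's worth, a partial or missing chart like this typically happens when event files from several runs with the same tag accumulate in one logdir and TensorBoard merges them. A common workaround (a sketch using only the standard library, not a TensorBoard API) is to write each experiment into its own subdirectory, or to clear the logdir first:

```python
import itertools
import os
import shutil
from datetime import datetime

_run_counter = itertools.count()  # guarantees unique names within a process

def fresh_run_dir(root="logs", clear_root=False):
    """Return a unique per-run log directory so old event files from
    earlier experiments don't get merged into the same chart."""
    if clear_root and os.path.isdir(root):
        shutil.rmtree(root)  # drop stale event files from previous runs
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    run_dir = os.path.join(root, f"run_{stamp}_{next(_run_counter)}")
    os.makedirs(run_dir)
    return run_dir

# usage (assuming the SummaryWriter setup from the snippet above):
#   writer = SummaryWriter(fresh_run_dir())
```

TensorBoard then shows each run as a separate curve instead of stitching old and new event files into one distorted line.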
### Versions
```
PyTorch version: 1.9.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 家庭中文版 (10.0.22631 64 位)
GCC version: (MinGW.org GCC-6.3.0-1) 6.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.20 (default, Oct 3 2024, 15:19:54) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: GeForce MX450
Nvidia driver version: 456.71
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Manufacturer: GenuineIntel
Family: 205
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2419
MaxClockSpeed: 2419
L2CacheSize: 5120
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==1.9.0+cu111
[pip3] torchaudio==0.9.0
[pip3] torchvision==0.10.0+cu111
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 1.9.0+cu111 pypi_0 pypi
[conda] torchaudio 0.9.0 pypi_0 pypi
[conda] torchvision 0.10.0+cu111 pypi_0 pypi
```
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: windows,triaged,module: tensorboard | low | Critical |
2,623,225,562 | kubernetes | DRA: Is it possible to add a new resource: ClusterResourceClaim? | ### What would you like to be added?
We plan to develop a DRA plugin for networking (possibly related to CNI drivers). For ease of use, we may create a ResourceClaim in advance and declare some additional configuration in the opaqueConfig. If there were a cluster-scoped ResourceClaim, pods in different namespaces could use the same ResourceClaim. Currently, ResourceClaim is namespace-scoped, meaning each namespace must create at least one ResourceClaim.
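To make the proposal concrete, a manifest for such a resource might look like this. ClusterResourceClaim does not exist in the DRA API today; the field names are borrowed from the namespaced ResourceClaim and are purely illustrative:

```yaml
# Hypothetical cluster-scoped claim; not a real Kubernetes kind.
apiVersion: resource.k8s.io/v1alpha3
kind: ClusterResourceClaim
metadata:
  name: shared-network-claim   # no namespace: usable by pods in any namespace
spec:
  devices:
    requests:
      - name: net
        deviceClassName: example.com-network   # illustrative device class
    config:
      - opaque:
          driver: cni.example.com              # illustrative driver name
          parameters:
            vlan: "100"                        # extra config carried in opaqueConfig
```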
### Why is this needed?
add a new resource: ClusterResourceClaim | sig/node,kind/feature,needs-triage,wg/device-management | medium | Major |
2,623,251,591 | flutter | Evaluate ImageFilter.blur tileMode behaviors | The engine PR https://github.com/flutter/engine/pull/55552 changed the default behavior of the ImageFilter.blur as will be documented in the next release notes via https://github.com/flutter/website/pull/11338
While implementing the new defaults, we observed some behavioral issues that suggest that we might want to change the handling of the tileMode even further before the next release.
### Strange kaleidoscope
The main issue is that for rendered shapes, the tile modes other than `decal` don't seem to exhibit desirable behavior. The most surprising behavior happens with the old default tile mode of `clamp` as can be seen in the following images and video. Note that Impeller currently produces output for `clamp` that is identical to what it produces for `decal` and that behavior is consistent with the description of the `decal` mode. Skia attempts to produce output that resembles the description for `clamp` but has a number of inconsistent anomalies. The following images were generated by a Flutter app that draws a round rect with an ImageFilter.blur in the Paint object and then various properties of that rendering can be modified with a slider.
<details>
<summary>Skia static sample</summary>
<img width="777" alt="skia blurred rendering gallery" src="https://github.com/user-attachments/assets/bf8754a5-0dd9-4847-aabf-cd3d1aeab263">
</details>
<details>
<summary>Impeller static sample</summary>
<img width="777" alt="impeller blurred rendering gallery" src="https://github.com/user-attachments/assets/1555bc9a-6b67-4f06-a1e3-1f0f43abf631">
</details>
The primary differences between these 2 outputs are that Impeller treats `clamp` mode as if it were `decal` and the reflections and repeats at the edges for `mirror` and `repeated` mode are more crowded for Skia than for Impeller. This is likely due to Impeller rounding out the temporary surface used to render the shape before the ImageFilter is applied causing it to contain a border of transparent pixels. The `clamp` mode would then clamp to these transparent edge pixels and look like `decal` mode while the `mirror` and `repeated` modes would have the multiple copies of the source image spread out more than Skia.
Another issue with `clamp` mode is that it is hard to get right when the shape being filtered has some unpredictable edge registrations (how the pixels at the edges of the shape line up with actual pixels in the output surface). This can make the edge pixels sampled for the `clamp` mode contain pixels of varying opacity depending on whether the shape barely filled them or filled them nearly completely. This would be somewhat surprising as can be seen in the static samples above if the shapes occupied a static space on the screen, but if the shapes are moved, scaled, or resized, their edge registrations could vary widely as they are repositioned as can be seen in the following video. This video is showing the Skia output as the Impeller implementation does not attempt to preserve the edges of the shape within the temporary surface.
<details>
<summary>Skia animated sample</summary>
[Skia rendering gallery animated (trimmed).mov](https://github.com/user-attachments/assets/ee18604a-1c9d-4734-9aa3-6e6867225d30)
</details>
These behaviors suggest the following potential changes in behavior for the new defaults.
- Change the shape rendering methods (and the application of a blur filter to saveLayer content as well) to always use a `decal` mode so that the strange and inconsistent spikes for the `clamp` mode are avoided and the partial, difficult to control kaleidoscope effect for `mirror` and `repeated` modes are also avoided
- The concept/justification for this behavior could simply be that we don't implement the other modes for this operation, or
- that the operation itself represents drawing the shape on an infinitely large blank surface and then filtering the surface - which essentially makes all modes appear the same as `decal` mode.
- Other uses deal with source information that is complete and comes with naturally defined "edge" pixels that are not inferred from the pixel registration of a transformed potentially irregular shape, to wit:
- The drawImage variants (including drawAtlas), which take their source pixels from a well defined image, would still provide support for all 4 modes and a default of `clamp` as already implemented in https://github.com/flutter/engine/pull/55552
- The backdrop handling for BackdropFilter with a blur ImageFilter, which takes its source from a rendered framebuffer with well defined edges, will still provide support for all 4 modes and a default of `mirror` as we determined prior to making the change. | engine,P3,team-engine,triaged-engine | low | Minor |
2,623,350,325 | ui | [bug]: Multiple DropdownMenu in sidebar causes freeze on mobile screen | ### Describe the bug
https://ui.shadcn.com/blocks#sidebar-07
If you open multiple dropdown menus on a phone screen and close the sidebar, the whole app freezes.
### Affected component/components
Sidebar
### How to reproduce
1. Visit https://ui.shadcn.com/blocks#sidebar-07
2. Click on the phone screen.
3. Open the sidebar.
4. Open the dropdown menu in the company switcher (Acme).
5. Open the profile dropdown menu (shadcn at the bottom).
6. Click away to close the sidebar.
7. Now, you can't do anything; you can't even reopen the sidebar.
### Codesandbox/StackBlitz link
https://ui.shadcn.com/blocks#sidebar-07
### Logs
_No response_
### System Info
```bash
google chrome, windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,623,441,943 | godot | [GDScript] Crash with Scripting Error | ### Tested versions
v4.3.stable.official [77dcf97d8]
v4.4.dev.custom_build [8004c7524] (master)
### System information
Godot v4.4.dev (8004c7524) - Windows 10.0.17763 - Multi-window, 4 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6109) - AMD Ryzen 9 5900X 12-Core Processor (24 threads)
### Issue description
A plugin I'm working on encountered this issue. I repeated the MRP several times and got a set of different stack traces, as follows:
The code snippet related to this issue:
```gdscript
# L822
func _find_texture_in_dir(source_tex : Texture2D, directory : EditorFileSystemDirectory, scan_result : Array[EditingAtlasTextureInfo]):
# L823
var file_count := directory.get_file_count();
# L824
for i in range(file_count):
# L825
var file_path := directory.get_file_path(i);
# L826
var resource := ResourceLoader.load(file_path, "", ResourceLoader.CACHE_MODE_IGNORE);
# L827
var atlas_candidate := resource as AtlasTexture;
# L828
if atlas_candidate and atlas_candidate.atlas == source_tex:
# L829
scan_result.append(EditingAtlasTextureInfo.create(atlas_candidate, file_path));
# L831
func _find_texture_in_dir_recursive(source_tex : Texture2D, directory : EditorFileSystemDirectory, scan_result : Array[EditingAtlasTextureInfo]):
# L832
_find_texture_in_dir(source_tex, directory, scan_result);
# L833
var sub_dir_count := directory.get_subdir_count();
# L834
for i in range(sub_dir_count):
# L835
var sub_dir := directory.get_subdir(i);
# L836
_find_texture_in_dir_recursive(source_tex, sub_dir, scan_result);
```
Headers are the same:
```
Godot Engine v4.4.dev.custom_build.8004c7524 (2024-10-30 00:26:02 UTC) - https://godotengine.org
WARNING: GENERAL - Message Id Number: 0 | Message Id Name: Loader Message
windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Objects - 1
Object[0] - VK_OBJECT_TYPE_INSTANCE, Handle 1937273426288
at: RenderingContextDriverVulkan::_debug_messenger_callback (drivers\vulkan\rendering_context_driver_vulkan.cpp:639)
Vulkan 1.3.280 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 4060 Ti
```
#### Attempt 1
<details>
```
SCRIPT ERROR: Bad address index.
at: _update_inspecting_metrics (res://addons/AtlasTextureManager/atlastexture_manager.gd:826)
SCRIPT ERROR: Internal script error! Opcode: 332 (please report).
at: <anonymous lambda> (res://addons/AtlasTextureManager/atlastexture_manager.gd:832)
================================================================
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (8004c7524fb9f43425c4d6f614410a76678e0f7c)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] Variant::clear (D:\godot\core\variant\variant.h:325)
[1] Variant::clear (D:\godot\core\variant\variant.h:325)
[2] Variant::~Variant (D:\godot\core\variant\variant.h:820)
[3] Variant::`scalar deleting destructor'
[4] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:3889)
[5] GDScriptInstance::callp (D:\godot\modules\gdscript\gdscript.cpp:2073)
[6] Object::callp (D:\godot\core\object\object.cpp:791)
[7] Variant::callp (D:\godot\core\variant\variant_call.cpp:1227)
[8] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[9] GDScriptInstance::callp (D:\godot\modules\gdscript\gdscript.cpp:2073)
[10] Object::callp (D:\godot\core\object\object.cpp:791)
[11] Variant::callp (D:\godot\core\variant\variant_call.cpp:1227)
[12] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[13] GDScriptInstance::callp (D:\godot\modules\gdscript\gdscript.cpp:2073)
[14] Object::callp (D:\godot\core\object\object.cpp:791)
[15] Variant::callp (D:\godot\core\variant\variant_call.cpp:1227)
[16] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[17] GDScriptLambdaSelfCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:279)
[18] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[19] _VariantCall::func_Callable_call (D:\godot\core\variant\variant_call.cpp:1039)
[20] `_register_variant_builtin_methods_misc'::`2'::Method_Callable_call::call (D:\godot\core\variant\variant_call.cpp:2123)
[21] Variant::callp (D:\godot\core\variant\variant_call.cpp:1239)
[22] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[23] GDScriptLambdaCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:118)
[24] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[25] Object::emit_signalp (D:\godot\core\object\object.cpp:1201)
[26] Node::emit_signalp (D:\godot\scene\main\node.cpp:3975)
[27] Object::emit_signal<> (D:\godot\core\object\object.h:920)
[28] BaseButton::_pressed (D:\godot\scene\gui\base_button.cpp:138)
[29] BaseButton::on_action_event (D:\godot\scene\gui\base_button.cpp:174)
[30] BaseButton::gui_input (D:\godot\scene\gui\base_button.cpp:69)
[31] Control::_call_gui_input (D:\godot\scene\gui\control.cpp:1823)
[32] Viewport::_gui_call_input (D:\godot\scene\main\viewport.cpp:1573)
[33] Viewport::_gui_input_event (D:\godot\scene\main\viewport.cpp:1837)
[34] Viewport::push_input (D:\godot\scene\main\viewport.cpp:3176)
[35] Window::_window_input (D:\godot\scene\main\window.cpp:1680)
[36] call_with_variant_args_helper<Window,Ref<InputEvent> const &,0> (D:\godot\core\variant\binder_common.h:304)
[37] call_with_variant_args<Window,Ref<InputEvent> const &> (D:\godot\core\variant\binder_common.h:418)
[38] CallableCustomMethodPointer<Window,void,Ref<InputEvent> const &>::call (D:\godot\core\object\callable_method_pointer.h:107)
[39] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[40] Callable::call<Ref<InputEvent> > (D:\godot\core\variant\variant.h:906)
[41] DisplayServerWindows::_dispatch_input_event (D:\godot\platform\windows\display_server_windows.cpp:3784)
[42] DisplayServerWindows::_dispatch_input_events (D:\godot\platform\windows\display_server_windows.cpp:3754)
[43] Input::_parse_input_event_impl (D:\godot\core\input\input.cpp:805)
[44] Input::flush_buffered_events (D:\godot\core\input\input.cpp:1086)
[45] DisplayServerWindows::process_events (D:\godot\platform\windows\display_server_windows.cpp:3234)
[46] OS_Windows::run (D:\godot\platform\windows\os_windows.cpp:1771)
[47] widechar_main (D:\godot\platform\windows\godot_windows.cpp:180)
[48] _main (D:\godot\platform\windows\godot_windows.cpp:206)
[49] main (D:\godot\platform\windows\godot_windows.cpp:220)
[50] WinMain (D:\godot\platform\windows\godot_windows.cpp:234)
[51] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[52] <couldn't map PC to fn name>
-- END OF BACKTRACE --
================================================================
```
</details>
#### Attempt 2
<details>
```
SCRIPT ERROR: Invalid assignment of property or key 'text' with value of type 'CompressedTexture2D' on a base object of type 'EditorFileSystemDirectory'.
at: _label (res://addons/AtlasTextureManager/atlastexture_manager.gd:749)
================================================================
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (8004c7524fb9f43425c4d6f614410a76678e0f7c)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] Variant::clear (D:\godot\core\variant\variant.h:325)
[1] Variant::~Variant (D:\godot\core\variant\variant.h:820)
[2] Variant::`scalar deleting destructor'
[3] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:3889)
[4] GDScriptLambdaSelfCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:279)
[5] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[6] _VariantCall::func_Callable_call (D:\godot\core\variant\variant_call.cpp:1039)
[7] `_register_variant_builtin_methods_misc'::`2'::Method_Callable_call::call (D:\godot\core\variant\variant_call.cpp:2123)
[8] Variant::callp (D:\godot\core\variant\variant_call.cpp:1239)
[9] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[10] GDScriptLambdaCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:118)
[11] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[12] Object::emit_signalp (D:\godot\core\object\object.cpp:1201)
[13] Node::emit_signalp (D:\godot\scene\main\node.cpp:3975)
[14] Object::emit_signal<> (D:\godot\core\object\object.h:920)
[15] BaseButton::_pressed (D:\godot\scene\gui\base_button.cpp:138)
[16] BaseButton::on_action_event (D:\godot\scene\gui\base_button.cpp:174)
[17] BaseButton::gui_input (D:\godot\scene\gui\base_button.cpp:69)
[18] Control::_call_gui_input (D:\godot\scene\gui\control.cpp:1823)
[19] Viewport::_gui_call_input (D:\godot\scene\main\viewport.cpp:1573)
[20] Viewport::_gui_input_event (D:\godot\scene\main\viewport.cpp:1837)
[21] Viewport::push_input (D:\godot\scene\main\viewport.cpp:3176)
[22] Window::_window_input (D:\godot\scene\main\window.cpp:1680)
[23] call_with_variant_args_helper<Window,Ref<InputEvent> const &,0> (D:\godot\core\variant\binder_common.h:304)
[24] call_with_variant_args<Window,Ref<InputEvent> const &> (D:\godot\core\variant\binder_common.h:418)
[25] CallableCustomMethodPointer<Window,void,Ref<InputEvent> const &>::call (D:\godot\core\object\callable_method_pointer.h:107)
[26] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[27] Callable::call<Ref<InputEvent> > (D:\godot\core\variant\variant.h:906)
[28] DisplayServerWindows::_dispatch_input_event (D:\godot\platform\windows\display_server_windows.cpp:3784)
[29] DisplayServerWindows::_dispatch_input_events (D:\godot\platform\windows\display_server_windows.cpp:3754)
[30] Input::_parse_input_event_impl (D:\godot\core\input\input.cpp:805)
[31] Input::flush_buffered_events (D:\godot\core\input\input.cpp:1086)
[32] DisplayServerWindows::process_events (D:\godot\platform\windows\display_server_windows.cpp:3234)
[33] OS_Windows::run (D:\godot\platform\windows\os_windows.cpp:1771)
[34] widechar_main (D:\godot\platform\windows\godot_windows.cpp:180)
[35] _main (D:\godot\platform\windows\godot_windows.cpp:206)
[36] main (D:\godot\platform\windows\godot_windows.cpp:220)
[37] WinMain (D:\godot\platform\windows\godot_windows.cpp:234)
[38] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[39] <couldn't map PC to fn name>
-- END OF BACKTRACE --
================================================================
```
</details>
#### Attempt 3
<details>
```
SCRIPT ERROR: Invalid access to property or key 'margin' on a base object of type 'Nil'.
at: <anonymous lambda> (res://addons/AtlasTextureManager/atlastexture_manager.gd:510)
SCRIPT ERROR: Bad address index.
at: _zoom_button (res://addons/AtlasTextureManager/atlastexture_manager.gd:74)
================================================================
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (8004c7524fb9f43425c4d6f614410a76678e0f7c)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] Variant::clear (D:\godot\core\variant\variant.h:325)
[1] Variant::clear (D:\godot\core\variant\variant.h:325)
[2] Variant::~Variant (D:\godot\core\variant\variant.h:820)
[3] Variant::`scalar deleting destructor'
[4] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:3889)
[5] GDScriptLambdaCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:118)
[6] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[7] Object::emit_signalp (D:\godot\core\object\object.cpp:1201)
[8] Node::emit_signalp (D:\godot\scene\main\node.cpp:3975)
[9] Object::emit_signal<> (D:\godot\core\object\object.h:920)
[10] BaseButton::_pressed (D:\godot\scene\gui\base_button.cpp:138)
[11] BaseButton::on_action_event (D:\godot\scene\gui\base_button.cpp:174)
[12] BaseButton::gui_input (D:\godot\scene\gui\base_button.cpp:69)
[13] Control::_call_gui_input (D:\godot\scene\gui\control.cpp:1823)
[14] Viewport::_gui_call_input (D:\godot\scene\main\viewport.cpp:1573)
[15] Viewport::_gui_input_event (D:\godot\scene\main\viewport.cpp:1837)
[16] Viewport::push_input (D:\godot\scene\main\viewport.cpp:3176)
[17] Window::_window_input (D:\godot\scene\main\window.cpp:1680)
[18] call_with_variant_args_helper<Window,Ref<InputEvent> const &,0> (D:\godot\core\variant\binder_common.h:304)
[19] call_with_variant_args<Window,Ref<InputEvent> const &> (D:\godot\core\variant\binder_common.h:418)
[20] CallableCustomMethodPointer<Window,void,Ref<InputEvent> const &>::call (D:\godot\core\object\callable_method_pointer.h:107)
[21] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[22] Callable::call<Ref<InputEvent> > (D:\godot\core\variant\variant.h:906)
[23] DisplayServerWindows::_dispatch_input_event (D:\godot\platform\windows\display_server_windows.cpp:3784)
[24] DisplayServerWindows::_dispatch_input_events (D:\godot\platform\windows\display_server_windows.cpp:3754)
[25] Input::_parse_input_event_impl (D:\godot\core\input\input.cpp:805)
[26] Input::flush_buffered_events (D:\godot\core\input\input.cpp:1086)
[27] DisplayServerWindows::process_events (D:\godot\platform\windows\display_server_windows.cpp:3234)
[28] OS_Windows::run (D:\godot\platform\windows\os_windows.cpp:1771)
[29] widechar_main (D:\godot\platform\windows\godot_windows.cpp:180)
[30] _main (D:\godot\platform\windows\godot_windows.cpp:206)
[31] main (D:\godot\platform\windows\godot_windows.cpp:220)
[32] WinMain (D:\godot\platform\windows\godot_windows.cpp:234)
[33] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[34] <couldn't map PC to fn name>
-- END OF BACKTRACE --
================================================================
```
</details>
#### Attempt 4
<details>
```
ERROR: Condition ' !nc ' is true. Breaking..:
at: GDScriptFunction::call (modules\gdscript\gdscript_vm.cpp:1623)
SCRIPT ERROR: Internal script error! Opcode: 31 (please report).
at: (res://addons/AtlasTextureManager/atlastexture_manager.gd:827)
================================================================
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (8004c7524fb9f43425c4d6f614410a76678e0f7c)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:2250)
[1] GDScriptInstance::callp (D:\godot\modules\gdscript\gdscript.cpp:2073)
[2] Object::callp (D:\godot\core\object\object.cpp:791)
[3] Variant::callp (D:\godot\core\variant\variant_call.cpp:1227)
[4] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[5] GDScriptInstance::callp (D:\godot\modules\gdscript\gdscript.cpp:2073)
[6] Object::callp (D:\godot\core\object\object.cpp:791)
[7] Variant::callp (D:\godot\core\variant\variant_call.cpp:1227)
[8] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[9] GDScriptInstance::callp (D:\godot\modules\gdscript\gdscript.cpp:2073)
[10] Object::callp (D:\godot\core\object\object.cpp:791)
[11] Variant::callp (D:\godot\core\variant\variant_call.cpp:1227)
[12] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[13] GDScriptLambdaSelfCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:279)
[14] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[15] _VariantCall::func_Callable_call (D:\godot\core\variant\variant_call.cpp:1039)
[16] `_register_variant_builtin_methods_misc'::`2'::Method_Callable_call::call (D:\godot\core\variant\variant_call.cpp:2123)
[17] Variant::callp (D:\godot\core\variant\variant_call.cpp:1239)
[18] GDScriptFunction::call (D:\godot\modules\gdscript\gdscript_vm.cpp:1924)
[19] GDScriptLambdaCallable::call (D:\godot\modules\gdscript\gdscript_lambda_callable.cpp:118)
[20] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[21] Object::emit_signalp (D:\godot\core\object\object.cpp:1201)
[22] Node::emit_signalp (D:\godot\scene\main\node.cpp:3975)
[23] Object::emit_signal<> (D:\godot\core\object\object.h:920)
[24] BaseButton::_pressed (D:\godot\scene\gui\base_button.cpp:138)
[25] BaseButton::on_action_event (D:\godot\scene\gui\base_button.cpp:174)
[26] BaseButton::gui_input (D:\godot\scene\gui\base_button.cpp:69)
[27] Control::_call_gui_input (D:\godot\scene\gui\control.cpp:1823)
[28] Viewport::_gui_call_input (D:\godot\scene\main\viewport.cpp:1573)
[29] Viewport::_gui_input_event (D:\godot\scene\main\viewport.cpp:1837)
[30] Viewport::push_input (D:\godot\scene\main\viewport.cpp:3176)
[31] Window::_window_input (D:\godot\scene\main\window.cpp:1680)
[32] call_with_variant_args_helper<Window,Ref<InputEvent> const &,0> (D:\godot\core\variant\binder_common.h:304)
[33] call_with_variant_args<Window,Ref<InputEvent> const &> (D:\godot\core\variant\binder_common.h:418)
[34] CallableCustomMethodPointer<Window,void,Ref<InputEvent> const &>::call (D:\godot\core\object\callable_method_pointer.h:107)
[35] Callable::callp (D:\godot\core\variant\callable.cpp:57)
[36] Callable::call<Ref<InputEvent> > (D:\godot\core\variant\variant.h:906)
[37] DisplayServerWindows::_dispatch_input_event (D:\godot\platform\windows\display_server_windows.cpp:3784)
[38] DisplayServerWindows::_dispatch_input_events (D:\godot\platform\windows\display_server_windows.cpp:3754)
[39] Input::_parse_input_event_impl (D:\godot\core\input\input.cpp:805)
[40] Input::flush_buffered_events (D:\godot\core\input\input.cpp:1086)
[41] DisplayServerWindows::process_events (D:\godot\platform\windows\display_server_windows.cpp:3234)
[42] OS_Windows::run (D:\godot\platform\windows\os_windows.cpp:1771)
[43] widechar_main (D:\godot\platform\windows\godot_windows.cpp:180)
[44] _main (D:\godot\platform\windows\godot_windows.cpp:206)
[45] main (D:\godot\platform\windows\godot_windows.cpp:220)
[46] WinMain (D:\godot\platform\windows\godot_windows.cpp:234)
[47] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[48] <couldn't map PC to fn name>
-- END OF BACKTRACE --
================================================================
```
</details>
### Steps to reproduce
1. Download the MRP and open it with one of the tested versions.
2. Switch to the `AtlasTextue Manager` window at the top left dock.
3. Double-click the `icon.svg` image asset in the `FileSystem` window.
4. Click the `Scan in Project` button in the `AtlasTextue Manager` window.
https://github.com/user-attachments/assets/ab24c711-5d91-48d2-88bd-8ea74ba2734a
### Minimal reproduction project (MRP)
[MRP-GDScript-Crash.zip](https://github.com/user-attachments/files/17569447/MRP-GDScript-Crash.zip)
| bug,topic:gdscript,crash | low | Critical |
2,623,464,482 | PowerToys | After mapping Ctrl+C's copy function to a keyboard key, pressing that key a second time still copies the first text | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
After mapping the Ctrl+C copy function to a keyboard key (Scroll Lock), pressing that key a second time still copies the text that was selected the first time.
The content I expect to copy does not match what the mapped key actually copies.
In addition, Ctrl ends up stuck in a pressed state (which is not desired).
### ✔️ Expected Behavior
1. After mapping the Ctrl+C copy function to a keyboard key (Scroll Lock), pressing that key a second time still copies the text selected the first time. (Expected: the text selected the second time is copied.) The content I expect to copy does not match what the mapped key actually copies.
2. Ctrl ends up stuck in a pressed state, which is not desired. (Expected: Ctrl is released once the keypress completes, instead of staying pressed.)
### ❌ Actual Behavior
1. After mapping the Ctrl+C copy function to a keyboard key (Scroll Lock), pressing that key a second time still copies the text selected the first time; the content I expect to copy does not match what the mapped key actually copies.
2. Pressing the mapped key (Scroll Lock) twice in a row leaves Ctrl stuck in a pressed state.
### Other Software
Please notify me at zhangbinbinhi@outlook.com once the bug is fixed.
| Issue-Bug,Needs-Triage | low | Critical |
2,623,496,502 | tauri | [bug] build android failed on windows11 | ### Describe the bug
Building the app for `x86_64-pc-windows-msvc` works.
But building the Android app fails.
The following config does not take effect when compiling for Android in this project:
```
$ cat ~/.cargo/config.toml
# [source.crates-io]
# replace-with = 'mirror'
# [source.mirror]
# registry = "https://mirrors.tuna.tsinghua.edu.cn/git/crates.io-index.git"
# [target.aarch64-linux-android]
# linker = "C:\\Users\\Administrator\\AppData\\Local\\Android\\Sdk\\ndk\\28.0.12433566\\toolchains\\llvm\\prebuilt\\windows-x86_64\\bin\\aarch64-linux-android35-clang"
```
By the way, there is no linker named `aarch64-linux-android-clang`.
### Reproduction
_No response_
### Expected behavior
```
`Failed to run `cargo build`: command ["cargo", "build", "--package", "vending-app", "--manifest-path", "C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\Cargo.toml", "--target", "i686-linux-android", "--features", "tauri/custom-protocol tauri/rustls-tls tauri/custom-protocol tauri/rustls-tls", "--lib"] exited with code -1073741819
Error [tauri_cli_node] `Failed to run `cargo build`: command ["cargo", "build", "--package", "vending-app", "--manifest-path", "C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\Cargo.toml", "--target", "i686-linux-android", "--features", "tauri/custom-protocol tauri/rustls-tls tauri/custom-protocol tauri/rustls-tls", "--lib"] exited with code -1073741819
error: script "tauri" exited with code 1
Starting process 'command 'bun.cmd''. Working directory: C:\Users\Administrator\code\rust\tauri-app\src-tauri Command: bun.cmd tauri android android-studio-script -v --target i686
> Task :app:rustBuildX86Debug FAILED
Could not execute [report metric STATISTICS_COLLECT_METRICS_OVERHEAD]
Could not execute [report metric STATISTICS_COLLECT_METRICS_OVERHEAD]
AAPT2 aapt2-8.5.1-11315950-windows Daemon #0: shutdown
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:rustBuildX86Debug'.
> A problem occurred starting process 'command 'bun.cmd''
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 35s
197 actionable tasks: 16 executed, 181 up-to-date
Watched directory hierarchies: [C:\Users\Administrator\code\rust\tauri-app\src-tauri\gen\android]
Failed to build AAB: command ["C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android"] exited with code 1: command ["C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android"] exited with code 1
Error [tauri_cli_node] Failed to build AAB: command ["C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android"] exited with code 1: command ["C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\Administrator\\code\\rust\\tauri-app\\src-tauri\\gen/android"] exited with code 1
error: script "tauri" exited with code 1
```
### tauri.config.json
```json
{
"$schema": "../node_modules/@tauri-apps/cli/config.schema.json",
"productName": "xxxx xxx App",
"version": "1.1.0",
"identifier": "com.xxx.xxx.app",
"build": {
"beforeDevCommand": "bun run dev",
"devUrl": "http://localhost:5173",
"beforeBuildCommand": "bun run build",
"frontendDist": "../dist"
},
"app": {
"windows": [
{
"fullscreen": false,
"resizable": true,
"title": "xxx xxx App",
"width": 800,
"height": 1280,
"label": "main",
"visible": true
}
],
"security": {
"csp": null
}
},
"bundle": {
"active": true,
"targets": "all",
"icon": [
"icons/32x32.png",
"icons/128x128.png",
"icons/128x128@2x.png",
"icons/icon.icns",
"icons/icon.ico"
],
"externalBin": [],
"android": {
"minSdkVersion": 24
},
"resources": [
"resources/**/*"
],
"publisher": "xxx Metal Products., LTD"
},
"plugins": {
"board": {
"protocol": "",
"broker": "",
"port": 1683,
"username": "",
"password": "",
"merchant_id": "",
"app_key": ""
}
}
}
```
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 130.0.2849.56
✔ MSVC:
- Visual Studio Community 2022
- Visual Studio 生成工具 2022
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 22.10.0
- yarn: 1.22.22
- npm: 10.9.0
- bun: 1.1.33
[-] Packages
- tauri 🦀: 2.0.6
- tauri-build 🦀: 2.0.2
- wry 🦀: 0.46.3
- tao 🦀: 0.30.5
- tauri-cli 🦀: 2.0.4
- @tauri-apps/api : 2.0.3
- @tauri-apps/cli : 2.0.5
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
- tauri-plugin-autostart 🦀: 2.0.1
- @tauri-apps/plugin-autostart : 2.0.0
- tauri-plugin-barcode-scanner 🦀: 2.0.1
- @tauri-apps/plugin-barcode-scanner : not installed!
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.1
- tauri-plugin-log 🦀: 2.0.1
- @tauri-apps/plugin-log : 2.0.0
- tauri-plugin-store 🦀: 2.1.0
- @tauri-apps/plugin-store : 2.1.0
- tauri-plugin-cli 🦀: 2.0.1
- @tauri-apps/plugin-cli : 2.0.0
- tauri-plugin-nfc 🦀: 2.0.1
- @tauri-apps/plugin-nfc : 2.0.0
- tauri-plugin-notification 🦀: 2.0.1
- @tauri-apps/plugin-notification : 2.0.0
- tauri-plugin-websocket 🦀: 2.0.1
- @tauri-apps/plugin-websocket : not installed!
- tauri-plugin-http 🦀: 2.0.3
- @tauri-apps/plugin-http : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:5173/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
I can compile successfully in a Debian GNU environment. | type: bug,status: needs triage | low | Critical |
2,623,537,378 | PowerToys | PowerToys.PowerLauncher crashing upon UAC prompt | ### Microsoft PowerToys version
0.85.1
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Replicating the issue is a bit troublesome, but it tends to happen on Win11 24H2 (26100.2033) after a UAC prompt is requested. The UAC prompt never pops up; if the system is idle long enough or CTRL+ALT+DELETE is pressed, the desktop returns but the system is not responsive, and a hard shutdown is required for the system to be operational again.
**Description**
Faulting application name: PowerToys.PowerLauncher.exe, version: 0.85.1.0, time stamp: 0x66960000
Faulting module name: KERNELBASE.dll, version: 10.0.26100.1882, time stamp: 0xdebc683b
Exception code: 0xc000041d
Fault offset: 0x00000000000c83ea
Faulting process id: 0x409C
Faulting application start time: 0x1DB2904D970840A
Faulting application path: C:\Program Files\PowerToys\PowerToys.PowerLauncher.exe
Faulting module path: C:\WINDOWS\System32\KERNELBASE.dll
Report Id: 1c5ca336-eee6-4871-adf7-15c8ed0a2820
Faulting package full name:
Faulting package-relative application ID:

**Description**
Faulting application name: PowerToys.PowerLauncher.exe, version: 0.85.1.0, time stamp: 0x66960000
Faulting module name: KERNELBASE.dll, version: 10.0.26100.1882, time stamp: 0xdebc683b
Exception code: 0xe0434352
Fault offset: 0x00000000000c83ea
Faulting process id: 0x409C
Faulting application start time: 0x1DB2904D970840A
Faulting application path: C:\Program Files\PowerToys\PowerToys.PowerLauncher.exe
Faulting module path: C:\WINDOWS\System32\KERNELBASE.dll
Report Id: 641ffa27-eef7-45f6-9a8e-bb251ca54e00
Faulting package full name:
Faulting package-relative application ID:

### ✔️ Expected Behavior
UAC prompt to appear to enter administrative credentials
### ❌ Actual Behavior
The UAC prompt never appeared, despite the system going into admin approval mode. After pressing CTRL+ALT+DELETE the desktop reappeared, but the system was not responsive: no keyboard commands or mouse clicks would work, though the cursor would move around very sluggishly.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Critical |
2,623,618,927 | godot | GDScript code completion is extremely laggy on Android Editor | ### Tested versions
Reproducible in build of current master branch - Godot v4.4.dev (8004c7524).
Not reproducible in 4.4.dev3 / 4.3 stable
### System information
Godot v4.4.dev (8004c7524) - Android 14 - Single-window, 1 monitor - Vulkan (Mobile) - integrated Adreno (TM) 618 - (8 threads)
### Issue description
GDScript autocomplete is now extremely laggy on the Android editor, with the editor freezing until the code completion dialog appears. The code completion delay and idle auto parse delay are at their default values.
This does not happen on 4.4.dev3 or earlier.
### Steps to reproduce
Just create a GDScript file and type anything that triggers autocomplete.
### Minimal reproduction project (MRP)
None | platform:android,topic:gdscript,topic:editor,regression,performance | low | Major |
2,623,622,336 | rust | Panic in nightly 1.83.0 and 1.84.0 with opt-level >= 1 when unwrapping Some variant | When calling an unwrap on a value that *should* be Some, i instead get an unwrap on None error. Attaching a debugger seems to show an invalid memory error.
This issue happens only when opt-level is set to at least 1, (aka in dev profile, no panic happens, and in release it does), and only happens in rust nightly 1.83.0 and 1.84.0, it does not happen on stable and nightly 1.82.0.
~~I'm not sure if the issue is in my code or in the sprs crate, i've filed [an issue](https://github.com/sparsemat/sprs/issues/370) there and also here just to make sure.~~
EDIT: I managed to narrow down the bug by removing the sprs crate; this reproduces in pure Rust.
Here is the [repo with a minimal example](https://github.com/Specy/microlp/tree/panic-bug), also available in [the playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=0b00575c9057e816cddce89d00a0d856); there is an even smaller reproduction in the [comments](https://github.com/rust-lang/rust/issues/132353#issuecomment-2446804777).
To reproduce, run `cargo run --release`, which will panic; running `cargo run` will not.
I've tried running this with Miri, which found nothing. I also tried running the release build with bounds checking turned on, but nothing changed.
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (3f1be1ec7 2024-10-28)
binary: rustc
commit-hash: 3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777
commit-date: 2024-10-28
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
<details><summary>Backtrace</summary>
<p>
```
thread 'main' panicked at src/main.rs:122:27:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: rust_begin_unwind
at /rustc/3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777/library/std/src/panicking.rs:665:5
1: core::panicking::panic_fmt
at /rustc/3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777/library/core/src/panicking.rs:75:14
2: core::panicking::panic
at /rustc/3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777/library/core/src/panicking.rs:152:5
3: core::option::unwrap_failed
at /rustc/3f1be1ec7ec3d8e80beb381ee82164a0aa3ca777/library/core/src/option.rs:2008:5
4: microlp::order_simple
5: microlp::main
at ./src/main.rs:152:5
6: core::ops::function::FnOnce::call_once
at /home/specy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
```
</p>
</details>
| P-medium,T-compiler,regression-from-stable-to-stable,C-bug,A-mir-opt,I-miscompile | medium | Critical |
2,623,647,450 | material-ui | Variant for slot doesn't work | ### Search keywords
variants, slots, props, evaluation
### Latest version
- [X] I have tested the latest version
### Steps to reproduce
Link to live example: https://stackblitz.com/edit/github-nynfvf?file=src%2Fthene.ts
Steps:
1. Customize theme with `components` => `MuiSelect` => `styleOverrides`
2. Define a variant for `root` and `icon` slots, both with props: `props: { size: 'small' }`
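The two steps above can be sketched as a theme fragment. This is a minimal sketch, assuming the `variants` array inside `styleOverrides` slots; the style values (border/icon colors) are placeholders, not taken from the original repro.

```ts
// Minimal sketch of the theme described in the steps above; style values are
// placeholders for illustration only.
import { createTheme } from '@mui/material/styles';

export const theme = createTheme({
  components: {
    MuiSelect: {
      styleOverrides: {
        // Variant on the root slot: recognized and applied.
        root: {
          variants: [{ props: { size: 'small' }, style: { borderColor: 'red' } }],
        },
        // The same variant on the icon slot: not applied (the reported bug).
        icon: {
          variants: [{ props: { size: 'small' }, style: { color: 'red' } }],
        },
      },
    },
  },
});
```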
### Current behavior
Variant for root is recognized and applied, for icon - it's not
### Expected behavior
As far as I understand, also a variant for a slot should work.
### Context
For the `icon` slot I tried a hardcoded props evaluation:
```ts
props: () => true
```
and it works. I also debugged `size` prop evaluation with:
```ts
props: ({ size }) => console.log(size)
```
and initially it has the correct value `small`, but immediately on the next evaluation it's `undefined`. Surprisingly, it behaves exactly the same for `root`, yet doesn't break the variant's styling for `root` 🤔
### Your environment
_No response_ | bug 🐛,component: select,customization: theme,customization: dom | low | Critical |
2,623,650,174 | PowerToys | (powerrename) let enter key apply the rename, and shift+enter key perform the rename and close | ### Description of the new feature / enhancement
Being able to hit Enter to apply filename changes in PowerRename would simply be convenient and intuitive. Similarly to how the dropdown menu on the Apply button already lets you apply the change and then close the window, being able to do that with Shift+Enter or Ctrl+Enter (or whatever key you choose) would be useful too.
### Scenario when this would be used?
When you're renaming files with PowerRename, you currently have to manually click Apply after typing in the new filename. It is faster to type and hit Enter than to grab the mouse again and click.
### Supporting information
As far as I know, newlines within filenames aren't a thing, and neither Enter nor Shift+Enter creates a newline in PowerRename, so this would not conflict with such a case. | Needs-Triage | low | Minor |
2,623,713,797 | next.js | updating head meta data based on the fetched data in app router | ### Link to the code that reproduces this issue
https://github.com/mehrizi/nextjs-dynamic-title
### To Reproduce
Hello, and thanks for your hard work.
I have simple pages in _App Router_ like `app/page/[id]/page.tsx` in which the `id` is being used in the page component body to fetch some data:
```ts
export default async function Page({ params }) {
const data = await getData(params.id)
```
After this fetch I need to set the page title based on the fetched data, but the `<Head>` component does not work in the app router (as per the documentation and my experience).
The alternative seems to be exporting an async `generateMetadata()` function and calling the fetch again to get the data!
This means that for any such page (one with a dynamically loaded title) I have to call the server function twice, which leads to double fetch calls.
I searched a lot for any alternative solution but couldn't find any!
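The double fetch described above, and one possible mitigation, can be sketched in plain TypeScript. This is a sketch under assumptions not stated in the report: `getData` here is a stand-in for the report's server function, and in a real app-router project React's `cache()` would play the memoizer's role so that `Page` and `generateMetadata()` share a single call.

```typescript
// Sketch: both Page() and generateMetadata() need the same data, so a naive
// setup fetches twice; memoizing per id reduces it to one real call.
let realCalls = 0;

// Stand-in for the report's getData(id) server function.
async function getData(id: string): Promise<{ title: string }> {
  realCalls += 1; // counts how often the expensive fetch actually runs
  return { title: `Page ${id}` };
}

// Tiny per-key memoizer; React's cache() plays this role in the app router.
function memoize<T>(fn: (id: string) => Promise<T>): (id: string) => Promise<T> {
  const store = new Map<string, Promise<T>>();
  return (id) => {
    const hit = store.get(id);
    if (hit) return hit;
    const p = fn(id);
    store.set(id, p);
    return p;
  };
}

const getDataCached = memoize(getData);

async function main() {
  // Simulates the page body and generateMetadata() asking for the same id.
  const [forPage, forMeta] = await Promise.all([
    getDataCached('42'),
    getDataCached('42'),
  ]);
  console.log(forPage.title, forMeta.title, realCalls); // prints: Page 42 Page 42 1
}

main();
```

In an actual route module, the memoized function would simply be called from both `Page` and `generateMetadata`.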
### Current vs. Expected behavior
If the `<Head>` component could also work in the app router, this issue would be solved!
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Oct 2 16:16:55 UTC 2
Available memory (MB): 15869
Available CPU cores: 6
Binaries:
Node: 20.12.2
npm: 10.5.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-rc.1 // Latest available version is detected (15.0.0-rc.1).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Metadata, Performance
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
_No response_ | bug,Metadata,Performance | low | Major |
2,623,747,139 | flutter | Edge-To-Edge Progressive Web Apps on mobile | ### Use case
Native apps and iOS progressive web apps can take advantage of Edge-To-Edge display. This is the preferred presentation going forward and there has been recent work getting Flutter/Android to work this way. See [Edge-To-Edge by default on android #86248](https://github.com/flutter/flutter/issues/86248).
This request is to make Edge-To-Edge also work, and eventually be the default when using the web-ui renderer on Android and iOS. To be explicit, this request is to retain the System UI elements, not going fullscreen.
### Proposal
1. Enable/confirm Progressive Web Apps can use Edge-To-Edge in standalone mode, showing the System UI in Chromium on Android. It should work just like a native Android app: https://developer.android.com/develop/ui/views/layout/edge-to-edge
1. Implement Edge-To-Edge in Flutter's web ui.
1. Align Flutter Edge-To-Edge default in the web ui in concert with [Edge-To-Edge by default on android #86248](https://github.com/flutter/flutter/issues/86248).
Progressive Web Apps should be able to present Edge-To-Edge once installed as `standalone`
* https://web.dev/learn/pwa/app-design#safe_areas describes the behavior, but relies on `viewport-fit=cover`
* [MDN Viewport](https://developer.mozilla.org/en-US/docs/Web/HTML/Viewport_meta_tag) doesn't include `viewport-fit=cover`
* This appears to have been introduced by Apple https://webkit.org/blog/7929/designing-websites-for-iphone-x/
* Chromium tracked this change in [issue/40547849](https://issues.chromium.org/issues/40547849) and there appears to be implementation in [issue/40574289](https://issues.chromium.org/issues/40574289) however this might be just `fullscreen` and not `standalone`.
If Chrome on Android is able to display Edge-To-Edge with System UI, I have not been able to achieve it, nor find it demonstrated. Users cannot add `viewport-fit` to the `viewport` meta in the index.html header, as `viewport` is removed in the [FullPageEmbeddingStrategy](https://github.com/flutter/engine/blob/f506558db90f3031093a1bf8be9be9af6df81829/lib/web_ui/lib/src/engine/view_embedder/embedding_strategy/full_page_embedding_strategy.dart#L71)
Thanks for your amazing work on Flutter! | c: new feature,platform-web,c: proposal,P2,browser: chrome-android,team-web,triaged-web,browser: chrome-ios | low | Minor |
2,623,782,622 | terminal | Add an option to change border colors | ### Description of the new feature
Add an option to change border colors
I had this idea of making it possible to change the border colors.
This is especially useful when you have multiple windows and you don't want to use a terminal multiplexer.
### Proposed technical implementation details
I don't know how VS Code and Windows Terminal are built, or whether they are at all compatible.
There is a VS Code extension called Peacock that changes border colors. | Help Wanted,Product-Terminal,Issue-Task,Area-Theming | low | Minor |
2,623,792,102 | vscode | Panel: no drop zone for views after the last panel | Steps to Reproduce:
1. drag a view container such as search to the panel
2. try to drop it as last panel
=> This is hard: you have to drop it directly after the last panel; you cannot use any of the empty space as a drop zone.

This works fine in primary sidebar and secondary sidebar.
| polish,layout,papercut :drop_of_blood: | low | Minor |
2,623,805,874 | excalidraw | Add CJK fallback for other fonts | - For Code (Comic Shanns) consider using Xiaolai Mono https://github.com/lxgw/kose-font/releases/tag/v3.120
- For Normal (Nunito), could we find a better fit than Xiaolai?
- For Lilita, could we find a similarly looking bold CJK font? Perhaps very bold Xiaolai?
Follows #8408 | enhancement,font | low | Minor |
2,623,827,010 | flutter | Allow more control of TextInputControls in EditableText | ### Use case
Some apps need the option of using their own keyboard for security reasons. You often do not want other software or hardware keyboards to be able to track what is currently being entered in a text field.
Possible use cases are, for example, banking apps or other apps that have to meet certain security requirements.
We are currently unable to publish our Flutter app because a requirement for this is that external input is not allowed. The only possibility at the moment is to implement `EditableText` and `TextField`/`TextFormField` ourselves.
### Proposal
It could be made possible for `EditableTexts` or `TextFields` to control which `TextInputs` can modify them or receive updates from `TextEditingValues` or similar. For example, you could give each `TextInputConnection` the option of defining its own filter function for all permitted `TextInputControls`. Alternatively you could at least provide the option to prohibit external `TextInputs` such as `_PlatformTextInputControl.instance`. | a: text input,c: new feature,c: proposal,P3,team-text-input,triaged-text-input | low | Major |
2,623,959,823 | pytorch | Possible bug of tools::flight_recorder | ### 🐛 Describe the bug
Hello,
I'm a new user of PyTorch and recently tried to run the Flight Recorder code provided in the tools. But I cannot get the code to execute as expected.
I use the NGC 24.10 container and PyTorch code at commit e000cf0ad980e5d140dc895a646174e9b945cf26.
I run Megatron pretrain_gpt.py on a single node with 8 GPUs.
I set NCCL_SHM_DISABLE=1 and NCCL_P2P_DISABLE=1 to force communication over the IB network.
I have tried to solve the problems but keep hitting new errors, so I would like to ask whether this is a problem with my usage or whether there may be some bugs in Flight Recorder.
Thank you very much for any reply.
---
1. find_coalesced_group() cannot append the last event in coalesced_group.
In my test, the last event in the coalesced group has `'is_group'` set to False, so it cannot be appended to the list in `find_coalesced_group()`, and then an error follows...
```{'frames': [{'name': 'isend', 'filename': '/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py', 'line': 2064}, {'name': 'batch_isend_irecv', 'filename': '/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py', 'line': 2374}, {'name': '_batched_p2p_ops', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 153}, {'name': '_communicate', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 370}, {'name': 'send_forward_recv_backward', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 510}, {'name': 'send_forward_recv_backward', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/schedules.py', 'line': 1252}, {'name': 'forward_backward_pipelining_without_interleaving', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/schedules.py', 'line': 1467}, {'name': 'train_step', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 731}, {'name': 'train', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 1247}, {'name': 'pretrain', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 361}, {'name': '<module>', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/pretrain_gpt.py', 'line': 265}],
'record_id': 5054, 'pg_id': 7, 'process_group': ('33', 'undefined'), 'collective_seq_id': 58, 'p2p_seq_id': 0, 'op_id': 92, 'profiling_name': 'nccl:send 0->1', 'time_created_ns': 1730266770405425544, 'input_sizes': [[1024, 1, 8192]], 'input_dtypes': ['BFloat16'], 'output_sizes': [[1024, 1, 8192]], 'output_dtypes': ['BFloat16'], 'state': 'scheduled', 'time_discovered_started_ns': None, 'time_discovered_completed_ns': None, 'retired': False, 'timeout_ms': 600000, 'is_p2p': True},
{'frames': [{'name': 'irecv', 'filename': '/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py', 'line': 2112}, {'name': 'batch_isend_irecv', 'filename': '/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py', 'line': 2374}, {'name': '_batched_p2p_ops', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 153}, {'name': '_communicate', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 370}, {'name': 'send_forward_recv_backward', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 510}, {'name': 'send_forward_recv_backward', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/schedules.py', 'line': 1252}, {'name': 'forward_backward_pipelining_without_interleaving', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/schedules.py', 'line': 1467}, {'name': 'train_step', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 731}, {'name': 'train', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 1247}, {'name': 'pretrain', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 361}, {'name': '<module>', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/pretrain_gpt.py', 'line': 265}],
'record_id': 5055, 'pg_id': 7, 'process_group': ('33', 'undefined'), 'collective_seq_id': 58, 'p2p_seq_id': 0, 'op_id': 93, 'profiling_name': 'nccl:recv 0<-1', 'time_created_ns': 1730266770405447264, 'input_sizes': [[1024, 1, 8192]], 'input_dtypes': ['BFloat16'], 'output_sizes': [[1024, 1, 8192]], 'output_dtypes': ['BFloat16'], 'state': 'scheduled', 'time_discovered_started_ns': None, 'time_discovered_completed_ns': None, 'retired': False, 'timeout_ms': 600000, 'is_p2p': True},
{'frames': [{'name': '_coalescing_manager', 'filename': '/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py', 'line': 2319}, {'name': '__exit__', 'filename': '/usr/lib/python3.10/contextlib.py', 'line': 142}, {'name': 'batch_isend_irecv', 'filename': '/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py', 'line': 2372}, {'name': '_batched_p2p_ops', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 153}, {'name': '_communicate', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 370}, {'name': 'send_forward_recv_backward', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/p2p_communication.py', 'line': 510}, {'name': 'send_forward_recv_backward', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/schedules.py', 'line': 1252}, {'name': 'forward_backward_pipelining_without_interleaving', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/core/pipeline_parallel/schedules.py', 'line': 1467}, {'name': 'train_step', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 731}, {'name': 'train', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 1247}, {'name': 'pretrain', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/megatron/training/training.py', 'line': 361}, {'name': '<module>', 'filename': '/workspace/deep_learning_examples/thirdparty/Megatron-LM/pretrain_gpt.py', 'line': 265}],
'record_id': 5056, 'pg_id': 7, 'process_group': ('33', 'undefined'), 'collective_seq_id': 58, 'p2p_seq_id': 0, 'op_id': 93, 'profiling_name': 'nccl:coalesced', 'time_created_ns': 1730266770405457818, 'duration_ms': 4.395232200622559, 'input_sizes': [], 'input_dtypes': [], 'output_sizes': [], 'output_dtypes': [], 'state': 'completed', 'time_discovered_started_ns': 1730266770561774281, 'time_discovered_completed_ns': 1730266770561776108, 'retired': True, 'timeout_ms': 600000, 'is_p2p': False},
```
2. `dst_global_rank` in `match_coalesced_groups()` is wrong for a "receive" op.
In `match_coalesced_groups()`, `dst_global_rank = sorted(memberships[op.pg_name])[op.dst]`.
But when `op.type == "receive"`, `op.dst` is the current rank, so `match()` fails afterwards.
3. `rank, event = all_rank_events[r][i]` in `visualize_ops()`.
I see that the first element of `all_rank_events[r][i]` is the index of previous entries, so why is it assigned to `rank`?
4. ....
I see other problems, but I don't know whether I have misunderstood the code or whether it has bugs.
So I eagerly need your help and discussion! Thank you again~
### Versions
**NGC 24.10 container**
All PyTorch code at commit **e000cf0ad980e5d140dc895a646174e9b945cf26**.
Megatron pretrain_gpt.py on a single node with 8 GPUs.
NCCL_SHM_DISABLE=1.
NCCL_P2P_DISABLE=1.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |