| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,554,028,790 | excalidraw | Whitelist Url https://web.tutar.app for Embedding in Excalidraw | Hi, we are unable to embed the website https://web.tutar.app/ in Excalidraw. Kindly whitelist this URL for embedding.
<img width="1428" alt="Screenshot 2024-09-28 at 11 27 56 AM" src="https://github.com/user-attachments/assets/3d06497f-347e-4f8c-bae1-87fbf57991be">
| Embeddable | low | Minor |
2,554,034,224 | ui | [feat]: Quickstart with Electron | ### Feature description
It would be nice to have a quickstart for Electron
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,554,045,486 | ollama | amd-llama-135M | https://huggingface.co/amd/AMD-Llama-135m
Fully open source, with an open-source license and an open-source dataset. | model request | low | Minor |
2,554,056,062 | go | cmd/go: modernize the output of 'go help packages' | ### Go version
go version go1.23-20240626-RC01 cl/646990413 +5a18e79687 X:fieldtrack,boringcrypto linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/usr/local/google/home/mtp/.cache/go-build'
GOENV='/usr/local/google/home/mtp/.config/go/env'
GOEXE=''
GOEXPERIMENT='fieldtrack,boringcrypto'
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/usr/local/google/home/mtp/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/usr/local/google/home/mtp/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/lib/google-golang'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/lib/google-golang/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23-20240626-RC01 cl/646990413 +5a18e79687 X:fieldtrack,boringcrypto'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/usr/local/google/home/mtp/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2230501016=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Run: `go help packages`.
### What did you see happen?
The command output:
```
Many commands apply to a set of packages:
go <action> [packages]
Usually, [packages] is a list of import paths.
An import path that is a rooted path or that begins with
a . or .. element is interpreted as a file system path and
denotes the package in that directory.
Otherwise, the import path P denotes the package found in
the directory DIR/src/P for some DIR listed in the GOPATH
environment variable (For more details see: 'go help gopath').
If no import paths are given, the action applies to the
package in the current directory.
There are four reserved names for paths that should not be used
for packages to be built with the go tool:
- "main" denotes the top-level package in a stand-alone executable.
- "all" expands to all packages in the main module (or workspace modules) and
their dependencies, including dependencies needed by tests of any of those. In
GOPATH mode, "all" expands to all packages found in all the GOPATH trees.
- "std" is like all but expands to just the packages in the standard
Go library.
- "cmd" expands to the Go repository's commands and their
internal libraries.
Package names that correspond to complete import paths found in the standard
library can be used as patterns without further qualification. For instance,
"fmt" refers to the standard library's package fmt, but "http" alone for
package http could not be used to refer to import path "net/http". Instead,
the complete pattern "net/http" must be used.
Import paths beginning with "cmd/" only match source code in
the Go repository.
An import path is a pattern if it includes one or more "..." wildcards,
each of which can match any string, including the empty string and
strings containing slashes. Such a pattern expands to all package
directories found in the GOPATH trees with names matching the
patterns.
To make common patterns more convenient, there are two special cases.
First, /... at the end of the pattern can match an empty string,
so that net/... matches both net and packages in its subdirectories, like net/http.
Second, any slash-separated pattern element containing a wildcard never
participates in a match of the "vendor" element in the path of a vendored
package, so that ./... does not match packages in subdirectories of
./vendor or ./mycode/vendor, but ./vendor/... and ./mycode/vendor/... do.
Note, however, that a directory named vendor that itself contains code
is not a vendored package: cmd/vendor would be a command named vendor,
and the pattern cmd/... matches it.
See golang.org/s/go15vendor for more about vendoring.
An import path can also name a package to be downloaded from
a remote repository. Run 'go help importpath' for details.
Every package in a program must have a unique import path.
By convention, this is arranged by starting each path with a
unique prefix that belongs to you. For example, paths used
internally at Google all begin with 'google', and paths
denoting remote repositories begin with the path to the code,
such as 'github.com/user/repo'. Package patterns should include this prefix.
For instance, a package called 'http' residing under 'github.com/user/repo',
would be addressed with the fully-qualified pattern:
'github.com/user/repo/http'.
Packages in a program need not have unique package names,
but there are two reserved package names with special meaning.
The name main indicates a command, not a library.
Commands are built into binaries and cannot be imported.
The name documentation indicates documentation for
a non-Go program in the directory. Files in package documentation
are ignored by the go command.
As a special case, if the package list is a list of .go files from a
single directory, the command is applied to a single synthesized
package made up of exactly those files, ignoring any build constraints
in those files and ignoring any other files in the directory.
Directory and file names that begin with "." or "_" are ignored
by the go tool, as are directories named "testdata".
```
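The wildcard and vendor rules quoted in the help text above can be sketched as a small matcher. This is an editor's illustrative simplification, not the go tool's actual algorithm: the function name is invented, and the real tool additionally consults the filesystem to decide whether a `vendor` directory really holds vendored packages (the `cmd/vendor` caveat in the quoted text), which this sketch ignores.

```python
import re

def pattern_to_matcher(pattern):
    """Return a predicate matching import paths against a Go-style pattern.

    Per the quoted rules: "..." matches any string (including the empty
    string and strings containing slashes), and a trailing "/..." may also
    match the bare prefix, so "net/..." matches both "net" and "net/http".
    Simplified sketch -- not the go tool's implementation.
    """
    # Translate the pattern into a regex: each "..." becomes ".*".
    parts = [re.escape(p) for p in pattern.split("...")]
    regex = ".*".join(parts)
    # Special case: a trailing "/..." also matches the bare prefix.
    if pattern.endswith("/..."):
        regex = "(?:%s|%s)" % (re.escape(pattern[:-4]), regex)
    compiled = re.compile("^%s$" % regex)

    def match(path):
        # A wildcarded pattern never matches through a "vendor" element
        # unless the pattern names "vendor" explicitly, so "./..." skips
        # "./vendor/foo" but "./vendor/..." matches it. (Simplification:
        # the real tool checks whether the directory is actually vendored.)
        if "..." in pattern and "/vendor/" in "/%s/" % path:
            if "/vendor/" not in "/%s/" % pattern:
                return False
        return bool(compiled.match(path))

    return match
```

For example, `pattern_to_matcher("net/...")` accepts both `net` and `net/http`, while `pattern_to_matcher("./...")` rejects `./vendor/foo` unless the pattern is written as `./vendor/...`.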
### What did you expect to see?
While working on #69653, @robpike and I noted that the output of `go help packages` appears to be in need of modernization. A few key points regarding the output:
1. It lacks coverage of the module concept (see Gerrit review log at https://go-review.googlesource.com/c/go/+/616257 for the precise commentary on what was confusing).
2. The quotation style is internally inconsistent and inconsistent with some of the other long-form help outputs (e.g., `go help testflag`).
3. There are a few elements that are not quoted that should be for visual clarity.
Overall, the information feels correct for mid-2010s vintage Go but less so for today's version, so a fresh pair of eyes would help considerably.
I would be happy to contribute a secondary review of any changes to this documentation, or to contribute nos. 2–3 myself. I feel less comfortable addressing Rob's commentary on the modules aspect. | Documentation,NeedsInvestigation | low | Critical |
2,554,099,046 | godot | The sprite texture in the inherited scene disappears for some reason | ### Tested versions
Godot v4.3.stable
### System information
Godot v4.3.stable - macOS 14.6.1 - Vulkan (Mobile) - integrated Apple M2 - Apple M2 (8 Threads)
### Issue description
Error: scene/resources/packed_scene.cpp:254 - Node 'StaticBody2D/Sprite2D' was modified from inside an instance, but it has vanished.
Node not found: "StaticBody2D/Sprite2D" (relative to "Object").
https://github.com/user-attachments/assets/a93a468b-7c4d-4070-9cc6-223ea7debae7
### Steps to reproduce
https://github.com/user-attachments/assets/9d786b54-7f65-4a51-a9ed-ac8be653c7eb
### Minimal reproduction project (MRP)
N/A | topic:core,needs testing | low | Critical |
2,554,102,757 | next.js | Next Font: {randomlowercaseletter}.default / {randomlowercaseletter}.{name of a google font} is not a function | ### Link to the code that reproduces this issue
https://github.com/JackatDJL/Athenetz-SV/tree/athe-12-next-font-randomlowercaseletterdefault
### To Reproduce
git clone https://github.com/JackatDJL/Athenetz-SV.git
cd Athenetz-SV
git fetch
git checkout athe-12-next-font-randomlowercaseletterdefault
corepack enable
yarn install
yarn build
### Current vs. Expected behavior
I'm in pain ;(
I traced the issue to somewhere in next/font.
If I use …/google with a Google font like Oxanium, I get this error (tested on Windows, Linux, and Vercel): {}.{fontname}
```
@athenetz-sv/wahlen:build: at 59962 (/vercel/path0/apps/wahlen/.next/server/app/page.js:54:122230)
00:24:41.560@athenetz-sv/wahlen:build: at t (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:128)
00:24:41.560@athenetz-sv/wahlen:build: at 11215 (/vercel/path0/apps/wahlen/.next/server/app/page.js:54:121166)
00:24:41.560@athenetz-sv/wahlen:build: at t (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:128)
00:24:41.560@athenetz-sv/wahlen:build: at 26658 (/vercel/path0/apps/wahlen/.next/server/app/page.js:1:3999)
00:24:41.560@athenetz-sv/wahlen:build: at Object.t [as require] (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:128)
00:24:41.560@athenetz-sv/wahlen:build: at require (/vercel/path0/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:16:18228)
00:24:41.560@athenetz-sv/wahlen:build: at i (/vercel/path0/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:12:88294)
00:24:41.560@athenetz-sv/wahlen:build: at /vercel/path0/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:12:98817
00:24:41.560@athenetz-sv/wahlen:build: at /vercel/path0/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:12:98904
00:24:41.560@athenetz-sv/wahlen:build: Generating static pages (3/4)
00:24:41.560@athenetz-sv/wahlen:build: ✓ Generating static pages (4/4)
00:24:41.584@athenetz-sv/wahlen:build:
00:24:41.584@athenetz-sv/wahlen:build: > Export encountered errors on following paths:
00:24:41.585@athenetz-sv/wahlen:build: /page: /
00:24:41.677@athenetz-sv/wahlen:build: ERROR: command finished with error: command (/vercel/path0/apps/wahlen) /yarn1/node_modules/yarn/bin/yarn run build exited (1)
00:24:41.677@athenetz-sv/wahlen#build: command (/vercel/path0/apps/wahlen) /yarn1/node_modules/yarn/bin/yarn run build exited (1)
00:24:41.680
00:24:41.680 Tasks: 5 successful, 6 total
00:24:41.680 Cached: 5 cached, 6 total
00:24:41.680 Time: 21.098s
00:24:41.681Summary: /vercel/path0/.turbo/runs/2mfmD52ot720RrLjZ4R4lRNAc7Y.json
00:24:41.681 Failed: @athenetz-sv/wahlen#build
00:24:41.681
```
And if I try to use local fonts instead, I get {}.default:
```
@athenetz-sv/wahlen:build: TypeError: (0 , r.default) is not a function
01:18:56.873@athenetz-sv/wahlen:build: at 40134 (/vercel/path0/apps/wahlen/.next/server/app/_not-found/page.js:1:6048)
01:18:56.873@athenetz-sv/wahlen:build: at t (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:128)
01:18:56.873@athenetz-sv/wahlen:build: at 77718 (/vercel/path0/apps/wahlen/.next/server/app/_not-found/page.js:1:3036)
01:18:56.873@athenetz-sv/wahlen:build: at t (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:128)
01:18:56.873@athenetz-sv/wahlen:build: at 53219 (/vercel/path0/apps/wahlen/.next/server/app/_not-found/page.js:1:645)
01:18:56.873@athenetz-sv/wahlen:build: at t (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:128)
01:18:56.874@athenetz-sv/wahlen:build: at o (/vercel/path0/apps/wahlen/.next/server/app/_not-found/page.js:1:6651)
01:18:56.874@athenetz-sv/wahlen:build: at /vercel/path0/apps/wahlen/.next/server/app/_not-found/page.js:1:6677
01:18:56.874@athenetz-sv/wahlen:build: at t.X (/vercel/path0/apps/wahlen/.next/server/webpack-runtime.js:1:1196)
01:18:56.874@athenetz-sv/wahlen:build: at /vercel/path0/apps/wahlen/.next/server/app/_not-found/page.js:1:6664
01:18:56.875@athenetz-sv/wahlen:build:
01:18:56.875@athenetz-sv/wahlen:build: > Build error occurred
01:18:56.877@athenetz-sv/wahlen:build: Error: Failed to collect page data for /_not-found
01:18:56.878@athenetz-sv/wahlen:build: at /vercel/path0/node_modules/next/dist/build/utils.js:1268:15
01:18:56.878@athenetz-sv/wahlen:build: at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
01:18:56.878@athenetz-sv/wahlen:build: type: 'Error'
01:18:56.878@athenetz-sv/wahlen:build: }
01:18:56.973@athenetz-sv/wahlen:build: ERROR: command finished with error: command (/vercel/path0/apps/wahlen) /yarn1/node_modules/yarn/bin/yarn run build exited (1)
01:18:56.973@athenetz-sv/wahlen#build: command (/vercel/path0/apps/wahlen) /yarn1/node_modules/yarn/bin/yarn run build exited (1)
01:18:56.976
01:18:56.976 Tasks: 5 successful, 6 total
01:18:56.977 Cached: 0 cached, 6 total
01:18:56.977 Time: 1m17.159s
01:18:56.977Summary: /vercel/path0/.turbo/runs/2mfso4wRRMrIX3sJrwMoSfT5u1K.json
01:18:56.977 Failed: @athenetz-sv/wahlen#build
```
These are the outputs from Vercel with version 14.2.12, and this is the local output with version 15.0.0-canary.172:
```
┌ @athenetz-sv/wahlen#build > cache miss, executing 9f037a7a61ca03e7
│ ▲ Next.js 15.0.0-canary.172
│ - Environments: .env.local
│
│ Creating an optimized production build ...
│ ⚠ Compiled with warnings
│
│ ../../node_modules/framer-motion/dist/cjs/index.js
│ Module not found: Can't resolve '@emotion/is-prop-valid' in 'D:\dev\Athenetz-SV\node_modules\framer-motion\dist\cjs'
│
│ Import trace for requested module:
│ ../../node_modules/framer-motion/dist/cjs/index.js
│ ../../packages/SV-UI/dist/layout/FloatingPanel.js
│ ./src/app/page.tsx
│
│ ✓ Linting and checking validity of types
│ Collecting page data ..TypeError: (0 , o.default) is not a function
│ at 21459 (D:\dev\Athenetz-SV\apps\wahlen.next\server\app_not-found\page.js:1:5028)
│ at t (D:\dev\Athenetz-SV\apps\wahlen.next\server\webpack-runtime.js:1:128)
│ at D:\dev\Athenetz-SV\apps\wahlen.next\server\app_not-found\page.js:1:3096
│ at t.a (D:\dev\Athenetz-SV\apps\wahlen.next\server\webpack-runtime.js:1:881)
│ at 76560 (D:\dev\Athenetz-SV\apps\wahlen.next\server\app_not-found\page.js:1:2837)
│ at Function.t (D:\dev\Athenetz-SV\apps\wahlen.next\server\webpack-runtime.js:1:128)
│ at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
│ at async collectGenerateParams (D:\dev\Athenetz-SV\node_modules\next\dist\build\utils.js:937:21)
│ at async D:\dev\Athenetz-SV\node_modules\next\dist\build\utils.js:1172:17
│ at async Span.traceAsyncFn (D:\dev\Athenetz-SV\node_modules\next\dist\trace\trace.js:157:20)
│
│ > Build error occurred
│ Error: Failed to collect page data for /_not-found
│ at D:\dev\Athenetz-SV\node_modules\next\dist\build\utils.js:1272:15
│ at process.processTicksAndRejections (node:internal/process/task_queues:105:5) {
│ type: 'Error'
│ }
│ Collecting page data .
│ command finished with error: command (D:\dev\Athenetz-SV\apps\wahlen) C:\Users\jackr\AppData\Local\Temp\xfs-cd78823a\yarn.cmd run build exited (1)
└────>
```
@athenetz-sv/wahlen#build: command (D:\dev\Athenetz-SV\apps\wahlen) C:\Users\jackr\AppData\Local\Temp\xfs-cd78823a\yarn.cmd run build exited (1)
Thanks for reading the entire thing ❤
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 32760
Available CPU cores: 12
Binaries:
Node: 22.9.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.172
eslint-config-next: 14.2.13
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Font (next/font)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), Vercel (Deployed)
### Additional context
Original issue with Next.js 14.2.12.
Still an issue in Next.js v15.0.0-canary.172.
I traced the issue to somewhere in next/font.
Tested and failed on Windows, Linux (Codespace), and Vercel. | bug,Font (next/font) | low | Critical |
2,554,103,390 | godot | Unexpected errors when creating SpriteFrames: Condition "plugins_list.has(p_plugin)" is true. | ### Tested versions
- Reproducible: Godot 4.0 stable, Godot 4.0.1 stable, Godot 4.0.2 stable, Godot 4.0.3 stable, Godot 4.0.4 stable, Godot 4.1 stable, Godot 4.1.1 stable, Godot 4.1.2 stable, Godot 4.1.4 stable, Godot 4.2 stable, Godot 4.2.1 stable, Godot 4.2.2 stable, Godot 4.3 stable, Godot 4.4.1dev, Godot 4.4.2dev
- Not reproducible: Godot 3.6 stable
### System information
Windows10 - Godot 4.4.dev2 - Vulkan(Forward+)
### Issue description
<img width="1501" alt="4 4 2" src="https://github.com/user-attachments/assets/6925652b-c0f1-4a1b-8ab2-252e1133dfe0">
<img width="1920" alt="4 2 2" src="https://github.com/user-attachments/assets/6cb925bd-5be0-4a0b-9bc8-7e5a01405a72">
https://github.com/user-attachments/assets/e57d4a80-f46e-4839-951b-1f6612c5ac5f
After creating an AnimatedSprite2D, I added a new SpriteFrames. Clicking on it again triggers the warning: **'editor/editor_node.cpp:8116 - Condition "plugins_list.has(p_plugin)" is true'**. This bug also exists in Godot 4.3.1 stable and every Godot 4 version.
### Steps to reproduce
https://github.com/user-attachments/assets/a4abda81-5a10-499f-89b1-f56d49e47ec2
0. Open Godot 4.4 dev2.
1. Create a new scene with any type of root node (in this example, it's Node2D).
2. Add a child node of type AnimatedSprite2D.
3. Click on the AnimatedSprite2D node, and in the inspector under the AnimatedSprite2D section, click on Animation/Sprite Frames and create a new SpriteFrames.
4. **(Important) Click on the root node, then click back on the AnimatedSprite2D node, and click on Animation/Sprite Frames in the inspector under the AnimatedSprite2D section.**
5. Check the Output.
6. Click on the root node again, then click back on the AnimatedSprite2D node. Repeat this step, and you will see that every time you click on the AnimatedSprite2D node, the error "editor/editor_node.cpp:8116 - Condition 'plugins_list.has(p_plugin)' is true" will be triggered in the Output.
### Minimal reproduction project (MRP)
[mrp-for-the-editor-editor_node.cpp-8116---condition--plugins_list.has(p_plugin)--is-true'.zip](https://github.com/user-attachments/files/17173715/mrp-for-the-editor-editor_node.cpp-8116---condition--plugins_list.has.p_plugin.--is-true.zip)
| bug,topic:editor | low | Critical |
2,554,113,764 | stable-diffusion-webui | [Bug]: Can't install on AMD Navi 1, PyTorch HTTP 403 error | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Can't install at all on my AMD RX 5700 XT; the installer fails with:
```
ERROR: HTTP error 403 while getting https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl
ERROR: Could not install requirement torch==2.0.0.dev20230209+rocm5.2 from https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl because of HTTP error 403 Client Error: Forbidden for url: https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl for URL https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl
```
while trying to install the webui.
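Editor's note: the rocm5.2 nightly wheel requested above appears to have been removed from the index. One plausible workaround (an assumption, not a verified fix) is to override the install command through the webui's `TORCH_COMMAND` variable in `webui-user.sh`, pointing it at a stable ROCm wheel index that is still published:

```shell
# webui-user.sh -- sketch of a possible override (unverified assumption):
# replace the removed rocm5.2 nightly with a stable ROCm build.
# The index URL below is an assumption; check pytorch.org for the index
# matching your installed ROCm version before using it.
export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.4.2"
```

With this set, `webui.sh` should pick up the overridden command instead of the hard-coded nightly URL.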
### Steps to reproduce the problem
1. Try to install natively on Ubuntu
### What should have happened?
The webui should have installed PyTorch 2.0.0 (ROCm 5.2).
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
Can't do that; it doesn't work.
### Console logs
```Shell
./webui.sh --listen --opt-sub-quad-attention --lowvram --precision full --no-half
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on steffi user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.10.15 (main, Sep 7 2024, 18:35:38) [GCC 13.2.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Collecting torch==2.0.0.dev20230209+rocm5.2
ERROR: HTTP error 403 while getting https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl
ERROR: Could not install requirement torch==2.0.0.dev20230209+rocm5.2 from https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl because of HTTP error 403 Client Error: Forbidden for url: https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl for URL https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl
[notice] A new release of pip is available: 23.0.1 -> 24.2
[notice] To update, run: pip install --upgrade pip
Traceback (most recent call last):
File "/home/steffi/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/home/steffi/stable-diffusion-webui/launch.py", line 39, in main
prepare_environment()
File "/home/steffi/stable-diffusion-webui/modules/launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "/home/steffi/stable-diffusion-webui/modules/launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "/home/steffi/stable-diffusion-webui/venv/bin/python" -m pip install https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/nightly/rocm5.2/torchvision-0.15.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl
Error code: 1
```
### Additional information
_No response_ | bug-report | low | Critical |
2,554,122,119 | rust | ICE: `expected wide pointer extra data` | <!--
[31mICE[0m: Rustc ./a.rs '-Zmir-opt-level=5 -Zvalidate-mir -ooutputfile -Zdump-mir-dir=dir' ' | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'from_str_radix_int: must lie in the range `[2, 36]`', /rustc/851f698682aa2e4c226e1a2c1af30adbcb63aae2/library/core/src/num/mod.rs:1563:1 ' |'', ' | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'from_str_radix_int: must lie in the range `[2, 36]`', /rustc/851f698682aa2e4c226e1a2c1af30adbcb63aae2/library/core/src/num/mod.rs:1563:1 ' |''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
fn main() {
extern "C" {
static symbol: [usize];
}
println!("{}", symbol[0]);
}
````
original:
````rust
//~ ERROR rustc_outlives
const _: bool = false && false;
const _: bool = true && false;
const _: bool = {
let mut x = true && false;
x
};
const _TOO_LOW: () = { u64::from_str_radix("12345ABCD" 1); };
fn main() {
extern "C" {
static symbol: [usize]; //~ ERROR: the size for values of type
}
println!("{}", symbol[0]);
//~^ ERROR: extern static is unsafe
}
````
Version information
````
rustc 1.83.0-nightly (851f69868 2024-09-28)
binary: rustc
commit-hash: 851f698682aa2e4c226e1a2c1af30adbcb63aae2
commit-date: 2024-09-28
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir`
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0277]: the size for values of type `[usize]` cannot be known at compilation time
--> /tmp/icemaker_global_tempdir.UzS9t28kBj7Z/rustc_testrunner_tmpdir_reporting.ClR4SdutAcRi/mvce.rs:3:24
|
3 | static symbol: [usize];
| ^^^^^^^ doesn't have a size known at compile-time
|
= help: the trait `Sized` is not implemented for `[usize]`
error[E0133]: use of extern static is unsafe and requires unsafe function or block
--> /tmp/icemaker_global_tempdir.UzS9t28kBj7Z/rustc_testrunner_tmpdir_reporting.ClR4SdutAcRi/mvce.rs:5:20
|
5 | println!("{}", symbol[0]);
| ^^^^^^ use of extern static
|
= note: extern statics are not controlled by the Rust type system: invalid data, aliasing violations or data races will cause undefined behavior
error: internal compiler error: /rustc/851f698682aa2e4c226e1a2c1af30adbcb63aae2/compiler/rustc_const_eval/src/interpret/place.rs:36:17: expected wide pointer extra data (e.g. slice length or trait object vtable)
thread 'rustc' panicked at /rustc/851f698682aa2e4c226e1a2c1af30adbcb63aae2/compiler/rustc_const_eval/src/interpret/place.rs:36:17:
Box<dyn Any>
stack backtrace:
0: 0x7407a38ceaba - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h7900a3ecde60f42e
1: 0x7407a40037e6 - core::fmt::write::h550ce993e33b1b64
2: 0x7407a51ae991 - std::io::Write::write_fmt::h934b266016a01ab9
3: 0x7407a38ce912 - std::sys::backtrace::BacktraceLock::print::hfed7190cfb479085
4: 0x7407a38d0e31 - std::panicking::default_hook::{{closure}}::hf3239a9153976196
5: 0x7407a38d0c64 - std::panicking::default_hook::ha9ae936b7eae362b
6: 0x7407a299a03f - std[7abf0287877b87c3]::panicking::update_hook::<alloc[dcdda265b4d77909]::boxed::Box<rustc_driver_impl[2b0d10d4d558aac7]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7407a38d1548 - std::panicking::rust_panic_with_hook::h62359f8c356e73f3
8: 0x7407a29d41e1 - std[7abf0287877b87c3]::panicking::begin_panic::<rustc_errors[47f0f04182922d6e]::ExplicitBug>::{closure#0}
9: 0x7407a29c7286 - std[7abf0287877b87c3]::sys::backtrace::__rust_end_short_backtrace::<std[7abf0287877b87c3]::panicking::begin_panic<rustc_errors[47f0f04182922d6e]::ExplicitBug>::{closure#0}, !>
10: 0x7407a29c2769 - std[7abf0287877b87c3]::panicking::begin_panic::<rustc_errors[47f0f04182922d6e]::ExplicitBug>
11: 0x7407a29dda71 - <rustc_errors[47f0f04182922d6e]::diagnostic::BugAbort as rustc_errors[47f0f04182922d6e]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7407a30020a4 - rustc_middle[8066596e1db03fa9]::util::bug::opt_span_bug_fmt::<rustc_span[19f86c6d4b17e07c]::span_encoding::Span>::{closure#0}
13: 0x7407a2fe7dea - rustc_middle[8066596e1db03fa9]::ty::context::tls::with_opt::<rustc_middle[8066596e1db03fa9]::util::bug::opt_span_bug_fmt<rustc_span[19f86c6d4b17e07c]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7407a2fe7c7b - rustc_middle[8066596e1db03fa9]::ty::context::tls::with_context_opt::<rustc_middle[8066596e1db03fa9]::ty::context::tls::with_opt<rustc_middle[8066596e1db03fa9]::util::bug::opt_span_bug_fmt<rustc_span[19f86c6d4b17e07c]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7407a05cd630 - rustc_middle[8066596e1db03fa9]::util::bug::bug_fmt
16: 0x7407a312dcf3 - <rustc_const_eval[45b572bde6308908]::interpret::place::MPlaceTy as rustc_const_eval[45b572bde6308908]::interpret::projection::Projectable<rustc_middle[8066596e1db03fa9]::mir::interpret::pointer::CtfeProvenance>>::len::<rustc_const_eval[45b572bde6308908]::const_eval::dummy_machine::DummyMachine>
17: 0x7407a4a53010 - <rustc_mir_transform[7719501850dfb079]::gvn::VnState>::insert
18: 0x7407a4a4519c - <rustc_mir_transform[7719501850dfb079]::gvn::VnState>::simplify_rvalue
19: 0x7407a16394e4 - <rustc_mir_transform[7719501850dfb079]::gvn::GVN as rustc_mir_transform[7719501850dfb079]::pass_manager::MirPass>::run_pass
20: 0x7407a400bd8d - rustc_mir_transform[7719501850dfb079]::pass_manager::run_passes_inner
21: 0x7407a4a075e2 - rustc_mir_transform[7719501850dfb079]::optimized_mir
22: 0x7407a4a05e9d - rustc_query_impl[6f1e8f04bf490f6a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[6f1e8f04bf490f6a]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[8066596e1db03fa9]::query::erase::Erased<[u8; 8usize]>>
23: 0x7407a402fa38 - rustc_query_system[808f2ed519714d18]::query::plumbing::try_execute_query::<rustc_query_impl[6f1e8f04bf490f6a]::DynamicConfig<rustc_query_system[808f2ed519714d18]::query::caches::DefIdCache<rustc_middle[8066596e1db03fa9]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[6f1e8f04bf490f6a]::plumbing::QueryCtxt, false>
24: 0x7407a402eff3 - rustc_query_impl[6f1e8f04bf490f6a]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
25: 0x7407a1480699 - <rustc_middle[8066596e1db03fa9]::ty::context::TyCtxt>::instance_mir
26: 0x7407a43a47dc - rustc_interface[a8f766607a628292]::passes::run_required_analyses
27: 0x7407a4b068de - rustc_interface[a8f766607a628292]::passes::analysis
28: 0x7407a4b068b1 - rustc_query_impl[6f1e8f04bf490f6a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[6f1e8f04bf490f6a]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[8066596e1db03fa9]::query::erase::Erased<[u8; 1usize]>>
29: 0x7407a4e7892e - rustc_query_system[808f2ed519714d18]::query::plumbing::try_execute_query::<rustc_query_impl[6f1e8f04bf490f6a]::DynamicConfig<rustc_query_system[808f2ed519714d18]::query::caches::SingleCache<rustc_middle[8066596e1db03fa9]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[6f1e8f04bf490f6a]::plumbing::QueryCtxt, false>
30: 0x7407a4e7860f - rustc_query_impl[6f1e8f04bf490f6a]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
31: 0x7407a4d1825e - rustc_interface[a8f766607a628292]::interface::run_compiler::<core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>, rustc_driver_impl[2b0d10d4d558aac7]::run_compiler::{closure#0}>::{closure#1}
32: 0x7407a4de3290 - std[7abf0287877b87c3]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[a8f766607a628292]::util::run_in_thread_with_globals<rustc_interface[a8f766607a628292]::util::run_in_thread_pool_with_globals<rustc_interface[a8f766607a628292]::interface::run_compiler<core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>, rustc_driver_impl[2b0d10d4d558aac7]::run_compiler::{closure#0}>::{closure#1}, core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>>::{closure#0}, core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>>
33: 0x7407a4de38fa - <<std[7abf0287877b87c3]::thread::Builder>::spawn_unchecked_<rustc_interface[a8f766607a628292]::util::run_in_thread_with_globals<rustc_interface[a8f766607a628292]::util::run_in_thread_pool_with_globals<rustc_interface[a8f766607a628292]::interface::run_compiler<core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>, rustc_driver_impl[2b0d10d4d558aac7]::run_compiler::{closure#0}>::{closure#1}, core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>>::{closure#0}, core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[9c5d060eac7beb80]::result::Result<(), rustc_span[19f86c6d4b17e07c]::ErrorGuaranteed>>::{closure#1} as core[9c5d060eac7beb80]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
34: 0x7407a4de3ceb - std::sys::pal::unix::thread::Thread::new::thread_start::ha5204e3fc18d4472
35: 0x7407a656039d - <unknown>
36: 0x7407a65e549c - <unknown>
37: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.83.0-nightly (851f69868 2024-09-28) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z dump-mir-dir=dir
query stack during panic:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 3 previous errors
Some errors have detailed explanations: E0133, E0277.
For more information about an error, try `rustc --explain E0133`.
```
</p>
</details>
<!--
query stack:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
-->
| I-ICE,T-compiler,C-bug,S-has-mcve,S-bug-has-test,A-mir-opt-GVN | low | Critical |
2,554,123,177 | rust | Allow restricting search results to come from a subset of crates | When docs are built for a set of crates (e.g. <https://doc.rust-lang.org/nightly/nightly-rustc/>), it's easy to find yourself looking at a wide collection of crates.
AFAIK there's currently no way to restrict searches to a **subset** of crates. The dropdown for filtering crates is a single-choice dropdown. I run into this quite often: I usually have a vague idea of which compiler crates the thing I'm looking for is in, but I do not know *exactly* which compiler crate. It would be very helpful to me if it were possible to filter on multiple crates.
Apologies if this has been brought up before, I couldn't find a similar issue in the issue tracker based on naive searches. | T-rustdoc,C-feature-request,A-rustdoc-search | low | Minor |
2,554,164,088 | rust | On Windows `is_terminal` always returns `false` if the handle is not opened with read access | Minimal example:
```rust
use std::io::IsTerminal;
fn main() {
let conout = r"\\.\CONOUT$";
let stdout = std::fs::File::options().write(true).open(conout).unwrap();
assert!(stdout.is_terminal());
}
```
This is because we use [`GetConsoleMode`](https://learn.microsoft.com/en-us/windows/console/getconsolemode) to determine if a handle is a console handle or not and the docs for `GetConsoleMode` state:
> The handle must have the GENERIC_READ access right.
We could check the error of `GetConsoleMode` and, if it's `ERROR_ACCESS_DENIED`, then use `GetFileType` to see if it returns `FILE_TYPE_CHAR`. We can't use `GetFileType` alone because, for example, the `NUL` device claims to be a character device.
EDIT: Maybe simpler, we could check if `GetConsoleMode` errors with `ERROR_INVALID_HANDLE` though I'm not 100% sure it'll always return that for non-console handles.
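A sketch of that decision logic, with the Windows API interactions factored out into plain inputs so only the classification rule is shown — the concrete constants and the exact error `GetConsoleMode` returns for `NUL` are assumptions for illustration, not verified behavior:

```rust
// Hypothetical fallback classification for a handle. The Windows calls are
// replaced by their results: `console_mode_result` stands in for
// GetConsoleMode (Err carries the GetLastError code), and `file_type`
// stands in for GetFileType's return value.
const ERROR_ACCESS_DENIED: u32 = 5;
const ERROR_INVALID_HANDLE: u32 = 6;
const FILE_TYPE_CHAR: u32 = 0x0002;

fn is_terminal_fallback(console_mode_result: Result<(), u32>, file_type: u32) -> bool {
    match console_mode_result {
        // GetConsoleMode succeeded: definitely a console handle.
        Ok(()) => true,
        // Write-only console handles fail with ERROR_ACCESS_DENIED; in that
        // case fall back to GetFileType, requiring FILE_TYPE_CHAR. The
        // access-denied signal is needed because devices like NUL also
        // claim to be character devices.
        Err(ERROR_ACCESS_DENIED) => file_type == FILE_TYPE_CHAR,
        // Any other error (e.g. ERROR_INVALID_HANDLE): not a console.
        Err(_) => false,
    }
}

fn main() {
    // Write-only console handle: access denied, but a character device.
    assert!(is_terminal_fallback(Err(ERROR_ACCESS_DENIED), FILE_TYPE_CHAR));
    // A handle that fails GetConsoleMode with some other error.
    assert!(!is_terminal_fallback(Err(ERROR_INVALID_HANDLE), FILE_TYPE_CHAR));
    println!("ok");
}
```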
Alternatively we could just close this issue and say this is not our problem. Most applications and runtimes don't do anything special here and simply treat it as a non-console handle. | O-windows,C-bug,T-libs,A-io | low | Critical |
2,554,171,685 | godot | Editor memory usage increases every time the project is started | ### Tested versions
- Reproducible: 4.3stable(steam)
- Not Reproducible: 4.2stable(steam), 4.3stable(website)
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4080 SUPER (NVIDIA; 32.0.15.6109) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
Using Run Project (F5) repeatedly increases the editor's RAM usage by 100-200 MB each time
### Steps to reproduce
Run Project (F5) repeatedly
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing,regression | low | Major |
2,554,177,860 | react-native | `ScrollView` `contentContainerStyle`'s `flexGrow` doesn't account for `contentInsetAdjustmentBehavior` or `automaticallyAdjustKeyboardInsets` | ### Description
This is a reopen of https://github.com/facebook/react-native/issues/25282, which is still reproducible in the latest React Native version.
### Steps to reproduce
1. Create a `ScrollView` with `contentInsetAdjustmentBehavior="automatic"` and `contentContainerStyle={{flexGrow: 1}}`.
2. Observe that the content container grows taller than the ScrollView's height, even though the two are expected to match once the safe-area insets are accounted for.
See the Expo snack
### React Native Version
0.75.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
N/A — easy to reproduce
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/@louay/scrollview-layout-contentinset-issue
### Screenshots and Videos
<p float="left"><img src="https://user-images.githubusercontent.com/349850/59601225-5512d780-90fb-11e9-94fb-59ff24c9a24e.PNG" width=300/> <img src="https://user-images.githubusercontent.com/349850/59601229-56dc9b00-90fb-11e9-8ce3-4d3afa639ba4.PNG" width=300/></p> | Issue: Author Provided Repro,Component: ScrollView,API: Keyboard | low | Minor |
2,554,189,150 | flutter | Chinese minority languages 'Tibetan Standard' and 'Uighur' are not supported | ### Steps to reproduce


### This is my main.dart code
```
import 'package:flutter/material.dart';
import 'package:flutter_localizations/flutter_localizations.dart';
import 'generated/l10n.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
localizationsDelegates: [
GlobalCupertinoLocalizations.delegate,
GlobalMaterialLocalizations.delegate,
GlobalWidgetsLocalizations.delegate,
S.delegate
],
//Internationalization
supportedLocales: S.delegate.supportedLocales,
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
S.current.chooseUpDownAddress,
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
### This is my demo code zip (389 KB):
[lanauge_test.zip](https://github.com/user-attachments/files/17174291/lanauge_test.zip)
### Actual results
The Chinese minority languages 'Tibetan Standard' and 'Uighur' are not supported; selecting them produces errors in the console.
English and zh work well, but the language codes bo and ug produce errors in the console:

### Logs
_No response_
### Flutter Doctor output
```
yaochangliang@yaochangliang ~ % flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.19.3, on macOS 12.7.6 21H1320 darwin-x64, locale
zh-Hans-CN)
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
! Some Android licenses not accepted. To resolve this, run: flutter doctor
--android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 14.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.2)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.2.5)
[✓] Connected device (3 available)
[✓] Network resources
! Doctor found issues in 1 category.
``` | framework,a: internationalization,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Critical |
2,554,189,172 | tensorflow | Segmentation fault (core dumped) in `tf.profiler.experimental.Profile` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-dev20240925
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Under specific inputs, `tf.profiler.experimental.Profile` triggered a crash.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
profiler_options = tf.profiler.experimental.ProfilerOptions(
host_tracer_level=999,
python_tracer_level=-1,
device_tracer_level=10,
delay_ms=None
)
with tf.profiler.experimental.Profile(None, options=profiler_options):
a = tf.constant(1)
b = tf.constant(2)
c = a + b
print(c.numpy())
```
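Until the underlying crash is fixed, a Python-level guard along these lines would reject the arguments before they reach native code. The accepted ranges are assumptions based on the documented 0-3 tracer levels, not TensorFlow's actual validation:

```python
def validate_profiler_args(logdir, host_tracer_level, python_tracer_level,
                           device_tracer_level):
    """Raise ValueError instead of letting invalid options reach native code.

    Illustrative bounds only: assumes tracer levels must be ints in [0, 3]
    and that logdir must be a non-empty string.
    """
    if not isinstance(logdir, str) or not logdir:
        raise ValueError(f"logdir must be a non-empty string, got {logdir!r}")
    for name, level in [("host_tracer_level", host_tracer_level),
                        ("python_tracer_level", python_tracer_level),
                        ("device_tracer_level", device_tracer_level)]:
        if not isinstance(level, int) or not 0 <= level <= 3:
            raise ValueError(f"{name} must be an int in [0, 3], got {level!r}")
```

With the reproducer's values (`logdir=None`, `python_tracer_level=-1`) this raises a `ValueError` instead of reaching the segfault.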
### Relevant log output
```shell
2024-09-28 20:07:36.902909: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.2024-09-28 20:07:36.966049: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 20:07:36.998027: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 20:07:37.002984: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered2024-09-28 20:07:37.055864: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Segmentation fault (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,554,195,074 | rust | Tracking Issue for `proc_macro::ToTokens` | Feature gate: `#![feature(proc_macro_totokens)]`
This is a tracking issue for adding a `ToTokens` trait in `proc_macro`, which can then be used in `proc_macro::quote!`. See [the ACP](https://github.com/rust-lang/libs-team/issues/431) for motivation.
### Public API
This will be similar to [`quote::ToTokens`](https://docs.rs/quote/latest/quote/trait.ToTokens.html). That can be used as a reference for implementation details since it already provides all of these.
```rust
// proc_macro
pub trait ToTokens {
fn to_tokens(&self, tokens: &mut TokenStream);
fn to_token_stream(&self) -> TokenStream { ... }
fn into_token_stream(self) -> TokenStream
where Self: Sized { ... }
}
// Aggregate token types
impl ToTokens for TokenTree { /* ... */ }
impl ToTokens for TokenStream { /* ... */ }
// Single token types
impl ToTokens for Literal { /* ... */ }
impl ToTokens for Ident { /* ... */ }
impl ToTokens for Punct { /* ... */ }
impl ToTokens for Group { /* ... */ }
// Indirect types
impl<T: ToTokens + ?Sized> ToTokens for &T { /* ... */ }
impl<T: ToTokens + ?Sized> ToTokens for &mut T { /* ... */ }
impl<T: ToTokens + ?Sized> ToTokens for Box<T> { /* ... */ }
impl<T: ToTokens + ?Sized> ToTokens for Rc<T> { /* ... */ }
impl<T: ToTokens> ToTokens for Option<T> { /* ... */ }
impl<T: ToTokens + ToOwned + ?Sized> ToTokens for Cow<T> { /* ... */ }
// Types that can create `Literal`s
impl ToTokens for {u,i}{8,16,32,64,128} { /* ... */ }
impl ToTokens for f{16,32,64,128} { /* ... */ }
impl ToTokens for bool { /* ... */ }
impl ToTokens for char { /* ... */ }
impl ToTokens for str { /* ... */ }
impl ToTokens for String { /* ... */ }
impl ToTokens for CStr { /* ... */ }
impl ToTokens for CString { /* ... */ }
/* migrate the following APIs, if possible without breakage */
// currently `Extend<TokenStream>` and `Extend<TokenTree>`
impl Extend<T: ToTokens> for TokenStream { /* ... */ }
// currently `FromIterator<TokenStream>` and `FromIterator<TokenTree>`
impl FromIterator<T: ToTokens> for TokenStream { /* ... */ }
```
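For illustration only, the trait shape can be prototyped outside `proc_macro` by standing a plain `String` in for `TokenStream`; the default-method structure mirrors `quote::ToTokens`, and the real trait would of course append `TokenTree`s rather than text:

```rust
// Stand-in for proc_macro::TokenStream so the sketch runs on stable.
type TokenStream = String;

trait ToTokens {
    fn to_tokens(&self, tokens: &mut TokenStream);

    // The two convenience methods are derivable from `to_tokens`,
    // exactly as in quote::ToTokens.
    fn to_token_stream(&self) -> TokenStream {
        let mut tokens = TokenStream::new();
        self.to_tokens(&mut tokens);
        tokens
    }

    fn into_token_stream(self) -> TokenStream
    where
        Self: Sized,
    {
        self.to_token_stream()
    }
}

impl ToTokens for bool {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.push_str(if *self { "true" } else { "false" });
    }
}

// The blanket impl for references makes `&T` usable wherever `T` is.
impl<T: ToTokens + ?Sized> ToTokens for &T {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        (**self).to_tokens(tokens);
    }
}

fn main() {
    assert_eq!(true.to_token_stream(), "true");
    assert_eq!((&false).into_token_stream(), "false");
    println!("ok");
}
```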
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/431
- [x] Implementation in `proc_macro`: https://github.com/rust-lang/rust/pull/131441
- [ ] Update `proc_macro::quote!` to use these traits: #...
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
- What should this be named? `ToTokens` doesn't seem quite accurate, but I don't know what would be better (`ToTokenStream`? `ExtendTokenStream`? Those seem a bit clunky).
- Considering `impl<T: ToTokens> ToTokens for &T` is provided, should `to_tokens` take `self` by value rather than by reference so cloning isn't always necessary? (`fn to_tokens(self, tokens: &mut TokenStream)`)
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"SpriteOvO"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-macros,T-libs-api,C-tracking-issue,A-proc-macros,WG-macros | low | Major |
2,554,195,605 | tensorflow | Aborted (core dumped) in `tf.nn.max_pool/tf.nn.max_pool1d` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.17
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Under specific inputs, `tf.nn.max_pool` triggered a crash.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
invalid_kernel_size = -1
invalid_operation = tf.nn.max_pool(
tf.random.normal([1, 32, 32, 3]),
ksize=[1, invalid_kernel_size, invalid_kernel_size, 1],
strides=[1, 2, 2, 1],
padding='SAME'
)
```
```
import tensorflow as tf
import sys
ksize = sys.maxsize + 100 # Set to a value larger than sys.maxsize
input_tensor = tf.random.normal(shape=(2, 10, 4))
result = tf.nn.max_pool1d(input=input_tensor, ksize=ksize, strides=1, padding='SAME')
```
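In both reproducers the crash comes from kernel sizes that are negative or overflow an int32 reaching native code unchecked. A guard along these lines would catch them at the Python level; the bounds here are illustrative, not TensorFlow's own validation:

```python
def validate_ksize(ksize, max_reasonable=2**31 - 1):
    """Check pooling window sizes before calling tf.nn.max_pool*.

    `ksize` may be a single int or a sequence of ints, mirroring the
    tf.nn.max_pool argument. Raises ValueError on non-positive or
    int32-overflowing values instead of crashing in native code.
    """
    sizes = ksize if isinstance(ksize, (list, tuple)) else [ksize]
    for k in sizes:
        if not isinstance(k, int):
            raise ValueError(f"ksize entries must be ints, got {type(k).__name__}")
        if k <= 0:
            raise ValueError(f"ksize entries must be positive, got {k}")
        if k > max_reasonable:
            raise ValueError(f"ksize entry {k} exceeds int32 range")
```

Both failing calls above would raise here: `ksize=-1` fails the positivity check and `ksize=sys.maxsize + 100` fails the int32-range check.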
### Relevant log output
```shell
2024-09-28 20:26:47.491907: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-28 20:26:47.554171: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 20:26:47.606570: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 20:26:47.610539: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-28 20:26:47.639739: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-28 20:26:54.579839: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21471 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.9
2024-09-28 20:26:54.582099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1724 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9
2024-09-28 20:26:55.563477: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:531] Loaded cuDNN version 8907
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
F0000 00:00:1727526415.563805 147227 cuda_dnn.cc:1107] Check failed: cudnnSetPoolingNdDescriptor( handle_.get(), (pooling_descriptor.mode() == dnn::PoolingMode::kMaximum ? cudnn_max_pooling_mode : CUDNN_POOLING_AVERAGE_COUNT_EXCLUDE_PADDING), propagate_nans ? CUDNN_PROPAGATE_NAN : CUDNN_NOT_PROPAGATE_NAN, nd, shape.data(), padding.data(), strides.data()) == CUDNN_STATUS_SUCCESS (3 vs. 0)
*** Check failure stack trace: ***
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,554,195,752 | kubernetes | [Flaking Test] [sig-network] Conformance-EC2-arm64-master - Service is not reachable within 2m0s timeout | ### Which jobs are flaking?
- master-informing
[Conformance-EC2-arm64-master](https://testgrid.k8s.io/sig-release-master-informing#Conformance%20-%20EC2%20-%20arm64%20-%20master)
### Which tests are flaking?
- Kubernetes e2e suite.[It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance], [Prow](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-ec2-arm64-conformance-latest/1839326641159933952), [Triage](https://storage.googleapis.com/k8s-triage/index.html?test=Services%20should%20be%20able%20to%20change%20the%20type%20from%20ExternalName%20to%20NodePort)
- Kubernetes e2e suite.[It] [sig-network] Services should be able to create a functioning NodePort service [Conformance], [Triage](https://storage.googleapis.com/k8s-triage/index.html?test=Services%20should%20be%20able%20to%20create%20a%20functioning%20NodePort%20service)
- Kubernetes e2e suite.[It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance], [Triage](https://storage.googleapis.com/k8s-triage/index.html?test=Services%20should%20be%20able%20to%20switch%20session%20affinity%20for%20NodePort%20service)
- Kubernetes e2e suite.[It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance], [Triage](https://storage.googleapis.com/k8s-triage/index.html?test=Services%20should%20have%20session%20affinity%20work%20for%20NodePort%20service)
### Since when has it been flaking?
These tests have failed multiple times since September. View the Triage links above for more context.
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#Conformance%20-%20EC2%20-%20arm64%20-%20master
### Reason for failure (if possible)
The same error message appears each time, differing only in the endpoint:
```
{ failed [FAILED] service is not reachable within 2m0s timeout on endpoint 172.31.6.192:30233 over TCP protocol
In [It] at: k8s.io/kubernetes/test/e2e/network/service.go:1278 @ 09/26/24 16:31:40.542
}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig network
@kubernetes/release-team-release-signal | sig/network,kind/flake,priority/important-longterm,triage/accepted | low | Critical |
2,554,197,760 | godot | Successive calls to `CanvasItem.draw_primitive()` disregard the texture | ### Tested versions
Master as of writing: v4.4.dev.custom_build [76a135926]
See 'Issue description' for details.
### System information
Godot v4.4.dev2 - Windows 10.0.22631 - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4090 (NVIDIA; 31.0.15.3699) - AMD Ryzen 9 7950X 16-Core Processor (32 Threads)
### Issue description
After merging #92797 (bisected), successive calls to `CanvasItem.draw_primitive()` and `RenderingServer.canvas_item_add_primitive()` in the RD renderers keep using the texture of the first call. Adding other draw commands between the calls fixes the issue as it disables batching. The same issue is and has been present in the Compatibility renderer.
### Steps to reproduce
Draw primitives with the same point count (otherwise they're not batched) with different textures in succession, and observe that the second call uses the texture of the first.
```gdscript
draw_primitive(points, colors, uvs, texture_a)
draw_primitive(points, colors, uvs, texture_b)
```
### Minimal reproduction project (MRP)
[drawcalltest.zip](https://github.com/user-attachments/files/17174396/drawcalltest.zip) | bug,confirmed,regression,topic:gui | low | Minor |
2,554,200,440 | rust | TAIT coherence checks don't ensure composability of crates | The coherence checks for trait implementations where the receiver type is an opaque type (defined with TAIT) are designed and/or implemented in a way that doesn’t uphold the principle of seamless composability of crates. It also doesn’t uphold current principles of what kind of trait implementation constitutes a breaking change.
---
Here’s a reproducing example (consisting of 3 crates):
crate A
```toml
[package]
name = "a"
edition = "2021"
```
```rs
pub trait MyFrom<T> {
fn from(value: T) -> Self;
}
pub trait AsFoo {}
pub struct Foo;
impl AsFoo for Foo {}
```
---
crate B
```toml
[package]
name = "b"
edition = "2021"
[dependencies]
a = { path = "../a" }
```
```rs
use a::{AsFoo, MyFrom};
pub struct Wrapper<T>(pub T);
impl<T> MyFrom<T> for Wrapper<T> {
fn from(value: T) -> Self {
Wrapper(value)
}
}
impl<T> AsFoo for Wrapper<T> {}
```
---
crate C
```toml
[package]
name = "c"
edition = "2021"
[dependencies]
a = { path = "../a" }
b = { path = "../b" }
```
```rs
#![feature(type_alias_impl_trait)]
use a::{AsFoo, Foo, MyFrom};
// use b; // <- uncomment for error
type Alias = impl AsFoo;
struct Local;
impl MyFrom<Local> for Alias {
fn from(_: Local) -> Alias {
Foo
}
}
```
---
The above example involving 3 crates A, B, C; A is a dependency of B and C.
C compiles successfully with just A as a dependency.
If B is added as a dependency of C (and actually used, by uncommenting the `use b;`) then the following error appears:
```
error[E0119]: conflicting implementations of trait `MyFrom<Local>` for type `Wrapper<Local>`
--> src/lib.rs:10:1
|
10 | impl MyFrom<Local> for Alias {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: conflicting implementation in crate `b`:
- impl<T> MyFrom<T> for Wrapper<T>;
```
The error also does make sense: If we further change crate C, so that the defining use of `Alias` produces not a `Foo` but a `b::Wrapper<Local>`, then the `impl`s of `MyFrom` will become _actually_ overlapping, even when the opaque type is treated transparently. As far as I can tell, it seems that the goal of the error was to be conservative and ensure that changing the actual concrete choice of type behind the opaque TAIT-type should not introduce any new overlap errors later.
As a consequence, in my opinion this means that *without* the crate B, there should probably *also* be some kind of error here.
Some more thoughts and observations:
* the problematic `impl<T> MyFrom<T> for Wrapper<T>` is a blanket `impl`. Addition of such an impl is actually considered a breaking change in other kinds of situations
* when such an `impl` is added for a type `Wrapper<T>` that *already exists* in previous versions, that’s considered technically breaking
* however, if it’s introduced *together* with the type `Wrapper<T>`, that it okay; and it’s exactly this combination that “add trait B as dependency” achieves
* the problematic `impl<T> MyFrom<T> for Wrapper<T>` produces the same error if crates A and B are combined into one. I split them up, because it makes an even stronger argument; nonetheless, common “what’s considered breaking” standards imply that crate A should of course also be allowed to simply add such a pair of struct `Wrapper<T>` + this `impl` of `MyFrom`, and this issue violates that principle, too.
* I first noticed this issue in [this code example](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=a48797abb66b0b9074d20059da1e8b97), where some trait impls involving local traits and types, as well as `From` and `AsRef<str>`, result in an error message that surprisingly mentions unrelated items such as `gimli::common::DebugFrameOffset` [presumably because it’s the first type the compiler finds that has some kind of `From<T> for Self<T>`-style implementation]. If nothing else that’s a surprising diagnostic, but unsurprisingly it just highlighted this more abstract&general issue with coherence-checks.
* I have not tried reasoning more deeply about what kind of “orphan rules” a TAIT needs to fulfill to avoid this issue completely, but I wouldn’t be surprised if there *are* some straightforward ways of giving TAITs a more restrictive version of the orphan rules, which solve this issue.
@rustbot label +F-type_alias_impl_trait +A-coherence +A-traits +T-types | A-trait-system,C-bug,F-type_alias_impl_trait,requires-nightly,T-types,A-coherence,F-impl_trait_in_assoc_type | low | Critical |
2,554,200,616 | tensorflow | Floating point exception (core dumped) in `tf.nn.depth_to_space` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-dev20240925
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Under specific inputs, `tf.nn.depth_to_space` triggered a crash.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
try:
# Create an empty tensor
arg_0_tensor = tf.zeros([0, 2, 3, 12], dtype=tf.float32)
# arg_0 = tf.identity(arg_0_tensor)
arg_1 = 536870912
out = tf.nn.depth_to_space(arg_0_tensor, arg_1)
except Exception as e:
print("Error:", str(e))
```
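The shape arithmetic behind `depth_to_space` shows where this input goes wrong. A pure-Python sketch of the expected-shape computation, mirroring the documented NHWC semantics rather than TensorFlow's internal code:

```python
def depth_to_space_output_shape(input_shape, block_size):
    """Compute the NHWC output shape of depth_to_space, with validation.

    depth_to_space moves data from the channel dimension into spatial
    blocks, so the input depth must be divisible by block_size**2.
    """
    if block_size < 2:
        raise ValueError(f"block_size must be >= 2, got {block_size}")
    n, h, w, c = input_shape
    if c % (block_size ** 2) != 0:
        raise ValueError(
            f"depth {c} is not divisible by block_size**2 = {block_size ** 2}")
    return (n, h * block_size, w * block_size, c // (block_size ** 2))
```

For the reproducer's shape `[0, 2, 3, 12]` with `block_size=536870912`, the depth 12 is not divisible by `block_size**2`, so a front-end check like this would reject the call; presumably the combination of a zero-sized batch and the huge block size instead reaches a division in native code and triggers the floating point exception.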
### Relevant log output
```shell
2024-09-28 20:41:05.888017: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-28 20:41:05.950498: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 20:41:06.028236: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 20:41:06.052072: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-28 20:41:06.111011: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-28 20:41:11.970896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 2704 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.9
2024-09-28 20:41:11.973176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1724 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9
Floating point exception (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,554,203,474 | tensorflow | Aborted (core dumped) in `tf.io.encode_png`/`tf.compat.v1.image.encode_png` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-dev20240925
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The crash is triggered when an invalid image is passed to `tf.io.encode_png`/`tf.compat.v1.image.encode_png`.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
image = tf.cast(tf.tile([[[0, 0, 0, 1]], [[0, 0, 1, 0]]], [0, 0, 1]), tf.uint8)
encoded_image = tf.compat.v1.image.encode_png(image) # crash
tf.io.encode_png(image, compression=-1, name=None) #crash
```
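
The tiled tensor above ends up with shape `[0, 0, 4]`, i.e. zero height and width. A caller-side shape check (hypothetical helper, not TensorFlow API) can catch this before the encoder aborts:

```python
def check_png_image_shape(shape):
    """Return an error string if `shape` is not encodable as PNG, else None.

    PNG encoding needs a rank-3 HWC tensor with positive height/width
    and 1, 2, 3, or 4 channels. Hypothetical caller-side guard, not
    TensorFlow API.
    """
    if len(shape) != 3:
        return "image must be rank 3 (height, width, channels)"
    height, width, channels = shape
    if height <= 0 or width <= 0:
        return "height and width must be positive"
    if channels not in (1, 2, 3, 4):
        return "channels must be 1, 2, 3 or 4"
    return None
```

For the report's input, `check_png_image_shape((0, 0, 4))` returns an error string rather than reaching the fatal `'image' Must be non NULL` check.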
### Relevant log output
```shell
2024-09-28 20:48:36.270008: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-28 20:48:36.332972: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 20:48:36.411391: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 20:48:36.428306: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-28 20:48:36.438336: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.2024-09-28 20:48:41.296886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3114 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.92024-09-28 20:48:41.297450: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1724 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9
2024-09-28 20:48:41.475588: F tensorflow/core/lib/png/png_io.cc:350] 'image' Must be non NULL
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,554,210,437 | PowerToys | Please add a CapsLock shortcut key to make CapsLock useful during coding | ### Provide a description of requested docs changes
For example: CapsLock + S = Left, CapsLock + F = Right. | Issue-Docs,Needs-Triage | low | Minor |
2,554,212,734 | tensorflow | Aborted (core dumped) in the `tf.raw_ops.ResourceScatterNd*` ops | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-dev20240925
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When the dtype of the resource handle is inconsistent with the dtype of `updates`, the `tf.raw_ops.ResourceScatterNd*` ops trigger a crash. The affected ops are:
tf.raw_ops.ResourceScatterNdUpdate
tf.raw_ops.ResourceScatterNdAdd
tf.raw_ops.ResourceScatterNdSub
tf.raw_ops.ResourceScatterNdMax
tf.raw_ops.ResourceScatterNdMin
### Standalone code to reproduce the issue
```python
import tensorflow as tf
import numpy as np
resource_var = tf.Variable(initial_value=tf.zeros([2, 2], dtype=tf.int32), trainable=False)
resource_handle = resource_var.handle
indices = np.array([[2, 1], [1, 2]], dtype=np.int32)
updates = np.array([10, 20], dtype=np.float32)
tf.raw_ops.ResourceScatterNdUpdate( # crash
ref=resource_handle,
indices=indices,
updates=updates,
use_locking=True
)
tf.raw_ops.ResourceScatterNdAdd( # crash
ref=resource_handle,
indices=indices,
updates=updates,
use_locking=True
)
tf.raw_ops.ResourceScatterNdSub( # crash
ref=resource_handle,
indices=indices,
updates=updates,
use_locking=True
)
tf.raw_ops.ResourceScatterNdMax( # crash
ref=resource_handle,
indices=indices,
updates=updates,
use_locking=True
)
tf.raw_ops.ResourceScatterNdMin( # crash
ref=resource_handle,
indices=indices,
updates=updates,
use_locking=True
)
```
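
Ideally the kernels would return an `InvalidArgument` error instead of failing a fatal `CHECK`. Until then, a caller-side dtype comparison (hypothetical helper, not TensorFlow API) catches the mismatch up front; note the repro's out-of-range indices would still need a separate bounds check:

```python
def check_scatter_update_dtypes(var_dtype, updates_dtype):
    """Return an error string if the updates dtype does not match the
    variable dtype, else None. Hypothetical caller-side guard mirroring
    the check the kernel should perform gracefully."""
    if var_dtype != updates_dtype:
        return "updates dtype %r does not match variable dtype %r" % (
            updates_dtype, var_dtype)
    return None
```

For the repro above (an `int32` variable scattered with `float32` updates), the helper reports the mismatch instead of letting the process abort.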
### Relevant log output
```shell
2024-09-28 21:06:23.445185: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-28 21:06:23.508056: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 21:06:23.583640: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 21:06:23.607538: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-28 21:06:23.664877: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-28 21:06:31.527466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3114 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.9
2024-09-28 21:06:31.527985: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1724 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9
2024-09-28 21:06:31.782114: F tensorflow/core/framework/tensor.cc:844] Check failed: dtype() == expected_dtype (3 vs. 1) float expected, got int32
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,554,213,124 | next.js | Template strings are incorrectly converted during building, leading to JSON.parse errors at production runtime. | ### Link to the code that reproduces this issue
https://github.com/Innei/next-bundle-regexp-repro
### To Reproduce
1. clone my repro
2. run build
3. next start and see prod page

### Current vs. Expected behavior
The structure of the build output differs from the third party's original file; specifically, the third party's template string has been converted incorrectly, resulting in a JSON.parse error.
1. The third party original code:
(It's too long; I've cut off part of it.)

Running the original code works:

2. In the Next.js compiled output, the template string is converted to a normal string, which throws an error.

Copying the JSON.parse part of the code and running it in the console also produces an error.

The error reported at actual runtime is the same.

I expected Next.js not to convert template strings. Not only is the conversion incorrect here, it also doesn't respect browserslist, even though I've set the browserslist target to the last 1 Chrome version.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Mon Aug 12 20:52:12 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6020
Available memory (MB): 32768
Available CPU cores: 12
Binaries:
Node: 22.3.0
npm: 10.2.4
Yarn: 1.22.21
pnpm: 9.11.0
Relevant Packages:
next: 14.2.13 // Latest available version is detected (14.2.13).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
SWC
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | bug,SWC | low | Critical |
2,554,215,305 | tensorflow | Aborted (core dumped) in `tf.linalg.det/slogdet/logdet/cholesky/inv` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-dev20240925
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
`tf.linalg.det`/`slogdet`/`logdet`/`cholesky`/`inv` trigger a crash when the input is a rank-0 (scalar) tensor such as `tf.zeros([])`. Note that this is only triggered when a GPU is available.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
invalid_input = tf.zeros([])
tf.linalg.det(invalid_input) # crash
tf.linalg.slogdet(invalid_input) # crash
tf.linalg.cholesky(invalid_input) # crash
tf.linalg.logdet(invalid_input) # crash
tf.linalg.inv(invalid_input) # crash
```
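
`tf.zeros([])` is a rank-0 scalar, while these ops expect a (batch of) square matrices. A shape pre-check along these lines (hypothetical helper, not TensorFlow API) rejects the input on any device instead of crashing on GPU:

```python
def check_square_matrix_shape(shape):
    """Return an error string unless `shape` describes a (batch of)
    square matrices, i.e. rank >= 2 with equal trailing dimensions.
    Hypothetical caller-side guard, not TensorFlow API."""
    if len(shape) < 2:
        return "input must have rank >= 2, got rank %d" % len(shape)
    if shape[-1] != shape[-2]:
        return "trailing dimensions must be equal (square matrices)"
    return None
```

For the repro, `check_square_matrix_shape(())` returns an error because the rank is 0.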
### Relevant log output
```shell
2024-09-28 21:11:10.188752: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-28 21:11:10.199880: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 21:11:10.213635: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 21:11:10.221654: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-28 21:11:10.279720: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-28 21:11:17.015480: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3114 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.9
2024-09-28 21:11:17.015957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1724 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9
2024-09-28 21:11:17.154391: F tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1)
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,554,218,867 | tensorflow | Segmentation fault (core dumped) in `tf.data.experimental.SqlDataset` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-dev20240925
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
3.9.13
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
An invalid input to `tf.data.experimental.SqlDataset` triggers a crash, which only occurs once the dataset is iterated.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
data_source_name = "sqlite:///path/to/correct_database.db"
query = "SELECT id, name FROM my_table"
output_types = (tf.int64, tf.string)
dataset = tf.data.experimental.SqlDataset(
'sqlite', data_source_name, query, output_types)
for element in dataset:
print(element)
```
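
Per the log below, the entire string `sqlite:///path/to/correct_database.db` is handed to `Sqlite::Open` as a filename, so the connection fails and the subsequent iteration segfaults. A pre-flight check with the standard-library `sqlite3` module (hypothetical helper, not TensorFlow API) can verify the source before the dataset is constructed:

```python
import os
import sqlite3

def check_sqlite_source(data_source_name):
    """Return an error string if `data_source_name` is not an openable
    SQLite database file, else None. Hypothetical pre-flight check,
    not TensorFlow API.
    """
    if data_source_name.startswith("sqlite://"):
        return "pass a plain filesystem path, not a sqlite:// URL"
    if not os.path.exists(data_source_name):
        return "database file does not exist: %r" % data_source_name
    try:
        # mode=ro fails instead of silently creating an empty database.
        conn = sqlite3.connect("file:%s?mode=ro" % data_source_name,
                               uri=True)
        conn.execute("SELECT 1")
        conn.close()
    except sqlite3.Error as exc:
        return "cannot open database: %s" % exc
    return None
```

Running this on the repro's `data_source_name` reports the bad URL form up front rather than crashing during iteration.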
### Relevant log output
```shell
2024-09-28 21:18:33.844482: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-28 21:18:33.907260: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-28 21:18:33.986019: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-28 21:18:34.009755: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-28 21:18:34.068897: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-28 21:18:38.768599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3114 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.9
2024-09-28 21:18:38.769172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1724 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9
2024-09-28 21:18:39.132534: W tensorflow/core/kernels/data/experimental/sql_dataset_op.cc:209] Failed to connect to database: INVALID_ARGUMENT: Sqlite::Open(sqlite:///path/to/correct_database.db) failed: unable to open database file
Segmentation fault (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,554,263,824 | angular | Add an alternative NoInferFormBuilder | ### Which @angular/* package(s) are relevant/related to the feature request?
forms
### Description
Currently, FormBuilder supports multiple form-creation options; because of that, combined with TypeScript inference issues, some patterns are really problematic to use. For example:
```ts
// Infers string - fails
formBuilder.group<{a: FormGroup<{a: FormControl<'a' | 'b'>}>}>({a: formBuilder.group({a: formBuilder.control('a')})});
// Fails to infer array type
formBuilder.group<{a: FormArray<FormControl<boolean>>}>({a: formBuilder.array([])});
// Incorrect auto complete here (in ...) - k has unknown type
formBuilder.group<{g: FormGroup<{k: FormGroup<{a: FormControl<number>}>}>}>({g: formBuilder.group({...})});
```
### Proposed solution
Create a new form builder variation which is oriented to cases when you provide the form type yourself. The new form builder will use `NoInfer` for parameter so the inference will be strictly from the return type which will solve the problems above.
An example implementation that will solve all the issues above:
```ts
export class NoInferFormBuilder {
control<T>(
formState: NoInfer<T | FormControlState<T>>,
validatorOrOpts?:
| ValidatorFn
| ValidatorFn[]
| AbstractControlOptions
| null,
asyncValidator?: AsyncValidatorFn | AsyncValidatorFn[] | null
) {
return new FormControl<T>(formState, validatorOrOpts, asyncValidator);
}
group<
T extends {
[K in keyof T]: AbstractControl<any>;
}
>(
controls: NoInfer<T>,
validatorOrOpts?:
| ValidatorFn
| ValidatorFn[]
| AbstractControlOptions
| null,
asyncValidator?: AsyncValidatorFn | AsyncValidatorFn[] | null
) {
return new FormGroup<T>(controls, validatorOrOpts, asyncValidator);
}
array<T extends AbstractControl<any>>(
controls: NoInfer<T[]>,
validatorOrOpts?:
| ValidatorFn
| ValidatorFn[]
| AbstractControlOptions
| null,
asyncValidator?: AsyncValidatorFn | AsyncValidatorFn[] | null
) {
return new FormArray<T>(controls, validatorOrOpts, asyncValidator);
}
}
```
### Alternatives considered
Get around those problems which can be annoying and repetitive or use form classes which helps to some extent or use untyped forms. | area: forms | low | Minor |
2,554,266,794 | flutter | Unwanted animation appears when only providing PageStorageKey for one of two ListView | ### Steps to reproduce
Run the attached code sample
### Expected results
Chips appear without animation, as they do when there's no PageStorageKey specified
### Actual results
Chips appear with a strange unwanted slide animation
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter issue',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const HomePageView(),
);
}
}
class HomePageView extends StatefulWidget {
const HomePageView({super.key});
@override
State<HomePageView> createState() => _HomePageViewState();
}
class _HomePageViewState extends State<HomePageView> {
int _currentPageIndex = 0;
late final _pages = [
const Page1View(),
const Page2View(),
const Page3View(),
const Page4View(),
];
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Page storage')),
body: _buildPage(),
bottomNavigationBar: _buildBottomNavigationBar(),
);
}
Widget _buildPage() {
return PageStorage(
bucket: PageStorageBucket(),
child: _pages.elementAt(_currentPageIndex),
);
}
Widget _buildBottomNavigationBar() {
return NavigationBar(
destinations: const [
NavigationDestination(
icon: Icon(Icons.dashboard),
label: 'Dashboard',
),
NavigationDestination(
icon: Icon(Icons.keyboard),
label: 'Write',
),
NavigationDestination(
icon: Icon(Icons.format_list_numbered),
label: 'Organize',
),
NavigationDestination(
icon: Icon(Icons.timer),
label: 'Schedule',
),
],
selectedIndex: _currentPageIndex,
onDestinationSelected: (index) async {
setState(() {
_currentPageIndex = index;
});
},
);
}
}
class Page1View extends StatefulWidget {
const Page1View({super.key});
@override
State<Page1View> createState() => _Page1ViewState();
}
class _Page1ViewState extends State<Page1View> {
@override
Widget build(BuildContext context) {
return _buildList();
}
Widget _buildList() {
return ListView(
key: const PageStorageKey('page1'),
children: List.generate(40, (index) {
return Card(
child: Padding(
padding: const EdgeInsets.all(16),
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
SizedBox(
height: 40,
child: ListView(
scrollDirection: Axis.horizontal,
children: List.generate(10, (index) {
return Chip(
label: Text(
'Chip $index',
style: const TextStyle(color: Colors.green),
),
shape: const StadiumBorder(
side: BorderSide(color: Colors.green),
),
);
}),
),
),
const SizedBox(height: 10),
ListTile(
title: Text('Item $index'),
),
],
),
),
);
}),
);
}
}
class Page2View extends StatefulWidget {
const Page2View({super.key});
@override
State<Page2View> createState() => _Page2ViewState();
}
class _Page2ViewState extends State<Page2View> {
@override
Widget build(BuildContext context) {
return _buildList();
}
Widget _buildList() {
return ListView(
children: List.generate(40, (index) {
return Card(
child: Padding(
padding: const EdgeInsets.all(16),
child: Text('Item $index'),
),
);
}),
);
}
}
class Page3View extends StatefulWidget {
const Page3View({super.key});
@override
State<Page3View> createState() => _Page3ViewState();
}
class _Page3ViewState extends State<Page3View> {
@override
Widget build(BuildContext context) {
return _buildList();
}
Widget _buildList() {
return ListView(
children: List.generate(40, (index) {
return Card(
child: Padding(
padding: const EdgeInsets.all(16),
child: Text('Item $index'),
),
);
}),
);
}
}
class Page4View extends StatefulWidget {
const Page4View({super.key});
@override
State<Page4View> createState() => _Page4ViewState();
}
class _Page4ViewState extends State<Page4View> {
@override
Widget build(BuildContext context) {
return _buildList();
}
Widget _buildList() {
return ListView(
children: List.generate(40, (index) {
return Card(
child: Padding(
padding: const EdgeInsets.all(16),
child: Text('Item $index'),
),
);
}),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/9b7e91fb-e704-42b9-8365-ea923d6925ad
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Versione 10.0.19045.4894], locale it-IT)
• Flutter version 3.24.3 on channel stable at C:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (2 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\Alessandro\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[!] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.4)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35312.102
X Visual Studio is missing necessary components. Please re-run the Visual Studio installer for the "Desktop development with C++" workload, and include these components:
MSVC v142 - VS 2019 C++ x64/x86 build tools
- If there are multiple build tool versions available, install the latest
C++ CMake tools for Windows
Windows 10 SDK
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = C:\Program Files\Android\Android Studio
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] VS Code (version 1.71.2)
• VS Code at C:\Users\Alessandro\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.48.0
[√] Connected device (4 available)
• Pixel Fold (mobile) • 35181FDHS00206 • android-arm64 • Android 14 (API 34)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Versione 10.0.19045.4894]
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.70
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,f: scrolling,has reproducible steps,P2,workaround available,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Major |
2,554,306,953 | rust | Lint on FileCheck directives with missing colon | In current compiletest -> FileCheck setup (e.g. codegen/assembly tests), if you forgor a colon `:` in one of the FileCheck directives, it gets silently ignored by both compilest and FileCheck, meaning that you aren't testing anything.
```rs
// tests/assembly/selftest.rs
//@ assembly-output: emit-asm
#![crate_type = "lib"]
// CHECK-LABEL: foo:
//-- CHECK-LABEL: bar: # <- make sure this test is actually testing
// CHECK-NOT ret # <- notice the missing `:`?
#[no_mangle]
pub fn foo(x: u8) {}
```
This test currently will just silently pass, even though we wanted to check `ret` doesn't exist (made-up example for illustration purposes). Realized while reviewing https://github.com/rust-lang/rust/pull/128018.
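A rough sketch of what such a lint could look for (illustrative only, not the actual compiletest implementation): scan test sources for comments that look like FileCheck directives but are missing the `:` separator.

```python
import re

# Directive-shaped comments; one not followed by ':' is almost
# certainly a typo that FileCheck will silently ignore.
_DIRECTIVE = re.compile(
    r"//\s*(CHECK(?:-NOT|-NEXT|-SAME|-DAG|-LABEL|-EMPTY|-COUNT-\d+)?)"
    r"(?![A-Za-z])(?P<sep>:?)")

def find_missing_colons(source):
    """Yield (line_number, directive) for directives missing ':'."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = _DIRECTIVE.search(line)
        if m and m.group("sep") != ":":
            yield lineno, m.group(1)
```

On the snippet above, this flags only the `// CHECK-NOT ret` line; the commented-out `//--` directive and the well-formed `CHECK-LABEL:` lines pass.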
A real-world example of a missing colon: https://github.com/rust-lang/rust/pull/125626.
We should probably aim to lint on cases like this. | A-testsuite,C-enhancement,T-bootstrap,A-compiletest | low | Minor |
2,554,321,314 | godot | GPUParticles3D linear acceleration doesn't respect velocity direction and causes particles to jitter. | ### Tested versions
4.3
### System information
Windows 11 - Vulkan - Nvidia RTX 4070 - intel i5 13600KF
### Issue description
Firstly, I don't know if this is a wrong behavior or not so I'm sorry in case I misunderstand how this should work.
1. If you don't set the initial velocity and only use a (!) positive (!) linear acceleration, the particles always accelerate downward no matter what velocity direction is set. Since the acceleration doesn't have a direction of its own, maybe it should use the velocity direction? (red particles in the MRP)
2. If you don't set the initial velocity and only use a (!) negative (!) linear acceleration, the particles jitter (yellow particles in the MRP), and some occasionally move in a different direction too; in the MRP, some of the yellow particles sometimes go down instead of up like the rest. The jitter gets even worse if you set the velocity to a non-zero value such as 1 for both min and max.
**EDIT: Actually, even if the velocity is set to a positive non-zero value, a negative linear acceleration causes the particles to spasm, as shown in the video.**
Note: Green particles in the MRP have a non-zero velocity (positive) to show how it normally works.
You can see everything in the video below:
https://github.com/user-attachments/assets/4d700e31-82a9-4b60-aeed-000c54efa02d
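For point 1, the behavior being suggested (accelerating along the particle's current velocity direction rather than along a fixed world axis) can be sketched as follows; this is illustrative pseudocode, not Godot's actual particle shader code:

```python
import math

def accelerate_along_velocity(velocity, accel, dt):
    """Apply linear acceleration along the particle's current velocity
    direction instead of a fixed world axis. Illustrative pseudocode,
    not Godot's actual particle shader."""
    speed = math.sqrt(sum(c * c for c in velocity))
    if speed == 0.0:
        return list(velocity)  # no direction to accelerate along
    direction = [c / speed for c in velocity]
    return [v + d * accel * dt for v, d in zip(velocity, direction)]
```

Note that with a negative `accel`, a scheme like this can flip the velocity's sign between frames once the speed gets small, which is one plausible mechanism for the jitter described in point 2.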
### Steps to reproduce
Use GPUParticles3D, set gravity and animated velocity to 0, set linear acceleration to a positive value, set the velocity direction to various directions, and observe that nothing changes.
Do the above but this time set the linear acceleration to a negative value and observe the particles' jittered movement.
### Minimal reproduction project (MRP)
[LinearAccelerationTest.zip](https://github.com/user-attachments/files/17174823/LinearAccelerationTest.zip)
| bug,topic:particles | low | Minor |
2,554,329,987 | go | proposal: runtime/pprof: add data-type profiling | ### Proposal Details
## Proposal Details
With field reordering and padding analysis, static tooling can improve the memory layout of Go structs. This leads to more efficient access to struct fields, as the fields within the struct are better aligned. Combined with dead-code analysis, unused struct fields can also be identified, helping to reduce struct size.
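To illustrate why reordering pays off, a simplified alignment model (each field naturally aligned to its own size; not Go's exact layout rules) shows how declaration order changes struct size:

```python
def struct_size(field_sizes):
    """Total size of a struct with the given field sizes, assuming each
    field is aligned to its own size and the struct is padded to its
    largest alignment. Simplified model, not Go's exact rules."""
    offset, max_align = 0, 1
    for size in field_sizes:
        max_align = max(max_align, size)
        offset = (offset + size - 1) // size * size  # pad to alignment
        offset += size
    return (offset + max_align - 1) // max_align * max_align

# bool, int64, bool, int64 declared in that order wastes padding:
print(struct_size([1, 8, 1, 8]))  # -> 32
# sorted largest-first, the same fields pack tighter:
print(struct_size([8, 8, 1, 1]))  # -> 24
```

Field reordering recovers the 8 bytes of padding here; a data-type profile would tell you which hot structs are worth that churn.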
This proposal tries to introduce the ideas from [Data-type profiling for perf](https://lwn.net/Articles/955709/) into Go's pprof ecosystem to provide a Go-native approach. Today this is already possible with [perf](https://www.man7.org/linux/man-pages/man1/perf.1.html) on Unix systems: you can do data-type profiling, reorder structs accordingly, and benefit from the performance improvements.
Introduce a new [runtime/pprof Profile](https://pkg.go.dev/runtime/pprof#Profile) that tracks the number of read/write accesses of fields within a Go struct.
The report of this new [runtime/pprof Profile](https://pkg.go.dev/runtime/pprof#Profile) should enable users to identify frequently used fields within a struct, in order to reorder struct fields and improve the memory efficiency of their application.
Example report for a Go struct, generated by the approach described in [Data-type profiling for perf](https://lwn.net/Articles/955709/):
```
Annotate type: 'struct runtime.mspan' (654 samples)
Percent Offset Size Field
100.00 0 160 struct runtime.mspan {
0.00 0 0 internal/runtime/sys.NotInHeap _ {
0.00 0 0 internal/runtime/sys.nih _;
};
1.05 0 8 runtime.mspan* next;
0.00 8 8 runtime.mspan* prev;
0.23 16 8 runtime.mSpanList* list;
41.18 24 8 uintptr startAddr;
2.30 32 8 uintptr npages;
0.19 40 8 runtime.gclinkptr manualFreeList;
1.74 48 2 uint16 freeindex;
1.57 50 2 uint16 nelems;
0.23 52 2 uint16 freeIndexForScan;
1.82 56 8 uint64 allocCache;
1.56 64 8 runtime.gcBits* allocBits;
5.51 72 8 runtime.gcBits* gcmarkBits;
0.42 80 8 runtime.gcBits* pinnerBits;
1.54 88 4 uint32 sweepgen;
4.58 92 4 uint32 divMul;
2.70 96 2 uint16 allocCount;
12.49 98 1 runtime.spanClass spanclass;
0.00 99 1 runtime.mSpanStateBox state {
0.00 99 1 internal/runtime/atomic.Uint8 s {
0.00 99 0 internal/runtime/atomic.noCopy noCopy;
0.00 99 1 uint8 value;
};
};
1.69 100 1 uint8 needzero;
0.11 101 1 bool isUserArenaChunk;
0.23 102 2 uint16 allocCountBeforeCache;
18.64 104 8 uintptr elemsize;
0.00 112 8 uintptr limit;
0.00 120 8 runtime.mutex speciallock {
0.00 120 0 runtime.lockRankStruct lockRankStruct;
0.00 120 8 uintptr key;
};
0.22 128 8 runtime.special* specials;
0.00 136 16 runtime.addrRange userArenaChunkFree {
0.00 136 8 runtime.offAddr base {
0.00 136 8 uintptr a;
};
0.00 144 8 runtime.offAddr limit {
0.00 144 8 uintptr a;
};
};
0.00 152 8 internal/abi.Type* largeType;
};
```
The example above reports the field accesses of the internal Go struct [mspan](https://github.com/golang/go/blob/eb6f2c24cd17c0ca1df7e343f8d9187eef7d6e13/src/runtime/mheap.go#L395) while running the benchmarks in [net/http](https://pkg.go.dev/net/http) with `go version devel go1.24-eb6f2c24cd Sat Sep 28 01:07:09 2024 +0000 linux/amd64`.
## Alternative
Instead of introducing a new [runtime/pprof Profile](https://pkg.go.dev/runtime/pprof#Profile), an approach similar to [go build -cover](https://go.dev/doc/build-cover) could be used: access to struct fields could be instrumented at build time, and a report generated when the resulting Go binary is executed. The resulting report could then be used by `go tool cover` to show how many times each struct field was accessed.
## Question
I'm lacking Go runtime internal knowledge to provide a proof of concept with this proposal.
- Should runtime internal Go structs be exposed as well with data type profiling?
- Should the profiling of Go structs differentiate between publicly exposed fields and non-public internal fields?
- Is it possible and safe to turn on/off data-type profiling during runtime?
- Should the profile collect samples of field accesses, similar to the [perf approach](https://lwn.net/Articles/955709/), or count and report exact numbers? | Proposal | low | Major |
2,554,331,366 | node | Streams: finished Change in behavior between 22.2.0 and 22.9.0. Throws Exception (Unhandled Rejection) | ### Version
22.9.0 (Docker Latest)
### Platform
```text
Linux (Docker Latest)
root@0cd652c8fcb1:/# uname -a
Linux 0cd652c8fcb1 5.15.0-84-generic #93-Ubuntu SMP Tue Sep 5 17:16:10 UTC 2023 x86_64 GNU/Linux
```
### Subsystem
STREAMS/PROMISES FINISHED
### What steps will reproduce the bug?
Run the following code on 22.2 and 22.9 and note changed behavior - Unhandled Rejection
```
const { PassThrough } = require('stream');
const { pipeline, finished, } = require('stream/promises');
const fs = require('fs');
class MyTransform extends PassThrough {
constructor(is) {
super()
this.is = is
this.counter = 0
}
async _transform(data,enc,callback) {
this.counter++
this.push(data)
if (this.counter > 100) {
this.is.close()
}
callback()
}
}
async function runPipeline() {
const is = fs.createReadStream('input.txt')
is.on('error',(err) => {
console.log(is.constructor.name,err)
})
const t = new MyTransform(is)
t.on('error',(err) => {
console.log(t.constructor.name,err)
})
const os = fs.createWriteStream('output.txt')
os.on('error',(err) => {
console.log(os.constructor.name,err)
})
const streams = [is,t,os]
const activeStreams = streams.map((s) => {
return finished(s)
})
console.log(activeStreams)
try {
await pipeline(...streams);
console.log(t.counter)
} catch (err) {
console.log(1)
console.log(activeStreams)
await Promise.allSettled(activeStreams)
console.log(2)
console.log(activeStreams)
console.error('Pipeline error:', err);
}
}
process.on('unhandledRejection', (e,p) => {
console.log("Unhandled",e,p)
})
runPipeline().then(() => {console.log('success')}).catch((e) => {console.log(e)})
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
The 22.2.0 behavior appears to be correct to me
```
C:\Development\YADAMU\src\scratch\streams>docker cp input.txt NODE-22-2:/tmp
C:\Development\YADAMU\src\scratch\streams>docker cp test1.js NODE-22-2:/tmp
C:\Development\YADAMU\src\scratch\streams>docker exec -it NODE-22-2 bash
root@c873fb41f508:/# cd tmp
root@c873fb41f508:/tmp# node -v
v22.2.0
root@c873fb41f508:/tmp# node test1.js
[ Promise { <pending> }, Promise { <pending> }, Promise { <pending> } ]
MyTransform Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
1
[
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise { <pending> }
]
WriteStream Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
2
[
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
}
]
Pipeline error: Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at MyTransform.<anonymous> (node:internal/streams/pipeline:417:14)
at MyTransform.emit (node:events:532:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at emitErrorCloseNT (node:internal/streams/destroy:130:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
success
root@c873fb41f508:/tmp#
root@c873fb41f508:/tmp#
root@c873fb41f508:/tmp#
exit
```
### What do you see instead?
```
root@0cd652c8fcb1:/tmp# node test1.js
[ Promise { <pending> }, Promise { <pending> }, Promise { <pending> } ]
MyTransform Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
Unhandled Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
} Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
}
Unhandled Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
} Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
}
WriteStream Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
1
[
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
}
]
2
[
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
},
Promise {
<rejected> Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
}
]
Pipeline error: Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at ReadStream.onclose (node:internal/streams/end-of-stream:153:30)
at ReadStream.emit (node:events:531:35)
at emitCloseNT (node:internal/streams/destroy:148:10)
at process.processTicksAndRejections (node:internal/process/task_queues:89:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
success
(node:13) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:13) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 2)
root@0cd652c8fcb1:/tmp#
exit
```
### Additional information
_No response_ | stream | low | Critical |
2,554,409,972 | rust | Many Arm microcontroller target feature flags are missing | @RalfJung asked me to open an issue listing all the useful Arm microcontroller feature flags that are currently entirely unsupported by Rust, not even unstably.
These are all listed at https://doc.rust-lang.org/nightly/rustc/platform-support/arm-none-eabi.html and the sub-pages, as modified by https://github.com/rust-lang/rust/pull/130987.
Selecting a CPU with `-C target-cpu=xxx` causes LLVM to enable *all* the optional features of that type of CPU. However, sometimes CPUs are sold without certain features (e.g. you can get a Cortex-M4 either with or without an FPU). So, we use these `-C target-feature=...` flags to *disable* some of the things that LLVM over-enthusiastically enabled for us when we selected a target-cpu. If you don't select a target-cpu, you don't need these flags because by default, only the architecture's baseline features are enabled and you never want to turn those off.
* [ ] `-fpregs` - don't emit FPU instructions
* [ ] `-fp64` - don't emit double precision FPU instructions
* [ ] `-mve` - don't emit Float or Integer M-Profile Vector Extension instructions
* [ ] `-mve.fp` - don't emit Float M-Profile Vector Extension instructions
* [ ] `-dsp` - don't emit DSP instructions
* [ ] `+mve` - do emit Integer M-Profile Vector Extension instructions (used with `-fpregs` because MVE uses registers shared with the FPU and those registers are present if you have Integer MVE but no FPU)
These are alongside the following target CPUs (`-C target-cpu=...`):
* `cortex-m0`
* `cortex-m0plus`
* `cortex-m3`
* `cortex-m4`
* `cortex-m7`
* `cortex-m33`
* `cortex-m35p`
* `cortex-m55`
* `cortex-m85`
If you don't want to use `-C target-cpu...` then the following additional flags can *enable* certain features. However, these aren't currently mentioned in the documentation (except `+mve` because it's listed above).
* `+mve` - M-Profile Vector Extensions (integer)
* `+mve.fp` - M-Profile Vector Extensions (floating point)
* `+dsp` - DSP extensions
* `+fp-armv8d16sp` - single precision FPU for Armv8-M
* `+fp-armv8d16` - double precision FPU for Armv8-M
* `+vfp4d16sp` - single precision FPU for Armv7E-M
* `+vfp4d16` - double precision FPU for Armv7E-M
| T-compiler,A-ABI,A-target-feature | medium | Major |
2,554,460,364 | go | x/vulndb/internal/symbols: TestPatchedSymbols failures | ```
#!watchflakes
default <- pkg == "golang.org/x/vulndb/internal/symbols" && test == "TestPatchedSymbols"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735794537960806193)):
=== RUN TestPatchedSymbols
patched_functions_test.go:42: lstat testdata/module: no such file or directory
patched_functions_test.go:46: lstat testdata/fixed-module: no such file or directory
patched_functions_test.go:54: (-got, want+):
map[symbols.symKey]bool{
+ {pkg: "golang.org/module", symbol: "Foo"}: true,
+ {pkg: "golang.org/module/internal", symbol: "Bar"}: true,
}
patched_functions_test.go:42: lstat testdata/module: no such file or directory
patched_functions_test.go:46: lstat testdata/fixed-module: no such file or directory
patched_functions_test.go:54: (-got, want+):
map[symbols.symKey]bool{
+ {pkg: "golang.org/nestedmodule", file: "main_linux.go", symbol: "main"}: true,
}
--- FAIL: TestPatchedSymbols (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,vulncheck or vulndb | low | Critical |
2,554,466,568 | godot | Empty Resource metadata property disappears from inspector when reopening the scene | ### Tested versions
- Tested and confirmed on: v4.0.4, v4.2.2, v4.3 and v4.4-dev2.
- v3.6 did not have metadata in the inspector to test.
### System information
Windows 10 - Godot v4.3.0-stable
### Issue description
After creating a metadata property of type Resource and reopening the scene, the editor inspector does not show the property, even though it still exists in the scene's tscn file.
The property should still be visible, even if there is no value set.
### Steps to reproduce
1) Create a scene and add a metadata property of type Resource.
2) Do not add any value to it and save the scene.
3) Close and reopen the scene.
### Minimal reproduction project (MRP)
[Metadata Test.zip](https://github.com/user-attachments/files/17175944/Metadata.Test.zip)
It may not show, but the scene already has a metadata of type resource. | bug,topic:editor,needs testing | low | Minor |
2,554,474,758 | PowerToys | Image Resizer processes (shrinks) the same image multiple times (for a subset of images) | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Image Resizer
### Steps to reproduce
[PowerToysReport_2024-09-28-21-46-57.zip](https://github.com/user-attachments/files/17176083/PowerToysReport_2024-09-28-21-46-57.zip)
Prepare a bunch of images. I often work with sets of 50-150, so the issue might only occur on larger batches but not if you resize only 10 images at once.
I could not link the behaviour to anything. Same source (mobile phone), same resolution, no specific order e.g. by filename that would link to stuff like "every nth process".
I do overwrite in place.
Env: W11, 7950X3D, PCIe4 SSD (adding in case it's performance related).
### ✔️ Expected Behavior
Images are only processed once.
### ❌ Actual Behavior
Some (seemingly random) images are processed multiple times.
[edit1]
As in
Source 3472x4624px (WxH)
becomes 1080x1438px (WxH)
becomes 640x852px (WxH)
Wait a minute... was just editing to add this detail and calc the ratio (e.g. ratio applied double?).
Isn't 852 one of the default values for Small...?
https://learn.microsoft.com/en-us/windows/powertoys/image-resizer#settings
Ok it's 854x480. So not only different (yet close!) but also the wrong value (width instead of height).
I use "Medium" 1920x1080 in any case :)
[/edit 1]
[edit 2]
Just noticed something else. Width and Height are flipped anyway!?

Then 852/854 is suddenly becoming very interesting again, no? :)
[/edit2]
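A quick arithmetic check supports the flipped width/height observation above. This just assumes a standard aspect-preserving "fit inside the box" scale with rounding, not Image Resizer's actual code:

```python
def fit(w, h, box_w, box_h):
    """Scale (w, h) to fit inside (box_w, box_h), preserving aspect ratio."""
    scale = min(box_w / w, box_h / h)
    return round(w * scale), round(h * scale)

# Source image: 3472x4624 (WxH). "Medium" preset: 1920x1080 (WxH).
# Fitting into the preset as documented does NOT reproduce the first result:
print(fit(3472, 4624, 1920, 1080))  # (811, 1080)
# Fitting into the preset with width and height swapped (1080x1920) does:
print(fit(3472, 4624, 1080, 1920))  # (1080, 1438)
```

The swapped-box fit matches the reported intermediate size of 1080x1438 exactly, so the swapped dimensions and the repeated processing may be related.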




### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Major |
2,554,491,072 | pytorch | Python implementation of RNN in docstring is broken | ### 📚 The doc issue
`RNN`'s docstring provides a Python implementation of a forward pass. This was added in https://github.com/pytorch/pytorch/pull/119150 (resolving https://github.com/pytorch/pytorch/issues/109443) and is helpful since the actual forward pass is implemented with ATen primitives, via _VF virtual function calls.
However, the implementation has two issues:
1. It cannot be directly used, in the sense of:
```
class PyRNN(nn.RNN):
# paste in forward pass from documentation
...
pyrnn = PyRNN(10, 20, 2) # breaks!
```
This breaks primarily because the forward implementation in the documentation is missing `self` references.
2. The implementation breaks for stacked RNNs (i.e. `num_layers >= 2`). The culprit is the line `x[t] @ weight_ih[layer].T`, which is correct for the first layer of the RNN, but incorrect for subsequent layers. In those subsequent layers, the LHS of the multiplication should be the previous hidden state to match the dimension of weight_ih[layer].T.
The documentation should either be "downgraded" to looser pseudocode or "upgraded" to a working Python implementation. I think the latter is the right choice, given the tutorial value; plus, will be a lightweight change.
### Suggest a potential alternative/fix
#1 can be fixed by adding the requisite `self`'s. Here's a quick working implementation that adds the requisite `self`'s.
```
def forward(self, x, h_0=None):
if self.batch_first:
x = x.transpose(0, 1)
seq_len, batch_size, _ = x.size()
if h_0 is None:
h_0 = torch.zeros(self.num_layers, batch_size, self.hidden_size)
h_t_minus_1 = h_0
h_t = h_0
output = []
for t in range(seq_len):
for layer in range(self.num_layers):
h_t[layer] = torch.tanh(
x[t] @ eval(f"self.weight_ih_l{layer}").T
+ eval(f"self.bias_ih_l{layer}")
+ h_t_minus_1[layer] @ eval(f"self.weight_hh_l{layer}").T
+ eval(f"self.bias_hh_l{layer}")
)
output.append(h_t[-1])
h_t_minus_1 = h_t
output = torch.stack(output)
if self.batch_first:
output = output.transpose(0, 1)
return output, h_t
```
(The `eval`'s are a hack though, will fix...)
#2 needs a closer look at the code inside the `for` loop.
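For what it's worth, here is one possible sketch that fixes both points at once — `getattr` instead of `eval` for #1, and feeding each stacked layer the output of the layer below at the current timestep for #2. It assumes the default configuration (tanh nonlinearity, `bias=True`, unidirectional, no dropout) and batched 3-D input:

```python
import torch
import torch.nn as nn


class PyRNN(nn.RNN):
    """nn.RNN with the forward pass reimplemented in pure Python.

    Sketch only: assumes nonlinearity='tanh', bias=True,
    bidirectional=False, dropout=0, and batched 3-D input.
    """

    def forward(self, x, h_0=None):
        if self.batch_first:
            x = x.transpose(0, 1)
        seq_len, batch_size, _ = x.size()
        if h_0 is None:
            h_0 = torch.zeros(self.num_layers, batch_size, self.hidden_size,
                              dtype=x.dtype, device=x.device)
        h_t_minus_1 = h_0.clone()
        h_t = h_0.clone()
        output = []
        for t in range(seq_len):
            for layer in range(self.num_layers):
                # Layer 0 reads the input; deeper layers read the output of
                # the layer below at the *current* timestep (fixes issue #2).
                input_t = x[t] if layer == 0 else h_t[layer - 1]
                h_t[layer] = torch.tanh(
                    input_t @ getattr(self, f"weight_ih_l{layer}").T
                    + getattr(self, f"bias_ih_l{layer}")
                    + h_t_minus_1[layer] @ getattr(self, f"weight_hh_l{layer}").T
                    + getattr(self, f"bias_hh_l{layer}")
                )
            output.append(h_t[-1].clone())
            h_t_minus_1 = h_t.clone()
        output = torch.stack(output)
        if self.batch_first:
            output = output.transpose(0, 1)
        return output, h_t
```

A cheap sanity check is to compare against the built-in forward on the same module, e.g. `nn.RNN.forward(m, x)` for a 2-layer `m`, which exercises the stacked case.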
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,554,492,719 | rust | Tracking Issue for File lock API |
Feature gate: `#![feature(file_lock)]`
This is a tracking issue for https://github.com/rust-lang/libs-team/issues/412
This feature exposes advisory file locks on `File`. They allow a file handle to acquire an exclusive or shared file lock, which blocks other file handles to the same file from acquiring a conflicting lock. Some semantics are platform dependent, and these are documented in the API documentation.
### Public API
```rust
impl File {
fn lock(&self) -> io::Result<()>;
fn lock_shared(&self) -> io::Result<()>;
fn try_lock(&self) -> io::Result<bool>;
fn try_lock_shared(&self) -> io::Result<bool>;
fn unlock(&self) -> io::Result<()>;
}
```
### Steps / History
- [x] Implementation: #130999
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,proposed-final-comment-period,C-tracking-issue,disposition-merge | low | Major |
2,554,493,359 | rust | proc-macro span syntax context and hygiene is underspecified and underdocumented | zulip thread: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/proc-macro.20span.20syntax.20context.20docs.20and.20best.20practices
Are there any clear docs or guidance about proc-macro span syntax context and hygiene? I'm asking because I've noticed that rustc lints and diagnostics, as well as clippy lints, all run into issues where we don't want to lint on or make suggestions for code that is generated by proc-macros, but `Span::from_expansion` returns `false` for proc-macro-generated spans that naively forward user spans without adding distinguishable syntax context.
The best explanation about proc-macro span syntax context and hygiene I can find atm is the [The Little Book of Rust Macros](https://veykril.github.io/tlborm/proc-macros/hygiene.html) by @Veykril. AFAICT The reference on proc-macro span syntax context and hygiene is quite terse: https://doc.rust-lang.org/reference/procedural-macros.html?highlight=hygiene#procedural-macro-hygiene.
This causes issues where crate authors who write proc-macros use
> e.g. `m!(t)` and the span of `t` is used when generating code via `.set_span`/`quote_spanned!`/etc.
Example issue I ran into when trying to implement `unit_bindings`: https://github.com/rust-lang/rust/pull/112380#issuecomment-1657124150 (I think I now know the fix for Rocket isn't exactly reasoned correctly but happened to add sufficient Syntax Context to suppress the lint)
Example clippy issue: https://github.com/rust-lang/rust-clippy/issues/13458
From the perspective of `rustc` and `clippy`, *technically* it's "not our fault" because the user provided a span that has no distinguishing syntax context, but from the perspective of the user it's very confusing. It does not help rustc/clippy maintainers nor does it help users.
Zulip thread: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/proc-macro.20span.20hygiene
I don't know who's maintaining `proc-macro` or who has purview over the proc-macro API, I thought it was T-libs or T-libs-api but apparently it's not T-libs, and I don't know about T-libs-api. I found @petrochenkov in triagebot.toml, maybe you know more about this?
Tagging this as T-lang because it's part of the Rust language that is quite rough when in comes to interaction between compiler/tooling and user experience. | T-lang,T-compiler,A-docs,A-proc-macros,C-discussion,A-hygiene | low | Minor |
2,554,495,672 | godot | Documentation of InputEventMouseMotion Velocity is misleading | ### Tested versions
v4.4.dev2.official [97ef3c837]
### System information
Godot v4.4.dev2 - Windows 10.0.22631 - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4070 (NVIDIA; 32.0.15.5585) - AMD Ryzen 7 5800X 8-Core Processor (16 Threads)
### Issue description
The documentation of the InputEventMouseMotion `velocity` and `screen_velocity` properties talks about how they should be used with Input.MOUSE_MODE_CAPTURED, which is confusing, as in captured mode both return `(0, 0)`. This is intended according to https://github.com/godotengine/godot/issues/32578 (it's an old one, so I don't know if this is still intended).
IMO either velocity should return a value in captured mode or the documentation should be adjusted to remove the note about captured mouse and instead document that it returns Vector2(0, 0) in that case.
Current documentation for reference:


### Steps to reproduce
Add this script and see it printing `(0, 0)`. _If you comment out the mouse capture, it prints a velocity_:
```
func _ready() -> void:
Input.mouse_mode = Input.MOUSE_MODE_CAPTURED
func _input(event: InputEvent) -> void:
if event is InputEventMouseMotion:
print(event.get_velocity())
print(event.get_screen_velocity())
```
### Minimal reproduction project (MRP)
N/A | bug,documentation,topic:input | low | Minor |
2,554,512,139 | flutter | Dialog animations not following Material Design Specification | ### Steps to reproduce
1. Create a dialog
2. Open the dialog (and see)
3. Close the dialog (and see)
### Expected results
The Material Design Specification states, "android components expand and collapse along the x or y axis as they enter and exit". The dialog widget should expand from the top to the bottom and collapse from the bottom to the top. I attached an example in the screenshot section.
https://m3.material.io/styles/motion/transitions/transition-patterns#a8acc8d4-8de2-4602-a9a3-945d44d08bad
### Actual results
The dialog fades into the view. This does not meet the requirements given by the specification, and looks more like a relic from MD2.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: MainScaffold(),
);
}
}
class MainScaffold extends StatelessWidget {
const MainScaffold({
super.key,
});
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: TextButton(
onPressed: () {
showDialog(
context: context,
builder: (context) {
return AlertDialog(
icon: const Icon(Icons.delete),
title: const Text("Permanently delete?"),
content: const Text(
"Deleting the selected messages will also remove them from synced devices."),
actions: [
TextButton(
onPressed: () => Navigator.pop(context),
child: const Text("Cancel")),
FilledButton.tonal(
onPressed: () => Navigator.pop(context),
child: const Text("Delete")),
]);
});
},
child: const Text("Open Dialog")),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
| Flutter behavior | Material specification (1 is android) |
|-|-|
|  |  |
</details>
### Logs
<details><summary>Logs</summary>
> Not required
</details>
### Flutter Doctor output
<details><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.22.2, on Microsoft Windows [Version 10.0.22631.4169], locale de-DE)
• Flutter version 3.22.2 on channel stable at E:\dev\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 761747bfc5 (4 months ago), 2024-06-05 22:15:13 +0200
• Engine revision edd8546116
• Dart version 3.4.3
• DevTools version 2.34.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\jakob\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.9+0--11185874)
• All Android licenses accepted.
[√] Android Studio (version 2023.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0--11185874)
[√] VS Code (version 1.93.1)
• VS Code at C:\Users\jakob\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.86.0
[√] Connected device (1 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,a: animation,f: material design,a: fidelity,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.26 | low | Major |
2,554,527,118 | godot | [4.4.dev 2] ufbx importer does not find Synty textures | ### Tested versions
4.4.dev 2
### System information
Windows 11 Vulkan Forward +
### Issue description
I saw a similar report/post somewhere, but could not find it so I apologize if this is repeated somewhere.
The Synty assets that are released as their "Sources Files" contain .FBX files that reference links to Dropbox or the .psd image textures rather then the same file name with the .png extension that is actually contained in the files.
```
Resource file not found: res:// (expected type: Texture2D)
Can't open file from path 'res://Dropbox/SyntyStudios/PolygonHorrorMansion/_Working/_Textures/PolygonHorror_Texture_01.psd'.
modules/fbx/fbx_document.cpp:1088 - FBX: Image index '1' couldn't be loaded from path: res://Dropbox/SyntyStudios/PolygonHorrorMansion/_Working/_Textures/PolygonHorror_Texture_01.psd because there was no data to load. Skipping it.
Resource file not found: res:// (expected type: Texture2D)
Can't open file from path 'res://_Textures/PolygonHorror_Texture_01.psd'.
modules/fbx/fbx_document.cpp:1088 - FBX: Image index '0' couldn't be loaded from path: res://_Textures/PolygonHorror_Texture_01.psd because there was no data to load. Skipping it.
Resource file not found: res:// (expected type: Texture2D)
Can't open file from path 'res://PolygonShops/_Working/_Textures/PolygonShops_Walls_Texture_01.psd'.
modules/fbx/fbx_document.cpp:1088 - FBX: Image index '1' couldn't be loaded from path: res://PolygonShops/_Working/_Textures/PolygonShops_Walls_Texture_01.psd because there was no data to load. Skipping it.
Resource file not found: res:// (expected type: Texture2D)
Can't open file from path 'res://_Textures/Glass.psd'.
```
As a fallback, the ufbx importer does not appear to look through the filesystem for a matching filename with another usable extension (such as the `.png` files actually contained in the pack) when the referenced texture cannot be found.
Additionally, there appears to be no way to disable the ufbx importer so that the FBXState and FBXDocument classes can be used to override and "handle" the .fbx files.
### Steps to reproduce
I was importing .fbx files from their PolygonHorrorMansion Pack sourcefiles.
### Minimal reproduction project (MRP)
NA | needs testing,topic:import | low | Major |
2,554,537,411 | vscode | Color of placeholder text of git commit message does not respect setting |
Does this issue occur when all extensions are disabled?: Yes
Version: 1.93.1 (user setup)
Commit: 38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40
Date: 2024-09-11T17:20:05.685Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Windows_NT x64 10.0.22631
Steps to Reproduce:
1. Pick any theme
2. set ` "workbench.colorCustomizations": { "editor.placeholder.foreground": "#ff0000"}`
3. Expected: the placeholder text of the git commit message should turn red (or whatever color you set). Actual: it stays at the default, which is `editorGhostText.foreground`; changing the ghost text color indeed also changes this placeholder text color.
I checked using the developer tool that the placeholder text of git commit message is indeed controlled by `editor.placeholder.foreground`.
see screenshot below
(theme: default dark modern)

(theme: wine-bar-monokai)

| bug,scm | low | Critical |
2,554,539,116 | neovim | Option to disable writing extended attributes | ### Problem
Hi,
I am working for a large software company. Our codebase is remote, and mounted locally on the workstations. We also have various automation tools which pick up changes to the files and perform build, test, deployment, etc.
Since Neovim 0.10, a quite annoying behavior has appeared in this workflow. When you save a file in Neovim, our automation tools first pick up a change to the file having no contents (as if the file was written completely empty), the build/test fails, and only on the second attempt does it succeed.
I have debugged this problem and traced the root cause to Neovim writing extended attributes. If I run `strace` on Neovim 0.9 vs. any commit after 0.10, I see that Neovim 0.10+ makes various additional xattr calls (see the strace output attached below). Since there are many attributes to set, the file write operation is also significantly slower (although that is still tolerable).
I have also verified in a local build that if I remove the calls to [os_copy_xattr](https://github.com/neovim/neovim/blob/69553f7bf55c060733553d96a068c1104c885bce/src/nvim/os/fs.c#L790), then the issue disappears.
Do you think it would be possible to add a configuration option to disable the handling of extended attributes?
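For reference, the copy that `os_copy_xattr` performs is roughly equivalent to the following Python sketch. The `enabled` flag models the configuration option being requested here and is purely hypothetical; Neovim has no such option today.

```python
import os
import tempfile

def copy_xattrs(src: str, dst: str, enabled: bool = True) -> int:
    """Copy extended attributes from src to dst, returning how many copied.

    `enabled=False` models the requested option to skip xattr handling.
    """
    if not enabled or not hasattr(os, "listxattr"):
        return 0
    try:
        names = os.listxattr(src)
    except OSError:  # filesystem without xattr support
        return 0
    copied = 0
    for name in names:
        try:
            os.setxattr(dst, name, os.getxattr(src, name))
            copied += 1
        except OSError:  # e.g. EOPNOTSUPP on the destination, as in the trace
            pass
    return copied

# Two throwaway files standing in for "filename.cc~" and "filename.cc".
src_path = tempfile.NamedTemporaryFile(delete=False).name
dst_path = tempfile.NamedTemporaryFile(delete=False).name
```

Note how every `setxattr` here can fail with EOPNOTSUPP exactly as in the trace below, yet each attribute still costs a round trip on a network mount.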
Here is the output of `strace -e trace=file --decode-fds=path -p xxx` showing a single ":w" call. Note that file names and attribute names are masked. See the block of xattr calls in the middle of the trace.
```
# These calls are made in both Neovim 0.9 and 0.10+
statx(AT_FDCWD</masked>, "/masked/filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
statx(AT_FDCWD</masked>, "/masked/filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
statx(AT_FDCWD</masked>, "/masked/filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
access("/masked/filename.cc", W_OK) = 0
statx(AT_FDCWD</masked>, "/masked/filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
statx(AT_FDCWD</masked>, "filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
access("filename.cc", W_OK) = 0
statx(AT_FDCWD</masked>, "filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
statx(AT_FDCWD</masked>, "filename.cc", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
statx(AT_FDCWD</masked>, "masked/4913", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, 0x7ffd0fa2f500) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD</masked>, "masked/4913", O_WRONLY|O_CREAT|O_EXCL|O_NOFOLLOW|O_CLOEXEC, 0100664) = 16</masked/masked/4913>
statx(AT_FDCWD</masked>, "masked/4913", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=0, ...}) = 0
unlink("masked/4913") = 0
statx(AT_FDCWD</masked>, "filename.cc~", AT_STATX_SYNC_AS_STAT, STATX_ALL, 0x7ffd0fa2f6e0) = -1 ENOENT (No such file or directory)
statx(AT_FDCWD</masked>, "filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14205, ...}) = 0
statx(AT_FDCWD</masked>, "filename.cc~", AT_STATX_SYNC_AS_STAT, STATX_ALL, 0x7ffd0fa2f380) = -1 ENOENT (No such file or directory)
unlink("filename.cc~") = -1 ENOENT (No such file or directory)
rename("filename.cc", "filename.cc~") = 0
openat(AT_FDCWD</masked>, "filename.cc", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0664) = 16</masked/filename.cc>
# These calls are made only in Neovim 0.10+
listxattr("filename.cc~", NULL, 0) = 122
listxattr("filename.cc~", "masked"..., 122) = 122
getxattr("filename.cc~", "masked", NULL, 0) = 103
getxattr("filename.cc~", "masked", NULL, 0) = 32
getxattr("filename.cc~", "masked", NULL, 0) = 16
getxattr("filename.cc~", "masked", NULL, 0) = 36
getxattr("filename.cc~", "masked", NULL, 0) = 64
getxattr("filename.cc~", "masked", NULL, 0) = 60
getxattr("filename.cc~", "masked", "masked"..., 103) = 103
setxattr("filename.cc", "masked", "masked"..., 103, 0) = -1 EOPNOTSUPP (Operation not supported)
getxattr("filename.cc~", "masked", "...", 103) = 32
setxattr("filename.cc", "masked", "...", 32, 0) = -1 EOPNOTSUPP (Operation not supported)
getxattr("filename.cc~", "masked", "...", 103) = 16
setxattr("filename.cc", "masked", "...", 16, 0) = -1 EOPNOTSUPP (Operation not supported)
getxattr("filename.cc~", "masked", "..."..., 103) = 36
setxattr("filename.cc", "masked", "..."..., 36, 0) = -1 EOPNOTSUPP (Operation not supported)
getxattr("filename.cc~", "masked", "..."..., 103) = 64
setxattr("filename.cc", "masked", "..."..., 64, 0) = -1 EOPNOTSUPP (Operation not supported)
getxattr("filename.cc~", "masked", "//masked"..., 103) = 60
setxattr("filename.cc", "masked", "//masked"..., 60, 0) = -1 EOPNOTSUPP (Operation not supported)
# End of Neovim xattr calls
statx(AT_FDCWD</masked>, "filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14206, ...}) = 0
statx(AT_FDCWD</masked>, "filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14206, ...}) = 0
chmod("filename.cc", 0100664) = 0
statx(AT_FDCWD</masked>, "/masked/filename.cc", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=14206, ...}) = 0
unlink("filename.cc~") = 0
```
### Expected behavior
Make it possible to disable copying of extended attributes via a configuration option. | enhancement,performance,bug-vim,needs:vim-patch,filesystem | low | Critical |
2,554,541,085 | godot | Moving Parallax2D node using the move tool with grid snap enabled causes rubber-banding effect. | ### Tested versions
- Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 31.0.15.4633) - AMD Ryzen 5 5600G with Radeon Graphics (12 Threads)
### Issue description
Moving a Parallax2D node using the move tool with grid snap enabled causes a "rubber-banding" effect. The node's scroll offset will still be updated in the inspector but will not appear in the editor window until the position is adjusted using the inspector or the scene is reloaded (eg. via switching tabs).

A few things of note:
- Only reproducible if the "grid snap" option is enabled.
- Happens despite the `ignore_camera_scroll` property of the Parallax2D node being set to `true`.
- Happens despite the `screen_offset` property of the Parallax2D node being unmodified.
- Sometimes, the position *does* immediately appear in the editor. This happens once in the example provided above.
While I did read the following on the [documentation page](https://docs.godotengine.org/en/stable/classes/class_parallax2d.html#class-parallax2d) for the Parallax2D node:
> Note: Any changes to this node's position made after it enters the scene tree will be overridden if [ignore_camera_scroll](https://docs.godotengine.org/en/stable/classes/class_parallax2d.html#class-parallax2d-property-ignore-camera-scroll) is false or [screen_offset](https://docs.godotengine.org/en/stable/classes/class_parallax2d.html#class-parallax2d-property-screen-offset) is modified.
...This issue appears to be unrelated. If it is related or intended, I believe the engine should clarify that.
### Steps to reproduce
1. Create a new godot project and open the 2D view.
2. Create a new `Parallax2D` node.
3. *Optionally, add a `Sprite2D`as a child of the `Parallax2D` node. This will make the effect more visible.*
4. *Optionally, set `ignore_camera_scroll` to `true`. This does not appear to have any effect, but the [documentation](https://docs.godotengine.org/en/stable/classes/class_parallax2d.html#class-parallax2d) says you won't be able to move it otherwise.*
5. Enable grid-snapping in the toolbar.
6. Select the move tool and attempt to move the `Parallax2D` node.
### Minimal reproduction project (MRP)
[Parallax2D-moving-bug.zip](https://github.com/user-attachments/files/17176727/Parallax2D-moving-bug.zip)
| bug,topic:editor,usability,topic:2d | low | Critical |
2,554,561,171 | flutter | Remove all nullable generics in plugin Pigeon definitions | Now that #97848 is fixed, we should sweep all of our plugins for any nullable generics that were nullable only because of that issue (that's probably *any* nullable generic, but there may be some rare exceptions). Many have a TODO annotated with #97848, but we shouldn't assume they all have an annotation.
In general I would expect that we can just:
- Remove the nullability from generics.
- Update to the latest version of Pigeon.
- Remove any casting on the Dart side.
- Do any incidental changes for intervening Pigeon breaking changes (e.g., setup->setUp)
Since most of our plugins are using Obj-C and Java on the native side still, native changes will likely be minimal.
This will give us a nearly-free improvement in type safety, and also avoid having other developers who look to our plugins as examples thinking they still need the same workaround. | package,team-ecosystem,P1,triaged-ecosystem | medium | Minor |
2,554,564,257 | PowerToys | Use MIME Types Rather Than File Extensions to Choose Peek Preview Handler | ### Description of the new feature / enhancement
When Peek is used to preview a file with no extension, rather than display the standard Peek window shown when no preview can be displayed, use the MIME type to determine if the file can in fact be displayed.
### Scenario when this would be used?
It is not uncommon for a user to have at least some files on their drive with no extension. Currently, such files cannot be previewed in Peek (or Explorer's preview pane) without some (probably ill-advised) tweaking of the registry. I believe it is currently only possible by changing the value of `PerceivedType` to `text/plain` for the registry key `HKCR\*`. Then I encountered [this](https://github.com/LRN/mimerun) GitHub project, called `mimerun`, which presents an interesting idea.
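The idea of sniffing file contents instead of trusting extensions can be sketched with a few well-known magic-byte signatures. The signature table below is a tiny illustrative subset of my own choosing, not what Peek or `libmagic` actually uses:

```python
import os
import tempfile

# A few well-known magic-byte signatures (a small illustrative subset of
# what libmagic knows; real detection consults a full magic database).
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\xff\xd8\xff", "image/jpeg"),
    (b"%PDF-", "application/pdf"),
    (b"PK\x03\x04", "application/zip"),
]

def sniff_mime(path: str) -> str:
    """Guess a MIME type from file contents, ignoring the extension."""
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, mime in SIGNATURES:
        if head.startswith(magic):
            return mime
    try:  # last resort: does it look like UTF-8 text?
        head.decode("utf-8")
        return "text/plain"
    except UnicodeDecodeError:
        return "application/octet-stream"

# Demo: an extensionless file whose bytes say "PNG".
fd, extensionless = tempfile.mkstemp()
os.write(fd, b"\x89PNG\r\n\x1a\n" + b"\x00" * 8)
os.close(fd)
```

With a detected MIME type in hand, Peek could dispatch to the same preview handler it would use for the corresponding extension.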
### Supporting information
Here is the README Description of the **`mimerun`** Project:
<div align=justify>
Mimerun installs itself as a shell hook (shell hooks are supported in all versions of Windows since Windows XP, although they are disabled in Vista and later by default).
Whenever something (Windows Explorer, Windows command line shell, or any Windows program) calls **`ShellExecute()`** or **`ShellExecuteEx()`**, Mimerun catches that and, if target is a file, uses **`libmagic`** (the supporting library of the UNIX file(1) utility) to guess the type of target. Then it compares the result of type guessing with a set of Mimerun rules (written by user, stored in the registry) and, if a match is found, executes the command specified in the matching rule. Once a matching rule is found and its handler is executed successfully, Mimerun signals the caller of that success, stopping the shell from trying to handle the target. If no rules match, or the target is not a file, or if there were errors during the computation, Mimerun signals a failure and lets the shell find other ways to handle the target.
#### Shebang Support
Mimerun also supports shebangs, being able to use them exclusively to run scripts with correct interpreters, or supplementing a handler with them.
Commandline bridge is used to force the shell to pass any unknown files (files that do not match any of the shell's own file associations) to Mimerun. This allows Mimerun to process files that were double-clicked in Windows Explorer and were not handled by the shell.
Adding "." to `PATHEXT` environment variable allows files without extensions to be handled correctly.
</div> | Needs-Triage | low | Critical |
2,554,582,959 | yt-dlp | [YouTube] embed-metadata causes conversion failed in some videos. "Could not find tag for codec eac3" | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Some videos give a "Conversion failed" error when audio extraction and metadata embedding are used together. The audio file is still downloaded at least, but leftover files remain (such as the temp file and the thumbnail file), and the audio file ends up without any metadata. This doesn't happen when I try to download the video.
It only happens on these videos
https://youtu.be/1SEgoi7kjw8
https://youtu.be/VKzWLUQizz8
https://youtu.be/TNEQ2Z0cjh4 (mild nsfw?)
This issue is a problem for me because I run yt-dlp automatically on a daily basis to archive many of my music playlists and channels I watch. I can't always manually check that every file was downloaded properly. It would be nice to have a workaround that doesn't involve removing the metadata option or adding `rm *.webp` to my script; I mainly want to keep the title and artist.
Thankfully, it seems that those are actually the only videos where I have this issue.
This issue might be relevant to #4838, #11020 and #9303.
What stands out from the logs:
```
[ipod @ 0x5b29100353c0] Could not find tag for codec eac3 in stream #0, codec not currently supported in container
[out#0/ipod @ 0x5b2910037940] Could not write header (incorrect codec parameters ?): Invalid argument
Conversion failed!
```
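For context, the `ipod` muxer (which ffmpeg uses for `.m4a`) has no codec tag for E-AC-3, so `-acodec copy` into `.m4a` cannot work for these streams. One conceivable guard is to pick a container that can actually hold the stream before remuxing; the allowlist below is my assumption for illustration, not yt-dlp's actual logic:

```python
# Codec tags the ipod/.m4a muxer can write (assumed allowlist for
# illustration; yt-dlp's real logic lives in its ffmpeg postprocessor).
M4A_SAFE_CODECS = {"aac", "alac"}

def pick_audio_container(codec: str) -> str:
    """Pick a container extension that can carry `codec` with -acodec copy."""
    if codec in M4A_SAFE_CODECS:
        return "m4a"
    return "mka"  # Matroska audio accepts nearly anything, including eac3
```

Under that sketch, the three videos above (which serve E-AC-3 audio) would be remuxed to `.mka` instead of failing in the `.m4a` metadata pass.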
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--embed-metadata', '-x', '1SEgoi7kjw8']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.09.27 from yt-dlp/yt-dlp [c6387abc1]
[debug] Python 3.12.6 (CPython x86_64 64bit) - Linux-6.10.10-zen1-1-zen-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, rtmpdump 2.4
[debug] Optional libraries: brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, pycrypto-3.20.0, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.09.27 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.09.27 from yt-dlp/yt-dlp)
[youtube] Extracting URL: 1SEgoi7kjw8
[youtube] 1SEgoi7kjw8: Downloading webpage
[youtube] 1SEgoi7kjw8: Downloading ios player API JSON
[youtube] 1SEgoi7kjw8: Downloading web creator player API JSON
[debug] [youtube] Extracting signature function js_b0557ce3_113
[debug] Loading youtube-sigfuncs.js_b0557ce3_113 from cache
[debug] Loading youtube-nsig.b0557ce3 from cache
[debug] [youtube] Decrypted nsig oMghXuigoqF5lfDZr => yB19ty6QSdYr_g
[debug] Loading youtube-nsig.b0557ce3 from cache
[debug] [youtube] Decrypted nsig hAQsDVvSczaXJOm2G => W7aMiJw1WoQhFQ
[debug] [youtube] Extracting signature function js_b0557ce3_109
[debug] Loading youtube-sigfuncs.js_b0557ce3_109 from cache
[youtube] 1SEgoi7kjw8: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[info] 1SEgoi7kjw8: Downloading 1 format(s): 328
[debug] Invoking http downloader on "https://rr1---sn-n02xgoxufvg3-2gb6.googlevideo.com/videoplayback?expire=1727592400&ei=cKP4Zo-aONXIi9oP5-rUqAU&ip=185.246.210.17&id=o-AByJSTMm8WXnoTKwZPjvKd4Y7DKvmZQQSXDsviBR-wqE&itag=328&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&mh=dd&mm=31%2C29&mn=sn-n02xgoxufvg3-2gb6%2Csn-4g5lznl7&ms=au%2Crdu&mv=m&mvi=1&pl=24&initcwndbps=693750&bui=AXLXGFQPpjzHbd9oBGqAhxseFPb9y00zU16oDP6rsDx5GLQxC5ADhVvOttcdXlF2TE_GXInlutV-V4xn&spc=54MbxTwfCJvktICLHHVxraWu5Eau_jz7H97C67AiGNZHSzGhpnzY&vprv=1&svpuc=1&mime=audio%2Fmp4&ns=EspYaqKJ7Jgq_kTKc62DzaEQ&rqh=1&gir=yes&clen=12734232&dur=265.216&lmt=1710115319472727&mt=1727570471&fvip=4&keepalive=yes&fexp=51299152%2C51300761&c=WEB_CREATOR&sefc=1&txp=4432434&n=W7aMiJw1WoQhFQ&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABPmVW0wRQIgDiVE46SaLa4krGd4ZNjSVl_qw7kQMppIKEPWL47UJ6MCIQDnGf6y1JUh9ji0Te0woQH8EMXx8qwMWCO_G2w36pV41Q%3D%3D&sig=AJfQdSswRAIgPUQ9ii7r7ECEF7hJFjOO8xkOG6t7r0c95W9WnwPaWBMCIH22fvjEXB20PTBaAC1UML4DfxvdY5tc43Q_-1iSRX9Z"
[download] Justice - Waters Of Nazareth - † (Official Audio) [1SEgoi7kjw8].m4a has already been downloaded
[download] 100% of 12.14MiB
[ExtractAudio] Not converting audio Justice - Waters Of Nazareth - † (Official Audio) [1SEgoi7kjw8].m4a; the file is already in a common audio format
[Metadata] Adding metadata to "Justice - Waters Of Nazareth - † (Official Audio) [1SEgoi7kjw8].m4a"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Justice - Waters Of Nazareth - † (Official Audio) [1SEgoi7kjw8].m4a' -map 0 -dn -ignore_unknown -vn -acodec copy -write_id3v1 1 -metadata 'title=Justice - Waters Of Nazareth - † (Official Audio)' -metadata date=20121207 -metadata 'description=“HYPERDRAMA” new album pre-save: https://justice.lnk.to/hyperdramayo
Justice “One Night/All Night” (Starring Tame Impala) out now: https://justice.lnk.to/onanyo
//
Taken from the album “†” available on all platforms: https://BecauseMusic.lnk.to/Justice
Subscribe to the channel: http://bit.ly/JusticeChannel
Listen to the essentials from Justice here: https://lnk.to/JusticeEssentials
Justice : Gaspard Augé and Xavier de Rosnay
Official website → https://justice.church
Follow Justice:
Facebook: http://facebook.com/etjusticepourtous
Instagram: http://instagram.com/etjusticepourtous
© 2007 Ed Banger Records / Because Music
#Justice' -metadata 'synopsis=“HYPERDRAMA” new album pre-save: https://justice.lnk.to/hyperdramayo
Justice “One Night/All Night” (Starring Tame Impala) out now: https://justice.lnk.to/onanyo
//
Taken from the album “†” available on all platforms: https://BecauseMusic.lnk.to/Justice
Subscribe to the channel: http://bit.ly/JusticeChannel
Listen to the essentials from Justice here: https://lnk.to/JusticeEssentials
Justice : Gaspard Augé and Xavier de Rosnay
Official website → https://justice.church
Follow Justice:
Facebook: http://facebook.com/etjusticepourtous
Instagram: http://instagram.com/etjusticepourtous
© 2007 Ed Banger Records / Because Music
#Justice' -metadata 'purl=https://www.youtube.com/watch?v=1SEgoi7kjw8' -metadata 'comment=https://www.youtube.com/watch?v=1SEgoi7kjw8' -metadata artist=Justice -metadata:s:0 language=eng -movflags +faststart 'file:Justice - Waters Of Nazareth - † (Official Audio) [1SEgoi7kjw8].temp.m4a'
[debug] ffmpeg version n7.0.2 Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 14.2.1 (GCC) 20240910
configuration: --prefix=/usr --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-libfontconfig --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libdav1d --enable-libdrm --enable-libdvdnav --enable-libdvdread --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libharfbuzz --enable-libiec61883 --enable-libjack --enable-libjxl --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libplacebo --enable-libpulse --enable-librav1e --enable-librsvg --enable-librubberband --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpl --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-opencl --enable-opengl --enable-shared --enable-vapoursynth --enable-version3 --enable-vulkan --enable-alsa --enable-bzlib --enable-iconv --enable-libxcb-shm --enable-libxcb-xfixes --enable-libxcb-shape --enable-lzma --enable-sdl2 --enable-xlib --enable-zlib --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-v4l2-m2m --enable-vaapi --enable-vdpau --enable-librist --enable-lto --enable-libsvtav1 --enable-libvmaf --enable-cuda-llvm --disable-cuvid --disable-debug --disable-sndio
libavutil 59. 8.100 / 59. 8.100
libavcodec 61. 3.100 / 61. 3.100
libavformat 61. 1.100 / 61. 1.100
libavdevice 61. 1.100 / 61. 1.100
libavfilter 10. 1.100 / 10. 1.100
libswscale 8. 1.100 / 8. 1.100
libswresample 5. 1.100 / 5. 1.100
libpostproc 58. 1.100 / 58. 1.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'file:Justice - Waters Of Nazareth - † (Official Audio) [1SEgoi7kjw8].m4a':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomdby1iso2mp41
encoder : Lavf61.1.100
Duration: 00:04:25.22, start: 0.000000, bitrate: 384 kb/s
Stream #0:0[0x1](und): Audio: eac3 (ec-3 / 0x332D6365), 48000 Hz, 5.1(side), fltp, 384 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
Side data:
audio service type: main
Stream mapping:
Stream #0:0 -> #0:0 (copy)
[ipod @ 0x589b513873c0] Could not find tag for codec eac3 in stream #0, codec not currently supported in container
[out#0/ipod @ 0x589b51389940] Could not write header (incorrect codec parameters ?): Invalid argument
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3561, in process_info
replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3743, in post_process
info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3725, in run_all_pps
info = self.run_pp(pp, info)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3703, in run_pp
files_to_delete, infodict = pp.run(infodict)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/postprocessor/common.py", line 23, in run
ret = func(self, info, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/postprocessor/common.py", line 128, in wrapper
return func(self, info)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/postprocessor/ffmpeg.py", line 712, in run
self.run_ffmpeg_multiple_files(
File "/usr/lib/python3.12/site-packages/yt_dlp/postprocessor/ffmpeg.py", line 330, in run_ffmpeg_multiple_files
return self.real_run_ffmpeg(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/postprocessor/ffmpeg.py", line 368, in real_run_ffmpeg
raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
```
| bug,external issue,site:youtube,core:post-processor | low | Critical |
2,554,668,442 | PowerToys | shortcut key conflict | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
While PowerToys is running in the background, holding down the Shift key and a number key does not enter the symbol above the number, and holding down Shift and a letter key does not enter a capital letter. After closing PowerToys, everything returns to normal.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,554,674,363 | pytorch | The error message of `nn.RNN()` for the input tensor should say ValueError: the dtype of the `input` tensor and `RNN()` must be the same but got `...` and `...` respectively | ### 🐛 Describe the bug
Setting a `float64` tensor to [nn.RNN()](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html) gets the error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]], dtype=torch.float64) # float64
torch.manual_seed(42)
rnn = nn.RNN(input_size=3, hidden_size=2)
rnn(input=my_tensor) # Error
```
> ValueError: input must have the type torch.float32, got type torch.float64
And, setting a `complex64` tensor to `nn.RNN()` gets the error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8.+0.j, -3.+0.j, 5.+0.j]]) # complex64
torch.manual_seed(42)
rnn = nn.RNN(input_size=3, hidden_size=2)
rnn(input=my_tensor) # Error
```
> ValueError: input must have the type torch.float32, got type torch.complex64
But setting a `float64` tensor or `complex64` tensor to `nn.RNN()` with `dtype=torch.float64` or `dtype=torch.complex64` respectively works as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]], dtype=torch.float64) # float64
torch.manual_seed(42)
# ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
rnn = nn.RNN(input_size=3, hidden_size=2, dtype=torch.float64)
rnn(input=my_tensor)
# (tensor([[-1.0000, -0.9999]], dtype=torch.float64, grad_fn=<SqueezeBackward1>),
# tensor([[-1.0000, -0.9999]], dtype=torch.float64, grad_fn=<SqueezeBackward1>))
my_tensor = torch.tensor([[8.+0.j, -3.+0.j, 5.+0.j]]) # complex64
torch.manual_seed(42)
# ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
rnn = nn.RNN(input_size=3, hidden_size=2, dtype=torch.complex64)
rnn(input=my_tensor)
# (tensor([[ 0.9965-0.0006j, -0.9936-0.0332j]], grad_fn=<SqueezeBackward1>),
# tensor([[ 0.9965-0.0006j, -0.9936-0.0332j]], grad_fn=<SqueezeBackward1>))
```
I think the error messages should be something like those shown below:
> ValueError: the dtype of the `input` tensor and `RNN()` must be the same but got `float64` and `float32` respectively
> ValueError: the dtype of the `input` tensor and `RNN()` must be the same but got `complex64` and `float32` respectively
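A pure-Python sketch of the proposed check follows (no torch dependency here; the function name is hypothetical and the wording mirrors the suggested message):

```python
def check_rnn_input_dtype(input_dtype: str, module_dtype: str) -> None:
    """Raise the proposed, more direct error when the dtypes disagree."""
    if input_dtype != module_dtype:
        raise ValueError(
            "the dtype of the `input` tensor and `RNN()` must be the same "
            f"but got `{input_dtype}` and `{module_dtype}` respectively"
        )
```

Naming both dtypes in one sentence tells the user immediately which side to change, rather than only which dtype the module expected.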
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,554,679,518 | storybook | [Bug]: Storybook v8.1.6 (Vite + TS) deployment error: SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON; works locally, fails after deployment | ### Describe the bug
1. JSON syntax error: upon accessing the deployed Storybook in production, the 'View error' button reports the following error: `SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON`. This suggests an issue with resource loading, possibly an attempt to load an HTML document where a JSON file was expected.

### Reproduction link
test
### Reproduction steps
_No response_
### System
_No response_
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,554,685,594 | pytorch | `input_size` argument of `nn.RNN()` gets indirect error messages | ### 🐛 Describe the bug
Setting the float value `3.` to `input_size` of [nn.RNN()](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html) gets the indirect error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
torch.manual_seed(42)
# ↓↓
rnn = nn.RNN(input_size=3., hidden_size=2) # Error
rnn(input=my_tensor)
```
> TypeError: empty(): argument 'size' failed to unpack the object at pos 2 with error "type must be tuple of ints,but got float"
And, setting the boolean value `True` to `input_size` of `nn.RNN()` gets the indirect error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
torch.manual_seed(42)
# ↓↓↓↓
rnn = nn.RNN(input_size=True, hidden_size=2)
rnn(input=my_tensor) # Error
```
> RuntimeError: input.size(-1) must be equal to input_size. Expected True, got 3
So, the error messages should be something more direct, like those shown below:
> TypeError: `input_size` argument must be `int` but got `float`
> TypeError: `input_size` argument must be `int` but got `bool`
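A minimal sketch (not PyTorch's actual code; the helper name is hypothetical) of the kind of up-front validation that would produce such direct messages. Note that `bool` must be rejected explicitly because it is a subclass of `int` in Python:

```python
def check_input_size(input_size):
    # bool is tested first: isinstance(True, int) is True in Python
    if isinstance(input_size, bool) or not isinstance(input_size, int):
        raise TypeError(
            f"`input_size` argument must be `int` but got "
            f"`{type(input_size).__name__}`"
        )

for bad in (3., True):
    try:
        check_input_size(bad)
    except TypeError as e:
        print(e)  # the proposed messages, with `float` and `bool` respectively
```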
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,554,689,708 | pytorch | Setting the boolean value `True` to `hidden_size` argument of `nn.RNN()` gets an indirect error message | ### 🐛 Describe the bug
Setting the boolean value `True` to `hidden_size` argument of [nn.RNN()](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html) gets the indirect error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
torch.manual_seed(42)
# ↓↓↓↓
rnn = nn.RNN(input_size=3, hidden_size=True) # Error
```
```
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```
While setting the float value `1.` to `hidden_size` argument of `nn.RNN()` gets the correct error message, as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
torch.manual_seed(42)
# ↓↓
rnn = nn.RNN(input_size=3, hidden_size=1.) # Error
```
> TypeError: hidden_size should be of type int, got: float
So, the wrong error message should be corrected as shown below:
> TypeError: hidden_size should be of type int, got: bool
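The likely root cause is that `bool` is a subclass of `int` in Python, so a plain `isinstance(..., int)` check lets `True` through while a `float` is caught. A quick demonstration:

```python
# bool passes an int isinstance check, which is presumably why the
# existing "should be of type int" validation does not reject True
print(isinstance(True, int))    # True
print(issubclass(bool, int))    # True
print(isinstance(1., int))      # False
```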
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,554,691,531 | electron | restore upstream `CHECK_EQ(IsGuest()...)` test | [This Chromium roll](https://github.com/electron/electron/pull/43948) removes a new upstream `CHECK_EQ()` in `WebContentsImpl::CreateNewWindow()`:
```diff
- // While some guest types do not have a guest SiteInstance, the ones that
- // don't all override WebContents creation above.
- CHECK_EQ(source_site_instance->IsGuest(), IsGuest());
bool is_guest = IsGuest();
```
That check is failing in Electron, both in CI runs and in manual testing.
I tried injecting just that one `CHECK_EQ()` line into `main` on its own (in 27d2a8f9e28715c3da6e8570bbc12d89cfa88178) and it fails there, too. So whatever's going on is not new to this roll; some pre-existing condition in Electron is causing `IsGuest()` and `source_site_instance->IsGuest()` to have different values.
This ticket's TODO is to understand why and figure out if it can be solved in Electron's code instead of in a patch. | component/webcontents,upgrade-follow-up,stale | low | Minor |
2,554,697,398 | deno | /etc/hosts seems to be ignored | Version: Deno 1.46.3 and 2.0.0-rc.7
OS: Macos Sequoia
Hello
I am trying to use this code:
```ts
import dns from "node:dns";
dns.lookup("mongo-1", (err, address, family) => {
console.error(err);
console.log("address: %j family: IPv%s", address, family);
});
```
I also have the issue with the `mongodb` package from NPM.
It prints:
```bash
Error: getaddrinfo ENOTFOUND mongo-1
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:93:9)
at __node_internal_ (ext:deno_node/internal/errors.ts:246:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:37:26)
at ext:deno_node/internal_binding/cares_wrap.ts:78:9
at eventLoopTick (ext:core/01_core.js:175:7) {
errno: -3007,
code: "ENOTFOUND",
syscall: "getaddrinfo",
hostname: "mongo-1"
}
address: %j family: IPvundefined undefined
```
`/etc/hosts`
```txt
127.0.0.1 mongo-1
```
I tested using bash (netcat) and MongoDB Compass, and the name `mongo-1` works as expected.
I tried this command as well: `sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder` without success.
Thank you.
| bug,node compat | low | Critical |
2,554,697,471 | pytorch | Setting wrong values to `num_layers` of `nn.RNN()` gets indirect error messages | ### 🐛 Describe the bug
Setting a float value to `num_layers` of [nn.RNN()](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html) gets the indirect error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
# ↓↓
rnn = nn.RNN(input_size=3, hidden_size=1, num_layers=1.) # Error
rnn(input=my_tensor)
```
> TypeError: 'float' object cannot be interpreted as an integer
And, setting a complex value to `num_layers` of `nn.RNN()` also gets the indirect error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
# ↓↓↓↓↓↓
rnn = nn.RNN(input_size=3, hidden_size=1, num_layers=1.+0.j) # Error
rnn(input=my_tensor)
```
> TypeError: '<=' not supported between instances of 'complex' and 'int'
And, setting a bool value to `num_layers` of `nn.RNN()` also gets the indirect error message as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
# ↓↓↓↓
rnn = nn.RNN(input_size=3, hidden_size=1, num_layers=True)
rnn(input=my_tensor) # Error
```
```
TypeError: rnn_tanh() received an invalid combination of arguments - got (Tensor, Tensor, list, bool, bool, float, bool, bool, bool), but expected one of:
* (Tensor data, Tensor batch_sizes, Tensor hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional)
didn't match because some of the arguments have invalid types: (Tensor, Tensor, !list of [Parameter, Parameter, Parameter, Parameter]!, !bool!, bool, !float!, !bool!, bool, bool)
* (Tensor input, Tensor hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first)
didn't match because some of the arguments have invalid types: (Tensor, Tensor, !list of [Parameter, Parameter, Parameter, Parameter]!, bool, !bool!, float, bool, bool, bool)
```
So, they should be something more direct, like the messages shown below:
> TypeError: num_layers argument must be int but got float
> TypeError: num_layers argument must be int but got complex
> TypeError: num_layers argument must be int but got bool
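The `'<=' not supported` message suggests the complex value fails inside an internal range check such as `num_layers <= 0` rather than in an explicit type check. A small illustration that comparing a complex number to an int raises exactly this error:

```python
err = None
try:
    (1. + 0.j) <= 0  # mirrors a hypothetical `num_layers <= 0` range check
except TypeError as e:
    err = str(e)
print(err)  # '<=' not supported between instances of 'complex' and 'int'
```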
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,554,715,952 | opencv | Allow use of system flatbuffers | ### Describe the feature and motivation
Currently, OpenCV only supports the bundled 3rdparty flatbuffers. Adding an option, as is done for Protobuf, to decide between finding a system installation and building the bundled copy would be much better.
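A hypothetical sketch of such a toggle, mirroring the Protobuf-style convention (the option name and layout here are assumptions, not actual OpenCV code):

```cmake
option(BUILD_FLATBUFFERS "Build flatbuffers from the bundled 3rdparty sources" ON)
if(BUILD_FLATBUFFERS)
  add_subdirectory("${OpenCV_SOURCE_DIR}/3rdparty/flatbuffers")
else()
  find_package(flatbuffers REQUIRED)  # use a system installation instead
endif()
```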
### Additional context
https://github.com/opencv/opencv/blob/4.x/cmake/OpenCVDetectFlatbuffers.cmake | feature,category: build/install,category: dnn | low | Minor |
2,554,723,710 | deno | Failed to get cpu info crash on aarch64 | The latest Deno 2 release candidate crashes inside the Node compatibility function `os.cpus()` when run on an Android device under Termux. To reproduce:
```
$ curl -O https://dl.deno.land/release/v2.0.0-rc.7/deno-aarch64-unknown-linux-gnu.zip
$ unzip deno-aarch64-unknown-linux-gnu.zip
$ apt install glibc-repo
$ apt install glibc-runner
$ grun ./deno eval 'console.log("success")'
success
$ echo 'import {cpus} from "node:os";' >cpus.js
$ echo 'console.log(cpus());' >>cpus.js
$ grun ./deno --allow-sys=cpus cpus.js
error: Uncaught (in promise) TypeError: Failed to get cpu info
at cpus (node:os:63:10)
at file:///data/data/com.termux/files/home/cpus.js:2:13
```
I don't have another ARM device handy, but I'm curious whether this affects all aarch64 machines or just the Termux glibc wrapper script.
Version: Deno 2.0.0-rc.7
| needs investigation,node compat | low | Critical |
2,554,727,026 | rust | Can we avoid the heap allocation in macOS Mutex/Condvar? | This was brought up over [here](https://github.com/rust-lang/rust/issues/93740#issuecomment-1992275013): our current macOS implementation for `Mutex` and `Condvar` use heap allocations. That's unfortunate because it can lead to OOM, and also seems like a slight efficiency hit.
Heap allocations are required because we are using the pthread API on macOS, and pthread Mutexes are not movable. AFAIK, @joboet has been working on alternative implementations that avoid the pthread API (https://github.com/rust-lang/rust/pull/122408). The alternative, suggested by @daira , is to get Apple to guarantee that their pthread mutexes are movable under certain conditions. Given the black box that Apple is, I have no idea if that's even remotely realistic. But anyway it seems worth tracking this somewhere so here we go. :) | O-macos,T-libs,A-atomic | low | Minor |
2,554,744,337 | go | proposal: spec: Values assigned to multi-case branches in type switches should have generics-style | ### Go Programming Experience
Experienced
### Other Languages Experience
C, C++, Python, Rust, Haskell
### Related Idea
- [ ] Has this idea, or one like it, been proposed before?
- [ ] Does this affect error handling?
- [X] Is this about generics?
- [X] Is this change backward compatible? Breaking the Go 1 compatibility guarantee is a large cost and requires a large benefit
### Has this idea, or one like it, been proposed before?
https://github.com/golang/go/issues/65031 seems similar, although the details differ.
https://github.com/golang/go/issues/57644 is possibly related, but I can't tell from its description what effect (if any) it would have on type switches.
### Does this affect error handling?
No
### Is this about generics?
Yes -- this proposal is to make multi-case type switches match the semantics of generic functions with a set of permitted types.
### Proposal
In Go 1.23, a value assigned as the match result in a multi-case type switch retains the original type being inspected. This prevents it from being passed to a function with a type parameter constraint that's only satisfied by the narrowed type.
```go
func fmtUint[T uint8 | uint16 | uint32 | uint64](value T) string {
return strconv.FormatUint(uint64(value), 10)
}
// error: any does not satisfy uint8 | uint16 | uint32 | uint64
func fmtValue(value any) string {
switch v := value.(type) {
case uint8, uint16, uint32, uint64:
// `v` is known to be of the above listed concrete types,
// but its type remains `interface{}`, so it can't be used as `T`.
return fmtUint(v)
}
return fmt.Sprintf("%v", value)
}
// semantically equivalent, and valid in Go 1.23, but overly verbose
func fmtValue(value any) string {
switch v := value.(type) {
case uint8:
return fmtUint(v)
case uint16:
return fmtUint(v)
case uint32:
return fmtUint(v)
case uint64:
return fmtUint(v)
}
return fmt.Sprintf("%v", value)
}
```
I propose to change the semantics of multi-case type switches so that the type of the matched variable becomes the intersection of the matched types, plus the original type:
```go
interface {
// To preserve existing behavior, `v` can be treated as its original type.
any
// `v` can also satisfy type parameter conditions that permit all of the types
// in its case match.
( uint8 | uint16 | uint32 | uint64 )
}
```
The matched case must have types that are a subset of the generic type:
```go
// OK: `v` might be a `uint8` or `uint16`, but both of those are acceptable to `fmtUint`
func fmtValue(value any) string {
switch v := value.(type) {
case uint8, uint16:
return fmtUint(v)
}
return fmt.Sprintf("%v", value)
}
// Error: `v` might be `uintptr`, so the set of possible types is a superset of those
// accepted by`fmtUint`.
func fmtValue(value any) string {
switch v := value.(type) {
case uint8, uint16, uint32, uint64, uintptr:
return fmtUint(v)
}
return fmt.Sprintf("%v", value)
}
```
The type of `v` should also work with the semantics of a generic type parameter for regular code within the case, for example performing conversions that are valid for all matchable types:
```go
// OK: all of the matched types can be cast to uint64.
func fmtValueHex(value any) string {
switch v := value.(type) {
case uint8, uint16, uint32, uint64:
return strconv.FormatUint(uint64(v), 16)
}
return fmt.Sprintf("%v", value)
}
```
### Language Spec Changes
_No response_
### Informal Change
_No response_
### Is this change backward compatible?
I think so? Given existing and proposed behavior, I believe that any existing code would continue to compile and run without changes.
### Orthogonality: How does this change interact or overlap with existing features?
_No response_
### Would this change make Go easier or harder to learn, and why?
_No response_
### Cost Description
_No response_
### Changes to Go ToolChain
_No response_
### Performance Costs
_No response_
### Prototype
_No response_ | LanguageChange,Proposal,Proposal-Hold,generics,LanguageChangeReview | low | Critical |
2,554,746,236 | opencv | Use universal intrinsics in imgwarp.cpp | ### Describe the feature and motivation
Currently, there are a lot of `CV_SIMD128` in https://github.com/opencv/opencv/blob/4.x/modules/imgproc/src/imgwarp.cpp and some SSE4_1, AVX2, LASX SIMD implementation. I suggest using universal intrinsics instead.
This is a following issue of https://github.com/opencv/opencv/issues/26185
### Additional context
_No response_ | feature | low | Minor |
2,554,747,729 | PowerToys | PowerToys Run does not work | ### Microsoft PowerToys version
0.84.1
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
On my laptop with Windows 11, PowerToys Run works as expected, but on my work PC with Windows 10 PowerToys Run is not working. Can anyone help with that?
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,554,768,044 | fastapi | Required with Ellipsis may not work | ### Privileged issue
- [ ] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
https://fastapi.tiangolo.com/tutorial/query-params-str-validations/#required-parameters

python interpreter:3.12.4
fastapi version:0.115.0
my codes:
```
from __future__ import annotations
from typing import Annotated
import uvicorn
from fastapi import FastAPI, Query
app = FastAPI()
@app.get("/items/")
async def read_items(q: Annotated[str, Query(min_length = 3)] = ...):
"""
curl -X 'GET' 'http://127.0.0.1:18081/items/' -H 'accept: application/json'
ValueError: [TypeError("'ellipsis' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
"""
results = {"items": [{"item_id": "Foo"}, {"item_id": "Bar"}]}
if q:
results.update({"q": q})
return results
if __name__ == '__main__':
uvicorn.run(app, host = '127.0.0.1', port = 18081)
```
swagger docs:


ValueError: [TypeError("'ellipsis' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
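The two inner `TypeError`s are consistent with FastAPI/Pydantic treating the bare `Ellipsis` default as a field object. A small standalone illustration of the two operations named in the error (a reconstruction, not FastAPI's actual code path):

```python
# `...` (Ellipsis) is a plain singleton: it is not iterable and has no
# __dict__, matching both TypeErrors in the reported ValueError.
errors = []
for op in (iter, vars):
    try:
        op(...)
    except TypeError as e:
        errors.append(str(e))
print(errors)
```

As a likely workaround, with `Annotated` the parameter is already required when no default is given, so the `= ...` default can simply be dropped.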
| docs,confirmed | low | Critical |
2,554,793,235 | PowerToys | shortcuts with pressed Alt don't work when keymap is changed | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I'm using Keyboard Manager with Microsoft Japanese IME. I've defined some shortcuts to input Polish letters and everything works fine until I change the keymap. After the change (to Polish (Programmers) or US International), shortcuts with pressed Alt don't work as expected.
- set Polish (Programmers) and Japanese IME keymap.
- define a shortcut mapping to send Polish letters, e.g. Alt (Right) + A -> ą; Alt (Right) + c -> ć
- keep right Alt pressed and press ac
- change the keymap
- keep right Alt pressed and press ac
### ✔️ Expected Behavior
Remapping works with the right Alt key held down continuously, i.e. ąć
### ❌ Actual Behavior
After changing the keymap, with right Alt continuously pressed, only the first shortcut sends the proper text, i.e. ąc
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,554,795,678 | rust | wasm32_wasip1_threads's llvm_target is wrong | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
about /compiler/rustc_target/src/spec/targets/wasm32_wasip1_threads.rs
https://github.com/rust-lang/rust/blob/master/compiler/rustc_target/src/spec/targets/wasm32_wasip1_threads.rs#L61
now:
```rust
llvm_target: "wasm32-wasi".into(),
```
I think:
```rust
llvm_target: "wasm32-wasi-threads".into(),
```
With `--target wasm32-wasi`, the linked `.o` files are treated as `wasm32-wasi`, which leads to:
```
= note: wasm-ld: error: --shared-memory is disallowed by dlmalloc.o because it was not compiled with the 'atomics' or 'bulk-memory' features.
```
If you change it, it will be linkable.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
| A-linkage,T-compiler,O-wasm,C-bug | low | Critical |
2,554,805,123 | rust | GAT `type Assoc<T: ?Sized>` implicitly requires `Self` to be `'static` | I tried this code:
```rust
trait Layout {}
trait Desc {
type Children<A: ?Sized>;
fn stage(_children: &Self::Children<dyn Layout>);
}
fn stage<D: Desc>(children: D::Children<dyn Layout>) {
D::stage(&children);
}
```
This doesn't compile with two errors, and I can't explain why:
```
error[E0310]: the parameter type `D` may not live long enough
--> crates/sandbox/src/main.rs:68:5
|
68 | D::stage(&children);
| ^^^^^^^^^^^^^^^^^^^
| |
| the parameter type `D` must be valid for the static lifetime...
| ...so that the type `D` will meet its required lifetime bounds
|
help: consider adding an explicit lifetime bound
|
67 | fn stage<D: Desc + 'static>(children: D::Children<dyn Layout>) {
| +++++++++
error[E0597]: `children` does not live long enough
--> crates/sandbox/src/main.rs:68:14
|
67 | fn stage<D: Desc>(children: D::Children<dyn Layout>) {
| -------- binding `children` declared here
68 | D::stage(&children);
| ---------^^^^^^^^^-
| | |
| | borrowed value does not live long enough
| argument requires that `children` is borrowed for `'static`
69 | }
| - `children` dropped here while still borrowed
```
Why is the `D` required to be `'static` here?
Why does the `&children` need to have a `'static` lifetime here as well?
---
Here are variations that do compile, but I also can't explain why they compile:
<details><summary>Adding `dyn Layout + 'static` in the trait definition</summary>
<p>
```rust
trait Layout {}
trait Desc {
type Children<A: ?Sized>;
fn stage(_children: &Self::Children<dyn Layout + 'static>);
}
fn stage<D: Desc>(children: D::Children<dyn Layout>) {
D::stage(&children);
}
```
</p>
</details>
<details><summary>Adding `dyn Layout + '_` in the trait definition</summary>
<p>
```rust
trait Layout {}
trait Desc {
type Children<A: ?Sized>;
fn stage(_children: &Self::Children<dyn Layout + '_>);
}
fn stage<D: Desc>(children: D::Children<dyn Layout>) {
D::stage(&children);
}
```
</p>
</details>
This variation doesn't compile, but it removes the `` `D` must be valid for the static lifetime`` error and I also don't understand why that is:
```rust
trait Layout {}
trait Desc {
type Children<A: ?Sized>: 'static;
fn stage(_children: &Self::Children<dyn Layout>);
}
fn stage<D: Desc>(children: D::Children<dyn Layout>) {
D::stage(&children);
}
```
There is definitely something implicit going on here which I don't know. Some helpful people suggested this may be related to https://github.com/rust-lang/rust/issues/87479, but I don't see how.
@nikomatsakis do you have an idea if this is related? Is this some compiler bug or smth not documented?
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
| A-diagnostics,A-lifetimes,A-borrow-checker,T-compiler,D-terse,A-trait-objects | low | Critical |
2,554,819,495 | ant-design | RangePicker the triangle sign was not positioned correctly in some cases | ### Reproduction link
[](https://stackblitz.com/edit/react-2hs74j?file=demo.tsx)
### Steps to reproduce
click any RangePicker component
### What is expected?
the triangle mark should be in the correct position
### What is actually happening?
the triangle sign always seems to be on the left
| Environment | Info |
| --- | --- |
| antd | 5.21.1 |
| React | 18.3.1 |
| System | Windows 11 |
| Browser | Version 129.0.6668.60 (Official Build) (64-bit) |
To reproduce, I changed the demo to a `flex-end` layout. Normally this happens when a search form is used and the `datepicker` component is the last element in a row.

<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
| 🐛 Bug,help wanted,Inactive | low | Minor |
2,554,832,520 | pytorch | Optimize reading code generated via pytorch codegen in triton kernel | ### 🐛 Describe the bug
I ran a model with torch.compile, and I noticed that a generated Triton kernel contains code lines like these (simplified):
1. tmp0 = tl.load(in_ptr0 + xxxx)
2. tmp1 = tl.load(in_ptr1 + xxx)
3. tmp6 = tl.load(in_ptr0 +xxx)
4. tmp7 = tl.load(in_ptr1 + xxx)
5. tmp19 = tl.load(in_ptr0 + xxx)
6. tmp20 = tl.load(in_ptr1 + xxx)
7. tmp30 = tl.load(in_ptr0 +xxx)
8. tmp31 = tl.load(in_ptr1 +xxx)
and I adjusted the reading order as follows: four consecutive statements reading from the in_ptr1 addresses, then four consecutive statements reading from in_ptr0. This change brings an 8% performance improvement for this Triton op.
So could we apply this reordering as a general optimization for Triton ops generated by PyTorch's codegen?
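For illustration, the reordered pattern described above (Triton-style pseudocode; the `off_*` offsets stand in for the elided address expressions):

```
# grouped by input pointer instead of interleaved
tmp1  = tl.load(in_ptr1 + off_a)
tmp7  = tl.load(in_ptr1 + off_b)
tmp20 = tl.load(in_ptr1 + off_c)
tmp31 = tl.load(in_ptr1 + off_d)
tmp0  = tl.load(in_ptr0 + off_a)
tmp6  = tl.load(in_ptr0 + off_b)
tmp19 = tl.load(in_ptr0 + off_c)
tmp30 = tl.load(in_ptr0 + off_d)
```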
### Error logs
_No response_
### Minified repro
_No response_
### Versions
torch2.3
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | needs reproduction,triaged,oncall: pt2,module: inductor | low | Critical |
2,554,842,752 | godot | net8.0-android failed to build, assembly not found | ### Tested versions
- Reproducible in: 4.3.stable.mono.official.77dcf97d8
### System information
Godot v4.3.stable.mono - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Wed Aug 7 16:19:28 UTC 2024 - Wayland - GLES3 (Compatibility) - AMD Radeon Vega 3 Graphics (radeonsi, raven2, LLVM 18.1.8, DRM 3.57, 6.9.12-3-MANJARO) - AMD Ryzen 3 3250U with Radeon Graphics (4 Threads)
### Issue description
I am forced to use net8.0-android because I need to have SkiaSharp.NativeAssets.Android for APK.
When building APK, it emits this:

Though, if I use net8.0 it will build. But the libSkiaSharp dll won't be found while using the application [messages from logcat]:
```
09-29 13:49:30.972 9234 9354 E godot : USER ERROR: System.TypeInitializationException: The type initializer for 'SkiaSharp.SKImageInfo' threw an exception.
09-29 13:49:30.972 9234 9354 E godot : ---> System.DllNotFoundException: libSkiaSharp
09-29 13:49:30.972 9234 9354 E godot : at SkiaSharp.SKImageInfo..cctor()
09-29 13:49:30.972 9234 9354 E godot : --- End of inner exception stack trace ---
09-29 13:49:30.972 9234 9354 E godot : at SkiaSharp.Views.Godot.SKControl._Draw()
09-29 13:49:30.972 9234 9354 E godot : at Godot.CanvasItem.InvokeGodotClassMethod(godot_string_name& method, NativeVariantPtrArgs args, godot_variant& ret)
09-29 13:49:30.972 9234 9354 E godot : at Godot.Control.InvokeGodotClassMethod(godot_string_name& method, NativeVariantPtrArgs args, godot_variant& ret)
09-29 13:49:30.972 9234 9354 E godot : at SkiaSharp.Views.Godot.SKControl.InvokeGodotClassMethod(godot_string_name& method, NativeVariantPtrArgs args, godot_variant& ret)
09-29 13:49:30.972 9234 9354 E godot : at Godot.Bridge.CSharpInstanceBridge.Call(IntPtr godotObjectGCHandle, godot_string_name* method, godot_variant** args, Int32 argCount, godot_variant_call_error* refCallError, godot_variant* ret)
09-29 13:49:30.972 9234 9354 E godot : at: void Godot.NativeInterop.ExceptionUtils.LogException(System.Exception) (:0)
```
I expected that with one of these approaches I could use SkiaSharp on Android.
### Steps to reproduce
1. Clone recursively: [GitHub](https://github.com/symful/kaolin.flow-godot-skiasharp)
2. Open the Godot project in the `scene/` folder
3. Export Android
You may change the `TargetFramework` in the folder to `net8.0`, compile it, then run in your phone to test it out for the second case I mentioned above.
### Minimal reproduction project (MRP)
https://github.com/symful/kaolin.flow-godot-skiasharp (not so minimal, but I don't have time to make one) | enhancement,platform:android,topic:dotnet,topic:export | low | Critical |
2,554,847,364 | PowerToys | Key remapping issue | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I had to remap a key a few months ago because it was ghost-typing. Yesterday I did a fresh Windows installation, after which I installed PowerToys. The key binding is still active, but it's not showing in the Keyboard Manager in PowerToys, so I can't undo it.
### ✔️ Expected Behavior
The old key mapping should be shown so that I can make changes, or it should be removed automatically after uninstallation.
### ❌ Actual Behavior
I can't access the old key mapping to make any changes or completely remove it
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,554,851,115 | pytorch | DTensor does not support `nn.init.eye_` | ### 🐛 Describe the bug
Related:
* https://github.com/pytorch/pytorch/issues/130671
`nn.init.eye_` is not supported on `DTensor`s. I wonder what other [inplace `nn.init` functions](https://pytorch.org/docs/stable/nn.init.html) are not supported?
```python
# Modified from https://github.com/pytorch/pytorch/tree/main/torch/distributed/_tensor
# to run this file (i.e. dtensor_example.py):
# torchrun --standalone --nnodes=1 --nproc-per-node=1 dtensor_example.py
import os
import torch
from torch import nn
from torch.distributed._tensor import init_device_mesh, Shard, distribute_tensor
mesh = init_device_mesh("cuda", (int(os.environ["WORLD_SIZE"]),))
tensor = torch.rand((3, 3))
my_dtensor = distribute_tensor(tensor, mesh, [Shard(dim=0)])
nn.init.eye_(my_dtensor)
```
```
[rank0]: NotImplementedError: Operator aten.eye.m_out does not have a sharding strategy registered.
```
### Versions
```
torch==2.4.0
```
cc @wanchaol @tianyu-l @wz337 @XilunWu @d4l3k | triaged,module: dtensor | low | Critical |
2,554,854,190 | tauri | [feat] Add support for macos touchbar | ### Describe the problem
add support for macos touchbar.
### Describe the solution you'd like
Support touchbar.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request,platform: macOS | low | Minor |
2,554,856,196 | pytorch | Can not translate Llama model to MLIR | ### 🐛 Describe the bug
I use the following script to translate Llama-2-7b-hf to MLIR. It failed during the translation to TorchScript.
```python
from transformers import AutoTokenizer, LlamaForCausalLM
from torch_mlir import fx
import torch
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
print(f"Input shape: {inputs['input_ids'].shape}")
m = fx.export_and_import(model, torch.randn(1, 8), enable_ir_printing=True,
enable_graph_printing=True)
```
Backtrace:
```shell
Traceback (most recent call last):
File "/home/hmsjwzb/work/tinyLlama_0927/./lamaExample.py", line 15, in <module>
m = fx.export_and_import(model, torch.randn(1, 8), enable_ir_printing=True,
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch_mlir/fx.py", line 73, in export_and_import
prog = torch.export.export(f, args, kwargs, dynamic_shapes=dynamic_shapes)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/__init__.py", line 174, in export
return _export(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/_trace.py", line 946, in wrapper
raise e
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/_trace.py", line 929, in wrapper
ep = fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/exported_program.py", line 88, in wrapper
return fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/_trace.py", line 1455, in _export
aten_export_artifact = export_func(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/_trace.py", line 1060, in _strict_export
gm_torch_level = _export_to_torch_ir(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/export/_trace.py", line 512, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1350, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 421, in _fn
return fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1078, in catch_errors
return callback(frame, cache_entry, hooks, frame_state, skip=1)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 456, in _convert_frame_assert
return _compile(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_utils_internal.py", line 83, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 799, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 232, in time_wrapper
r = func(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 618, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1184, in transform_code_object
transformations(instructions, code_options)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 177, in _fn
return fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 564, in transform
tracer.run()
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2450, in run
super().run()
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 892, in run
while self.step():
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 804, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 498, in wrapper
return inner_fn(self, inst)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1511, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 742, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 408, in call_function
return tx.inline_user_function_return(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 748, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2665, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2781, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 892, in run
while self.step():
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 804, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 498, in wrapper
return inner_fn(self, inst)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1499, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 742, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 342, in call_function
return super().call_function(tx, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 294, in call_function
return super().call_function(tx, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 91, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 748, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2665, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2781, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 892, in run
while self.step():
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 804, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 498, in wrapper
return inner_fn(self, inst)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1458, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 742, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 380, in call_function
return wrap_fx_proxy(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1713, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1798, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1854, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1786, in get_fake_value
ret_val = wrap_fake_exception(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1301, in wrap_fake_exception
return fn()
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1787, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1922, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1909, in run_node
return nnmodule(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 163, in forward
return F.embedding(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/functional.py", line 2267, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1060, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1449, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1144, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1689, in _dispatch_impl
return decomposition_table[func](*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 265, in _fn
result = fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_decomp/decompositions.py", line 1179, in embedding
return weight[indices]
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1060, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1449, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1152, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1729, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_impls.py", line 150, in dispatch_to_op_implementations_dict
return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_subclasses/fake_impls.py", line 549, in index_tensor
out = meta_index_Tensor(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/_meta_registrations.py", line 2988, in meta_index_Tensor
torch._check(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/__init__.py", line 1200, in _check
_check_with(RuntimeError, cond, message)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/__init__.py", line 1183, in _check_with
raise error_type(message_evaluated)
torch._dynamo.exc.TorchRuntimeError: Failed running call_module L__self___model_embed_tokens(*(FakeTensor(..., size=(1, 8)),), **{}):
tensors used as indices must be long, int, byte or bool tensors
from user code:
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1208, in forward
outputs = self.model(
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hmsjwzb/work/selfPython/ai/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 974, in forward
inputs_embeds = self.embed_tokens(input_ids)
```
### Versions
```shell
Collecting environment information...
PyTorch version: 2.4.0.dev20240604+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.1.8 (/home/hmsjwzb/code/llvm-project/clang 443e23eed24d9533566f189ef25154263756a36d)
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.10.15+ (heads/3.10:0c5fc272175, Sep 24 2024, 11:33:24) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.4.0.dev20240604+cpu
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] triton==2.0.0
[conda] magma-cuda121 2.6.1 1 pytorch
```
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,554,879,790 | tauri | [bug] Any idea on how to fix taskbar z-order issue in windows 11? | ### Describe the bug
I implemented my application using Tauri. However, whenever I switch to a new application by clicking on the taskbar, the application window disappears. After some research, I found that this is likely due to a Windows taskbar Z-order issue, as mentioned in this [Tauri GitHub issue](https://github.com/tauri-apps/tauri/issues/7328).
I tried the suggested approach from the GitHub issue using unsafe Win32 code (see the code snippet below), but the issue still persists.
### Reproduction
```rust
fn main() {
tauri::Builder::default()
// .setup(|app| {
// let window = app.get_window("main").unwrap();
// let handle = HANDLE(null_mut());
// let mut name: Vec<u16> = wchz!("NonRudeHWND").to_vec();
// unsafe {
// SetPropW(GetForegroundWindow(), PCWSTR(name.as_mut_ptr()), handle);
// }
// Ok(())
// })
// .setup(|app| {
// Ok(())
// })
.run(tauri::generate_context!())
.expect("error while running tauri application");
}
```
and set
```json
  "windows": [
{
"decorations": false,
"skipTaskbar": true,
"alwaysOnTop": true,
"resizable": false,
"focus": true
}
```
and in js
```js
async function setupWin() {
try {
const monitor = await currentMonitor();
if(monitor) {
screenWidth = monitor.size.width;
screenHeight = monitor.size.height;
const windowWidth = 300;
const windowHeight = 50;
xoffset = 0;
yoffset = 0;
const windowX = (screenWidth - windowWidth*3.5 + xoffset);
const windowY = (screenHeight - windowHeight*5 + yoffset);
await appWindow.setSize(new LogicalSize(windowWidth, windowHeight));
await appWindow.setPosition(new LogicalPosition(windowX, windowY));
}
} catch (error) {
console.error("Error setting up window:", error);
}
}
```
so that I can use this as a personal widget sitting on the desktop taskbar area
### Expected behavior
The Tauri window should remain visible when switching to another application from the taskbar; instead it gets hidden, likely due to a taskbar Z-order issue.
I attempted the code snippet provided in the GitHub issue, but I am not experienced with Win32 unsafe code. Despite implementing it, the issue persists.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 X64
✔ WebView2: 129.0.2792.65
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.83.0-nightly (ed04567ba 2024-09-28)
✔ cargo: 1.83.0-nightly (80d82ca22 2024-09-27)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: nightly-x86_64-pc-windows-msvc (default)
- node: 20.16.0
- npm: 10.8.3
[-] Packages
- tauri [RUST]: 1.8.0
- tauri-build [RUST]: 1.5.5
- wry [RUST]: 0.24.11
- tao [RUST]: 0.16.10
- tauri-cli [RUST]: 1.6.2
- @tauri-apps/api [NPM]: 1.6.0
- @tauri-apps/cli [NPM]: 1.6.2
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../build
- devPath: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
If there’s any further guidance on how to handle the taskbar Z-order issue, or a correct implementation of Win32 calls in Tauri, it would be much appreciated!
Let me know if you need more details or if I can provide further context. | type: bug,platform: Windows,status: needs triage | low | Critical |
2,554,883,235 | ui | [bug]: Error "useTheme must be used within a ThemeProvider" doesn't throw at all. | ### Describe the bug
If we call `useTheme` outside of the `ThemeProvider`, it should throw an error saying "useTheme must be used within a ThemeProvider". Currently, no error is thrown at all.
### Affected component/components
Root of the application
### How to reproduce
1. Apply dark mode as instructed in the docs
2. Call `useTheme()` outside of the `ThemeProvider`
Expected result: it should throw an error.
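The guard the report expects can be sketched language-agnostically. Below is a minimal Python analogy using `contextvars` — the actual fix would live in the React hook, and all names here are illustrative, not taken from shadcn/ui:

```python
from contextvars import ContextVar

# No provider mounted -> the context variable holds its default, None
_theme: ContextVar = ContextVar("theme", default=None)

def use_theme():
    """Return the current theme, raising if no provider has set one."""
    value = _theme.get()
    if value is None:
        raise RuntimeError("useTheme must be used within a ThemeProvider")
    return value

def theme_provider(value, fn):
    """Run fn with a theme 'provided', mimicking a wrapping provider component."""
    token = _theme.set(value)
    try:
        return fn()
    finally:
        _theme.reset(token)
```

The point of the sketch is only that the hook checks for the sentinel default and raises, rather than silently returning it.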
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,554,886,302 | material-ui | [core] True ESM support | ### Steps to reproduce
I'm creating this issue as an umbrella to solve ESM support in Material UI v7.
What does success looks like?
- We close those issues
- [x] #35233
- [ ] #30671
- [ ] #37335
- [ ] #30525
- [ ] #35773
- [ ] #26254
- [ ] #44055
- [ ] #43980
- [ ] #44265
- [ ] #44180
- [ ] #45018
- [ ] https://github.com/dai-shi/waku/issues/428
- [ ] https://github.com/vitejs/vite/issues/12423
- Likely same root cause
- [ ] #43433
- [ ] #43242
- https://github.com/wooorm/npm-esm-vs-cjs/blob/c0a92334da4979f7614143734bbe7931d2a0dcde/data/2024-08-28.json#L2626 is no longer flagged as "faux" but "dual", see their legend description.
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: - | umbrella,breaking change,scope: code-infra | low | Major |
2,554,887,803 | flutter | [google_maps_flutter] zIndex is silently truncated on iOS | ### What package does this bug report belong to?
google_maps_flutter
### What target platforms are you seeing this bug on?
iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
[Paste file content here]
```
</details>
### Steps to reproduce
1. Copy paste the code and open it
### Expected results
The marker with the higher zIndex should be drawn above the other
### Actual results
The marker that's second in the set is drawn on top instead; the fractional zIndex values (3.7 vs 3.5) appear to be silently truncated.
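One plausible mechanism — an assumption, not confirmed against the plugin source — is that the `double` zIndex is cast to an integer on the iOS side, so both markers collapse to the same value and the draw order falls back to insertion order:

```python
# Marker zIndex values from the repro below
z_a, z_b = 3.7, 3.5

# If the native layer truncates to int, the ordering information is lost:
assert int(z_a) == int(z_b) == 3

# With the fractional part preserved, the intended order is unambiguous:
assert z_a > z_b
```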
### Code sample
<details open><summary>Code sample</summary>
```dart
class MapZIndexExample extends StatelessWidget {
const MapZIndexExample({super.key});
@override
Widget build(BuildContext context) {
return GoogleMap(
compassEnabled: false,
initialCameraPosition: const CameraPosition(
target: LatLng(37.334542, -122.009325),
zoom: 12,
),
myLocationButtonEnabled: false,
zoomControlsEnabled: false,
minMaxZoomPreference: const MinMaxZoomPreference(3, 20),
markers: {
const Marker(
markerId: MarkerId('1'),
position: LatLng(37.334542, -122.014325),
zIndex: 3.7,
),
const Marker(
markerId: MarkerId('2'),
position: LatLng(37.334542, -122.009325),
zIndex: 3.5,
),
},
);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
<img src="https://github.com/user-attachments/assets/ee8565d2-7b84-404e-87bc-96fed37f5db4" height="600">
</details>
| p: maps,package,team-ecosystem,P2,triaged-ecosystem | low | Critical |
2,554,888,953 | pytorch | Incomplete Parameter Gathering on Rank 0 with FSDP Model Saving | ### 🐛 Describe the bug
When using FSDP (Fully Sharded Data Parallel) to save a model, some parameters are not fully gathered on rank 0 and therefore not properly saved. This issue occurs specifically with the skip_connection_block component of the model.
I'm saving each module of the model using the following code:
```python
def save_model(model, model_save_path, epoch, optimizer, scaler, label_mode, rank):
"""
Save the model, optimizer, and other components' state dictionaries for FSDP.
"""
if rank == 0:
print(f"Saving the model at epoch {epoch} on rank {rank}")
# Create save directories
for component in ['encoder', 'decoder', 'skip_connection_block', 'optimizer', 'scaler']:
os.makedirs(os.path.join(model_save_path, component), exist_ok=True)
if label_mode == 'branch':
os.makedirs(os.path.join(model_save_path, 'prompt_encoder'), exist_ok=True)
print("Directories created")
# Configure FSDP state dict settings
full_state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
# Save model components
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, full_state_dict_config):
model_state_dict = model.state_dict()
encoder_state_dict = model.encoder.state_dict()
decoder_state_dict = model.decoder.state_dict()
skip_connection_block_state_dict = model.skip_connection_block.state_dict()
if rank == 0:
# debug print
for name, param in skip_connection_block_state_dict.items():
if name == 'up_sampling_block_8times.up_sample.1.weight' or name == 'up_sampling_block_4times.up_sample.1.weight':
print(f'gpu {rank} - param {name} - {param} - size {param.size()}')
torch.save(encoder_state_dict, os.path.join(model_save_path, 'encoder', f"encoder_{epoch}.pth"))
torch.save(decoder_state_dict, os.path.join(model_save_path, 'decoder', f"decoder_{epoch}.pth"))
torch.save(skip_connection_block_state_dict, os.path.join(model_save_path, 'skip_connection_block', f"skip_connection_block_{epoch}.pth"))
if label_mode == 'branch':
prompt_encoder_state_dict = model.prompt_encoder.state_dict()
torch.save(prompt_encoder_state_dict, os.path.join(model_save_path, 'prompt_encoder', f"prompt_encoder_{epoch}.pth"))
print("Model components saved")
# Save optimizer state
full_osd = FSDP.optim_state_dict(model, optimizer)
scaler_state_dict = scaler.state_dict()
if rank == 0:
torch.save(full_osd, os.path.join(model_save_path, 'optimizer', f"optimizer_{epoch}.pth"))
print("Optimizer saved")
torch.save(scaler_state_dict, os.path.join(model_save_path, 'scaler', f"scaler_{epoch}.pth"))
print("Scaler saved")
print("Model saving completed")
```
And load them using this module:
```python
def load_model(model, checkpoint):
"""
Load the model, optimizer, and other components' state dictionaries for FSDP.
"""
state_dict = checkpoint
model_dict = model.state_dict()
mismatched_keys = []
for k, v in state_dict.items():
if k in model_dict:
if v.shape != model_dict[k].shape:
print(f"Ignoring '{k}' due to shape mismatch. "
f"Checkpoint shape: {v.shape}, Model shape: {model_dict[k].shape}")
mismatched_keys.append(k)
else:
model_dict[k] = v
else:
print(f"Ignoring '{k}' as it's not in the model.")
mismatched_keys.append(k)
model.load_state_dict(model_dict, strict=False)
return model
encoder = load_model(encoder, torch.load(encoder_load_path, map_location='cpu', weights_only=True))
decoder = load_model(decoder, torch.load(decoder_load_path, map_location='cpu', weights_only=True))
skip_connection_block = load_model(skip_connection_block, torch.load(skip_connection_block_load_path, map_location='cpu', weights_only=True))
```
### Expected Behavior
All parameters of the skip_connection_block should be fully gathered on rank 0 and saved correctly.
### Actual Behavior
Some parameters, particularly those in the up_sampling_block_8times and up_sampling_block_4times, are not fully gathered on rank 0, resulting in incomplete or incorrect saving of the model state.
### Loading log
```console
Ignoring 'up_sampling_block_8times.up_sample.1.weight' due to shape mismatch. Checkpoint shape: torch.Size([12]), Model shape: torch.Size([32])
Ignoring 'up_sampling_block_8times.up_sample.1.bias' due to shape mismatch. Checkpoint shape: torch.Size([0]), Model shape: torch.Size([32])
Ignoring 'up_sampling_block_16times.up_sample.1.weight' due to shape mismatch. Checkpoint shape: torch.Size([0]), Model shape: torch.Size([16])
Ignoring 'up_sampling_block_16times.up_sample.1.bias' due to shape mismatch. Checkpoint shape: torch.Size([0]), Model shape: torch.Size([16])
```
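The observed checkpoint shapes (12 and 0) are consistent with rank 0 saving its local shard of a flattened parameter group instead of the fully gathered tensors. A back-of-the-envelope sketch under that assumption — the world size of 8 and the flattening order below are guesses, not taken from the actual FSDP wrapping:

```python
# Full sizes of the four affected BatchNorm params, in assumed flattening order
param_sizes = {
    "up_sampling_block_8times.up_sample.1.weight": 32,
    "up_sampling_block_8times.up_sample.1.bias": 32,
    "up_sampling_block_16times.up_sample.1.weight": 16,
    "up_sampling_block_16times.up_sample.1.bias": 16,
}
world_size = 8
total = sum(param_sizes.values())   # 96 elements flattened together
shard_len = total // world_size     # 12 elements per rank

# How many elements of each param land in rank 0's shard [0, shard_len)
start, rank0_view = 0, {}
for name, size in param_sizes.items():
    end = start + size
    rank0_view[name] = max(0, min(end, shard_len) - start)
    start = end

print(rank0_view)  # first weight gets 12 elements, the rest get 0
```

That reproduces exactly the mismatched shapes in the loading log, which would point at the state dict being taken before the full parameters are gathered rather than at a serialization problem.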
### skip_connection_block module
```python
class UpSamplingBlock(nn.Module):
def __init__(self, in_channels, out_channels, activation=nn.ReLU(), dropout_prob=0.1, upsample_factor=2):
super().__init__()
if upsample_factor%2 != 0:
raise ValueError('Upsample factor should be a multiple of 2')
self.activation = activation
self.dropout_prob = dropout_prob
self.up_sample = self.make_up_sample_block(in_channels,
out_channels,
kernel_size=(upsample_factor,upsample_factor,upsample_factor),
stride=(upsample_factor,upsample_factor,upsample_factor),
padding=(0,0,0))
def make_up_sample_block(self, no_channels, out_channels, kernel_size=(2,2,2), stride=(2,2,2), padding=(0,0,0)):
return nn.Sequential(
nn.ConvTranspose3d(no_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding),
nn.BatchNorm3d(out_channels),
self.activation,
nn.Dropout3d(self.dropout_prob),
)
def forward(self, x):
return self.up_sample(x)
class SkipConnectionBlock(nn.Module):
def __init__(self, input_img_size=(128, 128, 16), num_patches=512, linear_proj_dim=128, up_sampling_dim=64):
super().__init__()
assert up_sampling_dim %2 == 0, 'Upsampling dimension should be a multiple of 2'
assert up_sampling_dim>=32, 'Upsampling dimension should be greater than 32'
self.num_patches = num_patches
self.linear_proj_dim = linear_proj_dim
self.up_sampling_dim = up_sampling_dim
if self.num_patches != self.linear_proj_dim:
self.linear1 = nn.Linear(num_patches, self.linear_proj_dim)
self.linear2 = nn.Linear(num_patches, self.linear_proj_dim)
self.linear3 = nn.Linear(num_patches, self.linear_proj_dim)
self.linear4 = nn.Linear(num_patches, self.linear_proj_dim)
self.up_sampling_block_2times = UpSamplingBlock(self.linear_proj_dim, self.up_sampling_dim, upsample_factor=2)
self.up_sampling_block_4times = UpSamplingBlock(self.linear_proj_dim, int(self.up_sampling_dim/2), upsample_factor=4)
self.up_sampling_block_8times = UpSamplingBlock(self.linear_proj_dim, int(self.up_sampling_dim/4), upsample_factor=8)
self.up_sampling_block_16times = UpSamplingBlock(self.linear_proj_dim, int(self.up_sampling_dim/8), upsample_factor=16)
self.input_img_size = input_img_size
```
I'm having difficulty debugging the underlying cause of this issue, as it only affects a few parameters of one submodule while all the other parameters are saved and loaded successfully.
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.36
Python version: 3.12.5 (main, Sep 3 2024, 10:35:39) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-25-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190,192,194,196,198,200,202,204,206,208,210,212,214,216,218,220,222
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191,193,195,197,199,201,203,205,207,209,211,213,215,217,219,221,223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchio==0.19.9
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] Could not collect
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn | oncall: distributed,module: fsdp,module: distributed_checkpoint | low | Critical |
2,554,898,404 | deno | deno fmt: Indentation of script tag in component files (svelte/vue) | As of Deno deno 2.0.0-rc.6, `deno fmt --unstable-component` indents <script> tag in component files:
```diff
<script lang="ts">
-import { Button } from "$lib/components/ui/button"
+ import { Button } from "$lib/components/ui/button"
```
It is quite common in the Vue and Svelte ecosystems not to indent `<script>` and `<style>` tags in a component, to avoid the additional nesting. This is what [dprint does](https://dprint.dev/plugins/markup_fmt/config/) by default, and dprint makes it configurable per file type and tag.
I'd like to propose following the dprint defaults and, if the zen of Deno allows, exposing an option to configure it. | suggestion,deno fmt | low | Minor |
2,554,915,259 | flutter | [Google_maps_flutter][iOS]: After clustering update, location updates lost their animation | ### What package does this bug report belong to?
google_maps_flutter
### What target platforms are you seeing this bug on?
iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
[Paste file content here]
```
</details>
### Steps to reproduce
1. Checkout flutter/packages repo.
2. Build the example app at `packages/google_maps_flutter/google_maps_flutter_ios/example`
3. Go to the place marker section
4. Add a marker
5. Tap on a marker
6. Tap on "Change position"
---
Run the same app after checking out the pre-clustering tag [google_maps_flutter_ios-v2.11.0](https://github.com/flutter/packages/tree/google_maps_flutter_ios-v2.11.0)
### Expected results
- The marker when position changes should animated from its old position to new position
### Actual results
- With the older tag, it behaves as expected
- At current main, the position change animation no longer works
### Code sample
<details open><summary>Code sample</summary>
https://github.com/flutter/packages/tree/main/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
Old Tag -
<img src="https://github.com/user-attachments/assets/1278f876-e65b-44fd-8eca-0968c05d43d1" height="600">
Current Main -
<img src="https://github.com/user-attachments/assets/affcd731-22b2-4f6f-a424-b3b2160a588a" height="600">
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| c: regression,platform-ios,p: maps,package,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.26 | low | Critical |
2,554,917,594 | godot | Minimum value `0.0001` in `hintString` for `Export` property with `PropertyHint.Range` shown as `0` in Editor | ### Tested versions
- Reproducible in: `v4.3.stable.mono.arch_linux`
### System information
Godot v4.3.stable.mono unknown - Garuda Linux #1 SMP PREEMPT_DYNAMIC Fri, 20 Sep 2024 09:23:13 +0000 - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 6950 XT (RADV NAVI21) - AMD Ryzen 7 3800X 8-Core Processor (16 Threads)
### Issue description
An `Export` property in a C# Resource class with `hintString = "0.0001,1,or_greater"` and `PropertyHint.Range`, when set to the minimum allowed value (`0.0001`), is displayed in the editor as `0`, while the correct value is still displayed when hovering over the field.

When the minimum value in `hintString` is set to `0.001`, it is displayed correctly in the editor.
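One possible explanation (an assumption on my part, I have not checked the editor's number formatting code) is simple display rounding: a field that renders only three decimal places would show `0.0001` as `0`, while `0.001` survives:

```python
# If the inspector's number field renders only three decimal places (assumption),
# the stored value stays intact but the displayed text collapses to zero.
print(f"{0.0001:.3f}")  # 0.000 -> shown as "0" even though the value is correct
print(f"{0.001:.3f}")   # 0.001 -> displays correctly, matching what I observe
```

This would also explain why the tooltip on hover still shows the correct value: only the rendered text is truncated.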
### Steps to reproduce
1. Create a custom Resource class in C#.
2. Add code `[Export(PropertyHint.Range, "0.0001,1,or_greater")] public float MyValue { get; private set; }`.
3. Compile code.
4. Create a new Resource asset and assign a newly created custom Resource class as a script to it.
### Minimal reproduction project (MRP)
[firearms.zip](https://github.com/user-attachments/files/17179285/firearms.zip)
| discussion,topic:editor,usability | low | Minor |
2,554,926,351 | ui | [bug]: astro `TooltipTrigger` must be used within `Tooltip` | ### Describe the bug
When I used shadcn's tooltip component in an Astro project, I got the error: `TooltipTrigger` must be used within `Tooltip`
### Affected component/components
tooltip
### How to reproduce
1. see stackblitz code
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/withastro-astro-rkam7x?file=src%2Fpages%2Findex.astro
### Logs
_No response_
### System Info
```bash
mac chrome astro
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,554,931,762 | rust | `tests/ui/unpretty/staged-api-invalid-path-108697.rs` may fail on non English Windows installs | Ran this command on Windows 11:
```sh
./x.py t --target x86_64-pc-windows-gnullvm --host x86_64-pc-windows-gnullvm tests/ui/unpretty/staged-api-invalid-path-108697.rs
```
This exact invocation requires a prebuilt host gnullvm toolchain, but I suppose other Windows targets are also affected, although I cannot test them.
Expected test to pass, instead it failed with:
```
running 1 tests
[ui] tests\ui\unpretty\staged-api-invalid-path-108697.rs ... F
failures:
---- [ui] tests\ui\unpretty\staged-api-invalid-path-108697.rs stdout ----
$DIR\lol
$DIR\staged-api-invalid-path-108697.rs
diff of stderr:
- error: couldn't read $DIR/lol: No such file or directory (os error 2)
+ error: couldn't read $DIR/lol: Nie można odnaleźć określonego pliku. (os error 2)
2 --> $DIR/staged-api-invalid-path-108697.rs:8:1
3 |
4 LL | mod foo;
The actual stderr differed from the expected stderr.
Actual stderr saved to H:\projects\rust\build\x86_64-pc-windows-gnullvm\test\ui\unpretty\staged-api-invalid-path-108697\staged-api-invalid-path-108697.stderr
To update references, rerun the tests and pass the `--bless` flag
To only update this specific test, also pass `--test-args unpretty\staged-api-invalid-path-108697.rs`
error: 1 errors occurred comparing output.
status: exit code: 1
command: PATH="H:\projects\rust\build\x86_64-pc-windows-gnullvm\stage1\bin;H:\projects\rust\build\x86_64-pc-windows-gnullvm\stage0-bootstrap-tools\x86_64-pc-windows-gnullvm\release\deps;H:\rust\bin;C:\Users\mateusz\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\local\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\Users\mateusz\bin;C:\Program Files\Alacritty;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\libnvvp;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\dotnet;C:\Program Files\NVIDIA Corporation\Nsight Compute 2023.1.1;C:\Program Files\WezTerm;C:\Program Files\NVIDIA Corporation\NVIDIA App\NvDLISR;C:\Program Files\Git\cmd;C:\Program Files\Process Lasso;C:\Users\mateusz\.cargo\bin;C:\Users\mateusz\AppData\Local\Microsoft\WindowsApps;C:\Users\mateusz\AppData\Local\JetBrains\Toolbox\scripts;C:\Users\mateusz\AppData\Local\GitHubDesktop\bin;C:\Users\mateusz\AppData\Local\Programs\Microsoft VS Code\bin;H:\msys64\clang64\bin;C:\Users\mateusz\AppData\Local\Microsoft\WinGet\Links;C:\Program Files\Git\usr\bin\vendor_perl;C:\Program Files\Git\usr\bin\core_perl" "H:\\projects\\rust\\build\\x86_64-pc-windows-gnullvm\\stage1\\bin\\rustc.exe" "H:\\projects\\rust\\tests\\ui\\unpretty\\staged-api-invalid-path-108697.rs" "-Zthreads=1" "-Zsimulate-remapped-rust-src-base=/rustc/FAKE_PREFIX" "-Ztranslate-remapped-path-to-local-path=no" "-Z" "ignore-directory-in-diagnostics-source-blocks=C:\\Users\\mateusz\\.cargo" "-Z" "ignore-directory-in-diagnostics-source-blocks=H:\\projects\\rust\\vendor" "--sysroot" "H:\\projects\\rust\\build\\x86_64-pc-windows-gnullvm\\stage1" "--target=x86_64-pc-windows-gnullvm" "--check-cfg" "cfg(FALSE)" "--error-format" "json" "--json" 
"future-incompat" "-Ccodegen-units=1" "-Zui-testing" "-Zdeduplicate-diagnostics=no" "-Zwrite-long-types-to-disk=no" "-Cstrip=debuginfo" "--emit" "metadata" "-C" "prefer-dynamic" "--out-dir" "H:\\projects\\rust\\build\\x86_64-pc-windows-gnullvm\\test\\ui\\unpretty\\staged-api-invalid-path-108697" "-A" "unused" "-A" "internal_features" "-Crpath" "-Cdebuginfo=0" "-Lnative=H:\\projects\\rust\\build\\x86_64-pc-windows-gnullvm\\native\\rust-test-helpers" "-L" "H:\\projects\\rust\\build\\x86_64-pc-windows-gnullvm\\test\\ui\\unpretty\\staged-api-invalid-path-108697\\auxiliary" "-Zunpretty=mir"
stdout: none
--- stderr -------------------------------
error: couldn't read H:\projects\rust\tests\ui\unpretty\lol: Nie można odnaleźć określonego pliku. (os error 2)
--> H:\projects\rust\tests\ui\unpretty\staged-api-invalid-path-108697.rs:8:1
|
LL | mod foo;
| ^^^^^^^^
error: aborting due to 1 previous error
------------------------------------------
failures:
[ui] tests\ui\unpretty\staged-api-invalid-path-108697.rs
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 17699 filtered out; finished in 36.93ms
Some tests failed in compiletest suite=ui mode=ui host=x86_64-pc-windows-gnullvm target=x86_64-pc-windows-gnullvm
Build completed unsuccessfully in 0:00:08
```
The error `Nie można odnaleźć określonego pliku.` corresponds to the expected English `The system cannot find the file specified.` (the test would normalize that to `No such file or directory`), but it is displayed in the OS language (Polish). | A-testsuite,O-windows,T-compiler,C-bug | low | Critical |
2,554,936,437 | angular | Nested afterRenderEffect() should run at least once | ### Which @angular/* package(s) are relevant/related to the feature request?
platform-browser-dynamic
### Description
Hello!
First, it's a feature request, not a bug report.
If you think it should keep working as is - that's fine.
I'm creating it because I found this case after an hour of investigating why `afterRenderEffect()` does not work.
The execution path in the real code was not as simple as in the example I provided.
So, in our code, we have to load Stripe and mount one of its elements to an HTML element. If we do it during the initial rendering - everything works fine. When we tried to do it conditionally (when the user selects another payment method), `afterRenderEffect()` was not executed until something else had been changed on the page.
I found that it is because the `mount()` method was called inside `untracked()`, and, technically, `afterRenderEffect()` was called outside of a reactive context, so there was no error message. But in fact, `afterRenderEffect()` would not run.
https://stackblitz.com/edit/stackblitz-starters-yhxufr?file=src%2Fmain.ts
### Proposed solution
Please run nested `afterRenderEffect()` at least once.
### Alternatives considered
The fix was easy (although a little bit dirty), I just replaced `untracked()` usage with `setTimeout()`, but please consider running nested `afterRenderEffect()` at least once without that workaround. | area: core,core: reactivity | low | Critical |
2,554,989,870 | pytorch | [inductor] grid_sampler_2d is slower than eager in operatorbench | Repro with
```
$ python benchmarks/dynamo/microbenchmarks/operatorbench.py --op aten.grid_sampler_2d.default
aten.grid_sampler_2d.default: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.44s/it]
aten.grid_sampler_2d.default: inductor=0.5073x (0.5073-0.5073) took 3s
```
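For context, the `0.5073x` figure is a speedup relative to eager (assuming the usual operatorbench convention of eager time divided by inductor time), so values below 1.0 mean the compiled kernel is slower. A quick sanity check of what it implies:

```python
# operatorbench reports speedup = eager_time / inductor_time (assumed convention),
# so 0.5073x means the inductor kernel takes roughly twice as long as eager.
speedup = 0.5073  # from the run above
slowdown = 1.0 / speedup
print(f"inductor takes ~{slowdown:.2f}x the eager time")
```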
`--inductor-config autotune` helps a bit, but it is still slower.
This op gets decomposed here:
https://github.com/pytorch/pytorch/blob/c9653bf2ca6dd88b991d71abf836bd9a7a1d9dc3/torch/_decomp/decompositions.py#L4176-L4358
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @vfdev-5 @fdrocha @lezcano
Related #104296 | triaged,oncall: pt2,module: inductor,internal ramp-up task | low | Major |
2,554,996,962 | kubernetes | Sync.Once in client-go metrics play badly with components that want to provide them by default | In Controller-Runtime, we register our metrics provider for the client-go [leaderelection](https://github.com/kubernetes-sigs/controller-runtime/blob/4381fa0aeee43e331be14b0d70cd276e1e91ad7a/pkg/metrics/leaderelection.go#L26), [workqueue](https://github.com/kubernetes-sigs/controller-runtime/blob/4381fa0aeee43e331be14b0d70cd276e1e91ad7a/pkg/metrics/workqueue.go#L99) and [clientmetrics](https://github.com/kubernetes-sigs/controller-runtime/blob/4381fa0aeee43e331be14b0d70cd276e1e91ad7a/pkg/metrics/client_go_adapter.go#L43-L54).
This is because controller-runtime provides a metrics endpoint and we want it to by default have all the metrics relevant to controllers.
Unfortunately, all these register funcs are internally guarded by a `sync.Once`. This means that if someone wants to register their own metrics, they have to do so before someone else does. As of today, controller-runtime does this in an `init`, but even if it did it later, users would have to register theirs before controller-runtime does and make sure that no other dep registers first, otherwise it will silently not work.
IMHO, we should remove the `sync.Once` so that as many metrics providers as wanted can be registered and this doesn't become a "first one wins, rest gets nothing" kind of situation, where anyone who wants their custom adapter has to be super careful to be the first to register it to avoid it silently breaking.
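To make the complaint concrete, here is a minimal model of the `sync.Once`-guarded registration described above (illustrative Python only; the real code is Go in client-go):

```python
class OnceProviderRegistry:
    """Toy model of a sync.Once-guarded metrics provider registration."""

    def __init__(self):
        self._provider = None
        self._registered = False

    def set_provider(self, provider):
        # Like sync.Once: only the first call has any effect;
        # every later call is silently ignored, with no error reported.
        if self._registered:
            return
        self._registered = True
        self._provider = provider

    @property
    def provider(self):
        return self._provider


registry = OnceProviderRegistry()
registry.set_provider("controller-runtime")  # runs first, e.g. from an init()
registry.set_provider("user-custom")         # silently dropped
print(registry.provider)
```

The user's custom provider is discarded without any signal, which is exactly the "first one wins, rest gets nothing" behavior this issue is about.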
There was some prior discussion around this specifically in the context of workqueue metrics, where its now possible to set a per-workqueue metrics provider that takes precedence over the global one as workaround:
* https://github.com/kubernetes/kubernetes/pull/114242
In that context, a PR to allow overriding the global one was rejetected, but i think we should be doing this: https://github.com/kubernetes/kubernetes/pull/116616
There was also some Slack discussion around this: https://kubernetes.slack.com/archives/C0EG7JC6T/p1719851021075269
This originally got reported as an issue in controller-runtime: https://github.com/kubernetes-sigs/controller-runtime/issues/2957
/sig api-machinery
/kind bug | kind/bug,sig/api-machinery,sig/instrumentation,triage/accepted | medium | Critical |
2,555,000,166 | pytorch | Setting a wrong value to `bias` argument of `nn.RNN()` gets an indirect error message | ### 🐛 Describe the bug
Setting the wrong value `10` for the `bias` argument of [nn.RNN()](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html) produces the indirect error message shown below. *It also happens with the `batch_first` and `bidirectional` arguments of `nn.RNN()`:
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
torch.manual_seed(42)
# ↓↓
rnn = nn.RNN(input_size=3, hidden_size=2, bias=10)
rnn(input=my_tensor) # Error
```
```
TypeError: rnn_tanh() received an invalid combination of arguments - got (Tensor, Tensor, list, int, int, float, bool, bool, bool), but expected one of:
* (Tensor data, Tensor batch_sizes, Tensor hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional)
didn't match because some of the arguments have invalid types: (Tensor, Tensor, !list of [Parameter, Parameter, Parameter, Parameter]!, !int!, !int!, !float!, !bool!, bool, bool)
* (Tensor input, Tensor hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first)
didn't match because some of the arguments have invalid types: (Tensor, Tensor, !list of [Parameter, Parameter, Parameter, Parameter]!, !int!, int, float, bool, bool, bool)
```
So, it should directly say something like below:
> `bias` argument must be `bool`
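For illustration, a sketch of the kind of up-front check that would produce that message (`validate_flag` is a hypothetical helper, not part of the PyTorch API):

```python
def validate_flag(name, value):
    # Hypothetical validation for the bool-only RNN constructor arguments
    # `bias`, `batch_first`, and `bidirectional`.
    if not isinstance(value, bool):
        raise TypeError(f"`{name}` argument must be `bool`, got {type(value).__name__}")


validate_flag("batch_first", False)  # OK
try:
    validate_flag("bias", 10)
except TypeError as e:
    print(e)  # `bias` argument must be `bool`, got int
```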
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,555,002,260 | godot | Converted `CPUParticles3D` to `GPUParticles3D` weird particle creation behavior | ### Tested versions
4.4-dev2
### System information
Godot v4.4.dev2 - Windows 10.0.19045 - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4050 Laptop GPU (NVIDIA; 32.0.15.6070) - 13th Gen Intel(R) Core(TM) i5-13500HX (20 Threads)
### Issue description
Here's a scene falling with beautiful "snow" using CPUParticles3D

Here's what happens when the particles are converted to GPU particles.
https://github.com/user-attachments/assets/ae589387-3adf-47e8-ba8b-ebdfa230d872
I'm not sure exactly how to describe it in words, but after converting, it doesn't create the number of particles it should at the default speed scale. It only actually creates the particles at a higher speed scale; after resetting the speed scale to 1, the already-created particles fall as they should.
### Steps to reproduce
Create a CPUParticles3D such as shown above, convert it to GPUParticles3D, then see that it doesn't create as many particles as it should
### Minimal reproduction project (MRP)
[New Compressed (zipped) Folder.zip](https://github.com/user-attachments/files/17179862/New.Compressed.zipped.Folder.zip)
| bug,topic:3d,topic:particles | low | Major |
2,555,006,529 | rust | Strengthen the follow-set rule for macros | Over in:
- https://github.com/rust-lang/rust/pull/130635
@compiler-errors describes this general problem:
> The breakage specifically represents an inherent limitation to the "macro follow-set" formulation which is _supposed_ to make us more resilient against breakages due to extensions to the grammar like this.
>
> Given two macro matcher arms:
>
> * `($ty:ty) => ...`
> * `(($tt:tt)*) => ...`
>
> And given tokens like:
>
> * `&` `pin` `mut` [...more tokens may follow...]
>
> On nightly today, `&pin` gets parsed as a type. However, we run out of matchers but still have tokens left (the `mut` token is next), so we fall through to the next arm. Since it's written like `($tt:tt)*`, everything is allowed, and we match the second arm successfully...
>
> I think that's weird, because if this second arm were written like `$ty:ty mut`, that would be illegal, since `mut` is not in the follow-set of the `:ty` matcher. Thus, we can use `:tt` matchers to observe whether the compiler _actually_ parses things not in our grammar that should otherwise be protected against, which seems pretty gross.
And @Noratrieb proposes a general solution:
> I believe a solution to this would be the following new logic:
>
> * after the end of a macro matcher arm has been reached
> * and there are still input tokens remaining
> * and if the last part of the matcher is a metavar
> * ensure that the first remaining token is in the follow set of this metavar
> * if it is, move on to the next arm
> * if it is not, **emit an error**
>
> What this semantically does is strengthen the "commit to fully matching metavars or error" behavior such that it extends past the end. I don't know how many macros rely on this, but it seems like emitting an FCW (instead of error) on such macro invocations would find all these cases and ensure that the follow-set logic is actually robust past the end. But imo this shouldn't block this PR (which should probably just ship as-is) and can be done separately.
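To make the proposed logic concrete, here is a toy model (Python used as executable pseudocode; the follow set below is deliberately tiny and illustrative, not the real `:ty` follow set from the reference):

```python
# Illustrative, incomplete follow set for the `:ty` fragment specifier.
FOLLOW = {"ty": {"=>", ",", "=", "|", ";", ":", ">", "[", "{", "as", "where"}}


def after_arm_end(last_metavar, remaining):
    # Proposed logic: the arm's matcher is exhausted but input tokens remain.
    if not remaining:
        return "matched"
    if last_metavar and remaining[0] not in FOLLOW.get(last_metavar, set()):
        return "error"  # proposed FCW/error; today this silently falls through
    return "try_next_arm"


# `&pin` was parsed as `$ty:ty`, but `mut` remains and is not in FOLLOW(ty):
print(after_arm_end("ty", ["mut"]))
```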
This issue is to track the proposal for this FCW.
cc @Noratrieb @compiler-errors @eholk @rust-lang/lang
| T-lang,C-discussion,I-lang-radar | low | Critical |
2,555,006,557 | pytorch | Setting a wrong value to `dropout` argument of `nn.RNN()` gets an indirect error message | ### 🐛 Describe the bug
Setting the wrong value `0.+0.j` for the `dropout` argument of [nn.RNN()](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html) produces the indirect error message shown below. *It also happens with the `dropout` argument of [nn.LSTM()](https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html) and [nn.GRU()](https://pytorch.org/docs/stable/generated/torch.nn.GRU.html):
```python
import torch
from torch import nn
my_tensor = torch.tensor([[8., -3., 5.]])
torch.manual_seed(42)
# ↓↓↓↓↓↓
rnn = nn.RNN(input_size=3, hidden_size=2, dropout=0.+0.j) # Error
```
> TypeError: float() argument must be a string or a real number, not 'complex'
So, it should directly say something like below:
> `dropout` must be `int` or `float` between 0 <= x <= 1
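For illustration, a sketch of the kind of explicit check that would produce that message (`validate_dropout` is a hypothetical helper, not part of the PyTorch API):

```python
def validate_dropout(dropout):
    # Hypothetical validation; rejects complex (and bool) before float() chokes on it.
    if isinstance(dropout, bool) or not isinstance(dropout, (int, float)):
        raise TypeError("`dropout` must be `int` or `float` between 0 <= x <= 1")
    if not 0 <= dropout <= 1:
        raise ValueError("`dropout` must be `int` or `float` between 0 <= x <= 1")


validate_dropout(0.5)  # OK
try:
    validate_dropout(0. + 0.j)
except TypeError as e:
    print(e)  # `dropout` must be `int` or `float` between 0 <= x <= 1
```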
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: rnn,triaged | low | Critical |
2,555,010,973 | next.js | next package can't be found when using --turbo and deno 2 rc in a monorepo | ### Link to the code that reproduces this issue
https://github.com/hamlim/deno-monorepo
### To Reproduce
1. Clone the repo
2. Upgrade to deno 2 release candidate: `deno upgrade rc` (using at least `2.0.0-rc.7`)
3. Run `deno install` to install dependencies
4. Run `deno task dev --filter=docs` (runs the docs Next app in development mode)
5. Try to visit `localhost:3000`
6. See error in terminal
### Current vs. Expected behavior
Following the above steps - I'd expect the app to boot correctly, however it instead shows the following error in the terminal:
```sh
docs:dev: $ next dev --turbo
docs:dev: ▲ Next.js 15.0.0-canary.140 (turbo)
docs:dev: - Local: http://localhost:3000
docs:dev:
docs:dev: ✓ Starting...
docs:dev: [Error: Next.js package not found
docs:dev:
docs:dev: Debug info:
docs:dev: - Execution of get_entrypoints_with_issues failed
docs:dev: - Execution of Project::entrypoints failed
docs:dev: - Execution of PagesProject::to_endpoint failed
docs:dev: - Execution of PagesStructureItem::new failed
docs:dev: - Execution of FileSystemPath::join failed
docs:dev: - Execution of get_next_package failed
docs:dev: - Next.js package not found] {
docs:dev: code: 'GenericFailure'
docs:dev: }
```
Note:
Removing `--turbo` on the dev task within `apps/docs/package.json` makes it work as expected.
My assumption is that turbopack (maybe) is unable to resolve the symlinked `next` package from the root `node_modules` (next is installed somewhere else with `deno` and then symlinked _I think_)
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: 24.0.0
Available memory (MB): 24576
Available CPU cores: 8
Binaries:
Node: 20.11.1
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.140 // There is a newer canary version (15.0.0-canary.173) available, please upgrade!
eslint-config-next: N/A
react: 19.0.0-rc-7771d3a7-20240827
react-dom: 19.0.0-rc-7771d3a7-20240827
typescript: 5.4.5
Next.js Config:
output: N/A
⚠ There is a newer canary version (15.0.0-canary.173) available, please upgrade!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Module Resolution, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Will attempt to replicate with the latest canary! | bug,Turbopack,linear: turbopack,Module Resolution | low | Critical |
2,555,011,828 | PowerToys | Close all windows | ### Description of the new feature / enhancement
Sometimes you just want to have a fresh start. Things clutter up and you want to just close all windows and have a fresh screen to work with. This feature allows users to close all windows with ease.
### Scenario when this would be used?
Power users tend to have a bunch of windows open, but sometimes, like I said, it's just too much and it feels claustrophobic. Having a button to get a fresh start will definitely help everyone
### Supporting information
This could be a tray icon that you can click and select "Close all windows" or "Force close all windows" (Much like PowerToys Awake) | Needs-Triage | low | Minor |
2,555,021,181 | TypeScript | Find All References fails for CommonJS named exports in checkJs Mode | ### 🔎 Search Terms
"export", "references", "commonjs", "checkJs", "allowJs",
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
This bugs reproduces in typescript@3,4,5
### ⏯ Playground Link
_No response_
### 💻 Code
Request find all references for the symbol `f` in `f.js`
```js
// f.js:
function f() {}
module.exports = {
f,
};
// main.js
const { f } = require("./f");
f();
```
Reproduction repo: https://github.com/golopot/testcommonjs
### 🙁 Actual behavior
Doesn't find the reference from the dependent `./main.js`.
### 🙂 Expected behavior
Do find the reference from the dependent `./main.js`.
### Additional information about the issue
```json
{
"compilerOptions": {
"target": "es2016",
"module": "commonjs",
"allowJs": true,
"checkJs": true,
"noEmit": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"strict": false,
"skipLibCheck": true
}
}
``` | Bug,Help Wanted | low | Critical |
2,555,031,960 | vscode | Ability to toggle panel | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
I am working on an extension and would love the ability to focus on my panel as well as "unfocus" my panel. I can use the VSCode provided `<panel name>.focus` command to open and focus on my extensions panel, but I can't then close that panel.
The use case: I'd like to provide a command that, when run, will toggle the panel. This is useful when the UI is helpful for short interactions.
A possible implementation of this command would follow the steps of:
- Is the panel currently focused?
- No:
- Open the appropriate sidebar, if necessary, and focus on the panel
- Yes:
- Is there a previous panel in the history stack?
- Yes: Focus on that panel
- No: Close the sidebar OR go to a default panel
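That flow, written out as plain decision logic (Python as executable pseudocode; the real hypothetical `<panel name>.toggle` command would of course use the VS Code extension API):

```python
def toggle_panel(panel_focused, history, default_panel=None):
    """Return the action a hypothetical `<panel name>.toggle` command would take."""
    if not panel_focused:
        # Open the appropriate sidebar, if necessary, and focus on the panel.
        return "focus_panel"
    if history:
        # Focus on the previous panel in the history stack.
        return f"focus:{history[-1]}"
    # No history: close the sidebar, or go to a default panel if one is set.
    return f"focus:{default_panel}" if default_panel else "close_sidebar"


print(toggle_panel(False, []))           # focus_panel
print(toggle_panel(True, ["terminal"]))  # focus:terminal
print(toggle_panel(True, []))            # close_sidebar
```

The two missing APIs noted below are what makes `panel_focused` and `history` impossible to compute from an extension today.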
This feature request could be achieved in two ways:
1. Provide lower-level APIs, allowing developers to implement the above flow. The missing APIs, from what I can tell, are:
- An ability to tell which sidebar a panel is in
- An ability to determine the history of panels in a sidebar (i.e. identify which panel to focus on)
2. Offer a higher-level command similar to `<panel name>.focus` (for example, a `<panel name>.toggle` command)
| feature-request,api,layout | medium | Major |
2,555,043,128 | rust | rustdoc: methods on type aliases do not properly link to themselves | example: https://docs.rs/oauth2/5.0.0-alpha.1/oauth2/basic/type.BasicClient.html#method.set_token_uri
broken html:
```html
<section id="method.set_token_uri" class="method"><a class="src rightside" href="../../src/oauth2/client.rs.html#348-376">source</a><h4 class="code-header">pub fn <a href="../../oauth2/struct.Client.html#tymethod.set_token_uri" class="fn">set_token_uri</a>(
self,
token_url: <a class="struct" href="../../oauth2/struct.TokenUrl.html" title="struct oauth2::TokenUrl">TokenUrl</a>
) -> <a class="struct" href="../../oauth2/struct.Client.html" title="struct oauth2::Client">Client</a><TE, TR, TT, TIR, RT, TRE, HAS_AUTH_URL, HAS_DEVICE_AUTH_URL, HAS_INTROSPECTION_URL, HAS_REVOCATION_URL, true></h4></section>
```
note that the id is `method.set_token_uri` but the href uses `tymethod` (and also links to the page for the base type, not the page for the alias). | T-rustdoc,C-bug,A-rustdoc-ui | low | Critical |
2,555,046,163 | realworld | [Bug]: 404 page | ### Relevant scope
Frontend specs
### Description
Getting page 404 not found for this URL
https://realworld-docs.netlify.app/specifications/backend/api-response-format.md#users-for-authentication
I got this URL from this page https://realworld-docs.netlify.app/specifications/backend/endpoints/
When I click on the User anchor tag, I get redirected to a page that shows 404.


| bug | low | Critical |
2,555,047,522 | ui | [bug]: Electron vite init | ### Describe the bug
I'm trying to install shadcn on [electron vite](https://electron-vite.org/) but when I use `bunx --bun shadcn@latest init` I get Verifying framework error
```
✔ Preflight checks.
✖ Verifying framework.
We could not detect a supported framework at I:\Nodejs\test-app2.
Visit https://ui.shadcn.com/docs/installation/manual to manually configure your project.
Once configured, you can use the cli to add components.
```
I know there are other modules for Electron, Vite and shadcn, but I want to use the official ones
When I use shadcn CLI 0.8.0 I can install successfully, but not with the latest version
### Affected component/components
None
### How to reproduce
1. init electron vite project
2. install tailwind
3. try to init shadcn using bunx
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
✔ Preflight checks.
✖ Verifying framework.
We could not detect a supported framework at I:\Nodejs\test-app2.
Visit https://ui.shadcn.com/docs/installation/manual to manually configure your project.
Once configured, you can use the cli to add components.
```
### System Info
```bash
Windows 10
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,555,094,151 | godot | Texture2DRD thumbnail preview fails to generate | ### Tested versions
- 4.3 release version
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 with Max-Q Design (NVIDIA; 31.0.15.3161) - Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 Threads)
### Issue description
I am using Texture2DRD resource as base class for a custom resource for procedural textures.
I have previously written this system using:
- ImageTexture - rather fast (1 - 30 ms) with added utility of images but unfortunately also constantly change their internal IDs thus making their use with git horrible.
- PortableCompressedTexture2D - doesn't change it's IDs - but changing the image is slow (10-100 ms)
- Texture2DRD - my current attempt, the holy grail in the sense that they don't store any Image data on disk (no huge data arrays OR constantly changing IDs) and are fast enough that not saving the data is actually reasonable (0.1-1 ms). This makes them rather like a NoiseTexture in how they are saved to disk with just their parameters.
I have a nice little compute shader pipeline for generating procedural textures and want to extend this system.
However, using Texture2DRD has a couple of issues: I am struggling with formatting (see below), I haven't figured out how to create the mipmaps in the RenderingDevice yet, and I have found a bug in this resource: errors appear in the console whenever the resource is edited and the project is then saved (reproduced by the GDScript below).
The thumbnail for Texture2DRD fails to generate and an error appears in the console:
```
Expected Image data size of 8x8x12 (RGBFloat without mipmaps) = 768 bytes, got 1024 bytes instead.
servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:1302 - Condition "image->is_empty()" is true. Returning: Ref<Image>()
```
I believe these errors come from the thumbnail attempting to generate.
You can see in the screenshots that the mesh gets the texture's colour applied correctly - but the thumbnail fails to generate.
Note that even though the image RID I am attaching is RGBAFloat, the error always complains about RGBFloat - I suspect this is the actual issue, i.e. the size is wrong because the reader is expecting the wrong format.
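The byte counts in the error are consistent with that suspicion: 768 bytes is exactly an 8×8 image with 3 float channels, while 1024 bytes is 8×8 with 4 float channels - i.e. the data is RGBAFloat but the reader treats it as RGBFloat. A quick sanity check (my own arithmetic, not from the engine):

```python
# Sanity-check the byte counts from the error message.
width, height = 8, 8
bytes_per_float = 4

rgbf_size = width * height * 3 * bytes_per_float   # RGBFloat: what the thumbnailer expects
rgbaf_size = width * height * 4 * bytes_per_float  # RGBAFloat: what the RD texture holds

print(rgbf_size, rgbaf_size)  # 768 1024
```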
I also have another issue: the format DATA_FORMAT_R32G32B32A32_SFLOAT and its equivalent image format FORMAT_RGBAF look very different from FORMAT_RGBA8 when applied to a material, and I apparently cannot use FORMAT_RGBA8 on the rendering device because it doesn't accept the usage bits, so I cannot use it for my procedural texturing. I suspect there is something I need to do to the FORMAT_RGBAF data to make it appear the same as FORMAT_RGBA8, but I'm not sure what, so any advice would be welcome.
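If it helps, here is a sketch of the kind of conversion I have been experimenting with - reading the float texture back and re-encoding it. The `linear_to_srgb()` call (added to Image in 4.3) is a guess on my part at the missing step, not something I have confirmed fixes the difference:

```gdscript
# Hypothetical: read the float texture back and re-encode it before display.
var bytes := rd.texture_get_data(texture_rd_rid, 0)
var img := Image.create_from_data(8, 8, false, Image.FORMAT_RGBAF, bytes)
img.linear_to_srgb()  # guess: float data is linear, RGBA8 is sampled as sRGB
img.convert(Image.FORMAT_RGBA8)
```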


### Steps to reproduce
Create a script for a resource inheriting from Texture2DRD, save it in the filesystem, and assign it an RID - you get the above errors.
The GDScript in the MRP will do this.
### Minimal reproduction project (MRP)
```gdscript
@tool
class_name TextureRD
extends Texture2DRD

static var rd := RenderingServer.get_rendering_device()

## Build this image
@export var build := false:
    set(v):
        if v:
            _build()


func _build() -> void:
    var format := RDTextureFormat.new()
    format.width = 8
    format.height = 8
    format.usage_bits = (
        RenderingDevice.TEXTURE_USAGE_CAN_COPY_FROM_BIT
        | RenderingDevice.TEXTURE_USAGE_CAN_UPDATE_BIT
        | RenderingDevice.TEXTURE_USAGE_STORAGE_BIT
        | RenderingDevice.TEXTURE_USAGE_SAMPLING_BIT
        | RenderingDevice.TEXTURE_USAGE_CAN_COPY_TO_BIT
    )
    format.format = rd.DATA_FORMAT_R32G32B32A32_SFLOAT
    var view := RDTextureView.new()
    var texture_rid := rd.texture_create(format, view, [])
    rd.texture_clear(texture_rid, Color.AQUA, 0, 1, 0, 1)
    texture_rd_rid = texture_rid
``` | bug,topic:rendering,topic:editor | low | Critical |
2,555,106,236 | neovim | multigrid UI: wrong command line position when multigrid UI reconnects | ### Problem
When a multigrid UI reconnects, the command line is at the top of the screen and invisible.
### Steps to reproduce
1. Start a server `nvim --headless --listen /tmp/nvim.socket`
2. Connect with a multigrid UI like Neovide `neovide --server /tmp/nvim.socket`
3. Disconnect from the server `:call chanclose(nvim_list_uis()[0].chan)`
4. Reconnect `neovide --server /tmp/nvim.socket`
5. Press `:` to enter command mode, and type something
Observe that the cursor is at the top and the typed text is invisible. The completion menu, on the other hand, is in the correct place.

### Expected behavior
The UI should work like normal.
### Nvim version (nvim -v)
0.10.1 and latest master
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Arch Linux
### Terminal name/version
Neovide 0.13.3
### $TERM environment variable
N/A
### Installation
pacman | bug,ui,has:workaround,ui-extensibility | low | Minor |