id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,534,389,516 | go | hash: add XOF interface | ## Background
An extendable output function (XOF) is a hash function with arbitrary or unlimited output length. XOFs are useful for tasks like key derivation, random number generation, and even encryption.
We have two (or rather three) XOFs in x/crypto already: SHAKE in x/crypto/sha3 and BLAKE2X in x/crypto/blake2b and x/crypto/blake2s. Among third-party modules, at least KangarooTwelve in github.com/cloudflare/circl/xof/k12 and BLAKE3 in lukechampine.com/blake3 and github.com/zeebo/blake3 see some use.
The SHAKE XOFs return a ShakeHash interface.
```
type ShakeHash interface {
	hash.Hash

	// Read reads more output from the hash; reading affects the hash's
	// state. (ShakeHash.Read is thus very different from Hash.Sum)
	// It never returns an error, but subsequent calls to Write or Sum
	// will panic.
	io.Reader

	// Clone returns a copy of the ShakeHash in its current state.
	Clone() ShakeHash
}
```
The BLAKE2X XOFs return a blake2[bs].XOF interface.
```
type XOF interface {
	// Write absorbs more data into the hash's state. It may panic if called
	// after Read.
	io.Writer

	// Read reads more output from the hash. It returns io.EOF if the limit
	// has been reached.
	io.Reader

	// Clone returns a copy of the XOF in its current state.
	Clone() XOF

	// Reset resets the XOF to its initial state.
	Reset()
}
```
## Proposal
> [!IMPORTANT]
> Current proposal at https://github.com/golang/go/issues/69518#issuecomment-2429048538.
Having a standard library interface for XOFs would help prevent fragmentation and help build modular higher-level implementations (although deployments should generally select one concrete implementation).
```
package hash

type XOF interface {
	// Write absorbs more data into the XOF's state. It panics if called
	// after Read.
	io.Writer

	// Read reads more output from the XOF. It may return io.EOF if there
	// is a limit to the XOF output length.
	io.Reader

	// Reset resets the XOF to its initial state.
	Reset()
}
```
### Notes
The proposed interface is a subset of the two existing ones, so values from those packages can be reused. It is also compatible with the K12 implementation. https://go.dev/play/p/AtvfO8Tkbgp
Sum and Size (from ShakeHash) are not included because XOFs don't necessarily have a "default" output size. BlockSize might be useful but depends on the implementation anyway, and is not worth breaking compatibility with blake2[bs].XOF.
Clone is not included because the existing interfaces return an interface type from it. (Maybe this would have been doable with generics if x/crypto/sha3 and x/crypto/blake2[bs] returned concrete implementations rather than interfaces, but we don't want to make every use of hash.XOF generic anyway.) I will file a separate proposal to add hash.Clone and hash.CloneXOF as helper functions.
Note however that the BLAKE3 implementations differ in that they return the Reader from a method on the Writer. This is probably to allow interleaving Write and Read calls.
```
h := blake3.New()
h.Write([]byte("foo"))
d := h.Digest()
h.Write([]byte("bar"))
d.Read(...) // won't include bar
```
As long as we add hash.CloneXOF or expose Clone on the underlying XOF implementations (which both ShakeHash and blake2[bs].XOF do), cloning can be used to the same effect (with a little less compile-time safety).
```
h := spiffyxof.New()
h.Write([]byte("foo"))
d := h.Clone()
h.Write([]byte("bar"))
d.Read(...)
// careful not to call d.Write
```
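For concreteness, the clone-based interleaving pattern above can be exercised end-to-end against the proposed interface shape. Everything here is invented for illustration: `toyXOF` is a trivial stand-in (its "output" is just the absorbed bytes repeated), not a real XOF construction; the point is only that a type with a concrete `Clone` method satisfies the proposed subset interface and that a pre-Write snapshot reads the old output.

```go
package main

import (
	"fmt"
	"io"
)

// XOF mirrors the proposed hash.XOF interface.
type XOF interface {
	io.Writer
	io.Reader
	Reset()
}

// toyXOF is an invented stand-in, not a real construction: its "output"
// is the absorbed bytes repeated, which is enough to demonstrate the
// Write/Read/Reset shape and clone-based interleaving.
type toyXOF struct {
	absorbed []byte
	pos      int
	reading  bool
}

func (t *toyXOF) Write(p []byte) (int, error) {
	if t.reading {
		panic("Write after Read")
	}
	t.absorbed = append(t.absorbed, p...)
	return len(p), nil
}

func (t *toyXOF) Read(p []byte) (int, error) {
	t.reading = true
	if len(t.absorbed) == 0 {
		return 0, io.EOF
	}
	for i := range p {
		p[i] = t.absorbed[(t.pos+i)%len(t.absorbed)]
	}
	t.pos += len(p)
	return len(p), nil
}

func (t *toyXOF) Reset() { *t = toyXOF{} }

// Clone returns a copy in the current state, as ShakeHash and
// blake2[bs].XOF already allow.
func (t *toyXOF) Clone() *toyXOF {
	c := *t
	c.absorbed = append([]byte(nil), t.absorbed...)
	return &c
}

func main() {
	h := &toyXOF{}
	h.Write([]byte("foo"))
	d := h.Clone() // snapshot before absorbing more
	h.Write([]byte("bar"))

	var x XOF = d // the concrete clone satisfies the proposed interface
	out := make([]byte, 3)
	x.Read(out)
	fmt.Printf("%s\n", out) // prints "foo": the snapshot never saw "bar"
}
```

As in the `spiffyxof` sketch, nothing stops a caller from writing to `d` after the clone; the compile-time Writer/Reader split that BLAKE3's `Digest()` gives is traded for a plain copy.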
/cc @golang/security @cpu | Proposal,Proposal-Accepted,Proposal-Crypto | medium | Critical |
2,534,393,552 | go | x/vuln: improve documentation | We should improve documentation on govulncheck:
- make it clear what the main benefits of govulncheck are, i.e., why people should use it
- explain how and why govulncheck can generate false positives and how to deal with them
- explain streaming JSON and OpenVex output as means of govulncheck integrations
- if applicable: how to filter vulnerabilities | Documentation,vulncheck or vulndb | low | Minor |
2,534,397,792 | vscode | [themes] support semantic colorization for 'excludedCode' | Type: <b>Bug</b>
## Issue Description ##
It's very hard to determine if a code block inside a C# preprocessor directive is compiled or not when using a default theme:

## Steps to Reproduce ##
- Enable a default theme like Dark Modern
- Type some C# code inside an undefined preprocessor directive
- Observe the colors
## Expected Behavior ##
Code is dimmed
## Actual Behavior ##
Code isn't dimmed
## Logs ##
https://github.com/user-attachments/assets/996361b2-76b6-42ee-8f87-cf5052cd6f7c
## Remarks ##
- This issue is moved from [vscode-csharp](https://github.com/dotnet/vscode-csharp/issues/7573) because it was reported that it must be resolved by vscode
- Visual Studio 2019 Light and Visual Studio 2019 Dark themes don't have this issue
## Environment information ##
**VSCode version**: 1.93.1
**C# Extension**: 2.45.25
**Using OmniSharp**: false
<details><summary>Dotnet Information</summary>
.NET SDK:
Version: 8.0.400
Commit: 36fe6dda56
Workload version: 8.0.400-manifests.6c274a57
MSBuild version: 17.11.3+0c8610977
Runtime Environment:
OS Name: Windows
OS Version: 10.0.19045
OS Platform: Windows
RID: win-x64
Base Path: C:\Program Files\dotnet\sdk\8.0.400\
.NET workloads installed:
Configured to use loose manifests when installing new manifests.
There are no installed workloads to display.
Host:
Version: 8.0.8
Architecture: x64
Commit: 08338fcaa5
.NET SDKs installed:
8.0.400 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 6.0.33 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 7.0.20 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 8.0.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.33 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 7.0.20 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 6.0.33 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 7.0.20 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.8 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Other architectures found:
x86 [C:\Program Files (x86)\dotnet]
registered at [HKLM\SOFTWARE\dotnet\Setup\InstalledVersions\x86\InstallLocation]
Environment variables:
Not set
global.json file:
Not found
Learn more:
https://aka.ms/dotnet/info
Download .NET:
https://aka.ms/dotnet/download
</details>
<details><summary>Visual Studio Code Extensions</summary>
|Extension|Author|Version|Folder Name|
|---|---|---|---|
|bracket-peek|jomeinaster|1.4.4|jomeinaster.bracket-peek-1.4.4|
|csdevkit|ms-dotnettools|1.10.18|ms-dotnettools.csdevkit-1.10.18-win32-x64|
|csharp|ms-dotnettools|2.45.25|ms-dotnettools.csharp-2.45.25-win32-x64|
|gitlens|eamodio|15.5.1|eamodio.gitlens-15.5.1|
|shaderlabvscodefree|amlovey|1.3.6|amlovey.shaderlabvscodefree-1.3.6|
|unity-toolbox|pixl|100.0.3|pixl.unity-toolbox-100.0.3|
|vscode-dotnet-runtime|ms-dotnettools|2.1.5|ms-dotnettools.vscode-dotnet-runtime-2.1.5|
|vstuc|visualstudiotoolsforunity|1.0.3|visualstudiotoolsforunity.vstuc-1.0.3|
</details>
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-10875H CPU @ 2.30GHz (16 x 2304)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.84GB (18.74GB free)|
|Process Argv|--crash-reporter-id c50731a0-e4b0-451e-8660-05259dfe0822|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (8)</summary>
Extension|Author (truncated)|Version
---|---|---
shaderlabvscodefree|aml|1.3.6
gitlens|eam|15.5.1
bracket-peek|jom|1.4.4
csdevkit|ms-|1.10.18
csharp|ms-|2.45.25
vscode-dotnet-runtime|ms-|2.1.5
unity-toolbox|pix|100.0.3
vstuc|vis|1.0.4
</details>
<!-- generated by issue reporter --> | feature-request,themes | low | Critical |
2,534,427,791 | vscode | Add dual page visualization for the source code editor | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Similar to the "dual pages" mode of most PDF viewers, I'd like to read a source file in two automatically synced columns, where the second column starts at the line after the end of the first column. This is helpful when code is formatted as lines of 80 characters, since two columns fit easily side by side.
| feature-request,editor-core | low | Minor |
2,534,428,278 | godot | PackedInt32Array handling in Editor is broken, SoftBody almost unusable. | ### Tested versions
Reproducible: 4.3 mono stable, 4.4.dev2 mono
Working: 4.1 mono
### System information
Windows 7 - Godot 4.3 stable mono
### Issue description
Editing a PackedInt32Array from the editor (SoftBody3D pinned points in this example) is fundamentally broken: changing any value other than the last removes the last entry. Clicking on the pinned points does nothing. This makes SoftBody barely usable.
Am I the only one having these issues, or has this gone unnoticed somehow?
### Steps to reproduce
1. Make a SoftBody3D with 3 pinned points.
2. Change the first or second point's index.
OR
3. Click on any point in the 3D view to add it to the array - does not work.
### Minimal reproduction project (MRP)
[softbodyisbroken.zip](https://github.com/user-attachments/files/17048765/softbodyisbroken.zip)
| bug,topic:editor | low | Critical |
2,534,451,426 | godot | Exported PackedArrays are consistently reset in open scenes on script reload | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22621 - Vulkan (Forward+) - dedicated AMD Radeon RX 5700 XT (Advanced Micro Devices, Inc.; 31.0.24031.5001) - AMD Ryzen 9 3950X 16-Core Processor (32 Threads)
### Issue description
If you save/touch a script that exports a PackedArray (I have tested Float32/64, Int32/64 and Color), then on script reload the editor silently resets the array to the default value given in the script in any open scene.
I have tested and this does not occur with regular `Array[float]` or other `Array`s.
If the undo history has modifications to the exported array, you can go back to that and then back to the present to restore it.
Minimum reproduction script:
```
class_name ArrayTester extends Node
@export var arr : PackedFloat64Array = [1, 2, 3]
```
### Steps to reproduce
1. Create an empty project, or if using the MRP skip to step 4
2. Insert a node in the scene
3. Attach the above script to the node
4. Touch or save the script file to cause a reload (no need to make any actual modifications)
5. Focus the editor window
### Minimal reproduction project (MRP)
[repro_97156.zip](https://github.com/user-attachments/files/17077699/repro_97156.zip) | bug,topic:editor,confirmed | low | Minor |
2,534,457,817 | PowerToys | File explorer add-ons PDF viewer not working | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
File Explorer: Preview Pane
### Steps to reproduce

open file explorer, select PDF file
### ✔️ Expected Behavior
Preview of PDF file
### ❌ Actual Behavior
Error message: this file can not be previewed
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,534,482,895 | go | hash: add Clone | There isn't a general way to clone the state of a hash.Hash, but #20573 introduced the concept of hash.Hash implementations also implementing [encoding.BinaryMarshaler](https://pkg.go.dev/encoding#BinaryMarshaler) and [encoding.BinaryUnmarshaler](https://pkg.go.dev/encoding#BinaryUnmarshaler), and the hash.Hash docs commit our implementations to doing that.
> Hash implementations in the standard library (e.g. [hash/crc32](https://pkg.go.dev/hash/crc32) and [crypto/sha256](https://pkg.go.dev/crypto/sha256)) implement the [encoding.BinaryMarshaler](https://pkg.go.dev/encoding#BinaryMarshaler) and [encoding.BinaryUnmarshaler](https://pkg.go.dev/encoding#BinaryUnmarshaler) interfaces.
That allows cloning the hash state without recomputing it, as done in HMAC.
https://github.com/golang/go/blob/db40d1a4c434e319e70af87ef1024a211a9e5a98/src/crypto/hmac/hmac.go#L96-L103
However, it's obscure and pretty clunky to use.
I propose we add a `hash.Clone` helper function.
```
package hash
// Clone returns a separate Hash instance with the same state as h.
//
// h must implement encoding.BinaryMarshaler and encoding.BinaryUnmarshaler,
// or be provided by the Go standard library. Otherwise, Clone returns an error.
func Clone(h Hash) (Hash, error)
```
In practice, we should only fall back to BinaryMarshaler + BinaryUnmarshaler for the general case, while for standard library implementations we can do an undocumented interface upgrade to `interface { Clone() Hash }`. In that sense, `hash.Clone` is a way to hide the interface upgrade behind a more discoverable and easier-to-use function.
(Yet another example of why we should be returning concrete types everywhere rather than interfaces.)
### CloneXOF
If #69518 is accepted, I propose we also add hash.CloneXOF.
```
package hash
// CloneXOF returns a separate XOF instance with the same state as h.
//
// h must implement encoding.BinaryMarshaler and encoding.BinaryUnmarshaler,
// or be provided by the Go standard library or by the golang.org/x/crypto module
// (starting at version v0.x.y). Otherwise, CloneXOF returns an error.
func CloneXOF(h XOF) (XOF, error)
```
None of our XOFs actually implement BinaryMarshaler + BinaryUnmarshaler, but they have their own interface methods `Clone() ShakeHash` and `Clone() XOF` that each return an interface. I can't really think of a way to use them from CloneXOF, so instead we can add hidden methods `CloneXOF() hash.XOF` and interface upgrade to them.
As we look at moving packages from x/crypto to the standard library (#65269) we should switch x/crypto/sha3 and x/crypto/blake2[bs] from returning interfaces to returning concrete types, at least for XOFs. Then they can have a `Clone()` method that returns a concrete type, and a `CloneXOF()` method that returns a hash.XOF interface and enables `hash.CloneXOF`.
(If anyone has better ideas for how to make this less redundant, I would welcome them. I considered and rejected using reflect to call the existing Clone methods because hash is a pretty core package. This sort of interface-method-that-needs-to-return-a-value-implementing-said-interface scenarios are always annoying.)
/cc @golang/security @cpu @qmuntal (who filed something similar in #69293, as I found while searching refs for this) | Proposal,Proposal-Accepted,Proposal-Crypto | medium | Critical |
2,534,512,881 | pytorch | Failure of iOS Build Test: Build (default, 1, 1, macos-14-xlarge, SIMULATOR, arm64) | > NOTE: Remember to label this issue with "`ci: sev`"
## Current Status
Fail with every PR with label https://github.com/pytorch/pytorch/labels/ciflow%2Fperiodic
## Error looks like
failure occurs due to linker issue on macos SIMULATOR
```
Undefined symbols for architecture arm64:
"torch::jit::mobile::Module::find_method(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) const", referenced from:
-[TestAppTests runModel:] in TestLiteInterpreter.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
## Incident timeline (all times pacific)
This issue has been observed since last week and is still ongoing
## User impact
Blocks testing / merging
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged,module: ios | low | Critical |
2,534,531,841 | go | x/tools/gopls: long initial workspace load durations for workspace with many packages | ### gopls version
```
Build info
----------
golang.org/x/tools/gopls (devel)
golang.org/x/tools/gopls@(devel)
github.com/BurntSushi/toml@v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/google/go-cmp@v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/typeparams@v0.0.0-20221212164502-fae10dda9338 h1:2O2DON6y3XMJiQRAS1UWU+54aec2uopH3x7MAiqGW6Y=
golang.org/x/mod@v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0=
golang.org/x/sync@v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/telemetry@v0.0.0-20240829154258-f29ab539cc98 h1:Wm3cG5X6sZ0RSVRc/H1/sciC4AT6HAKgLCSH2lbpR/c=
golang.org/x/text@v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224=
golang.org/x/tools@v0.21.1-0.20240508182429-e35e4ccd0d2d => ../
golang.org/x/vuln@v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/tools@v0.4.7 h1:9MDAWxMoSnB6QoSqiVr7P5mtkT9pOc1kSxchzPCnqJs=
mvdan.cc/gofumpt@v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU=
mvdan.cc/xurls/v2@v2.5.0 h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.1
```
### go env
```shell
$ bin/go env
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/rma/Library/Caches/go-build'
GOENV='/Users/rma/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/rma/go/pkg/mod'
GONOPROXY=<REDACTED>
GONOSUMDB=<REDACTED>
GOOS='darwin'
GOPATH='/Users/rma/go'
GOPRIVATE=<REDACTED>
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/rma/.cache/gocode/sdk/1.22'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/rma/.cache/gocode/sdk/1.22/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/rma/stripe/gocode/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/wc/_g6hd_8d3dnb87c04x67w0l40000gn/T/go-build681359692=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
My organization within the company is responsible for maintaining a large Go monorepo (~20k packages). In this monorepo, all packages fall under a single Go module.
For the most part, we run a fairly off-the-shelf gopls setup without any customization of note. This is what our stats look like (from `bea7373d8a8268c2e3a260c1b8d41f96c4f7489e`):
```
$ gopls stats -anon
Initializing workspace... done (3m26.109285333s)
Gathering bug reports... done (1.781858458s)
Querying memstats... done (1.284304292s)
Querying workspace stats... done (701.076375ms)
Collecting directory info... done (7.958301292s)
{
"DirStats": {
"Files": 288174,
"TestdataFiles": 8243,
"GoFiles": 77688,
"ModFiles": 20,
"Dirs": 64532
},
"GOARCH": "arm64",
"GOOS": "darwin",
"GOPACKAGESDRIVER": "",
"GOPLSCACHE": "",
"GoVersion": "go1.23.1",
"GoplsVersion": "(devel)",
"InitialWorkspaceLoadDuration": "3m26.109285333s",
"MemStats": {
"HeapAlloc": 2970803424,
"HeapInUse": 4669145088,
"TotalAlloc": 36820682064
},
"WorkspaceStats": {
"Files": {
"Total": 72996,
"Largest": 9189430,
"Errs": 0
},
"Views": [
{
"GoCommandVersion": "go1.22.0",
"AllPackages": {
"Packages": 23170,
"LargestPackage": 624,
"CompiledGoFiles": 89502,
"Modules": 1031
},
"WorkspacePackages": {
"Packages": 18127,
"LargestPackage": 354,
"CompiledGoFiles": 61173,
"Modules": 1
},
"Diagnostics": 95
}
]
}
}
```
### What did you see happen?
The experience of using gopls is good (i.e. rpcs like jump-to-definition, refactor symbol, etc. are fast) once initial workspace load has occurred, but initial workspace loads are a poor experience for our users. This poor experience stems from two behaviors:
Firstly, initial workspace loads (IWLs) are slow. This is partly due to the performance of the expensive upfront `go list` call itself:
```
$ time go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,ForTest,DepsErrors,Module,EmbedFiles -compiled=true -test=true -export=false -deps=true -find=false -pgo=off -- /Users/rma/stripe/gocode/... builtin
[ ...A bunch of redacted `go list` output... ]
go list -e -compiled=true -test=true -export=false -deps=true -find=false - 31.41s user 48.00s system 120% cpu 1:06.01 total
```
…but the majority of the time is spent on calling `packages.Load` on all of the packages that `go list` returns.
Secondly, because [IWLs are blocking](https://github.com/golang/tools/blob/bea7373d8a8268c2e3a260c1b8d41f96c4f7489e/gopls/internal/cache/snapshot.go#L1331-L1333) and VSCode waits on code actions before the IDE is able to save files, users are unable to save files/exit their IDEs/etc:
> ```Saving 'main.go': Getting code actions from ''Go', 'ESLint', 'GitHub Copilot Chat'' (configure).```
…while IWL is incomplete. This is partially a corollary of the first issue. While it is true that making code actions non-blocking for saves would fix this UX issue, if IWL were fast, this UX issue would not be perceived by users in the first place.
We have dipped our toes in trying to patch this behavior. For example, an early attempt at hacking around this behavior (on a `v1.16.1` base) [no-ops the initialization routine altogether](https://gist.github.com/rma-stripe/c9c5573025209389678af3a9755137e6), relying solely on gopls loading in package data as files are manually opened by the user. As expected, this hack has some large tradeoffs associated with it, because initialization is a load bearing part of gopls: it breaks features like "Rename symbol" and "Find all references", and it occasionally causes the wrong imports to be pulled in. However, buggy as it is, this hack in essence describes what we think the desirable IWL behavior would be:
* The initial snapshot at gopls startup does not await IWL
* On startup, an async IWL routine runs `go list` and lazy loads package data into the currently "active" snapshot
* While this async routine has not completed, language server RPCs which rely on all workspace packages being loaded (such as "Find all references") are not available
### What did you expect to see?
This issue is a feature request.
* Does the newly proposed workspace initialization approach align with the gopls roadmap?
* Is there a better approach (in the interim) to addressing the slow/blocking behavior of IWLs than the no-op/hack solution linked above?
We would appreciate any tips/guidance in writing an upstream-able patch, or a better hack.
### Editor and settings
_No response_
### Logs
_No response_ | gopls,Tools,gopls/metadata | low | Critical |
2,534,642,810 | pytorch | How does fake tensor work with tensor subclasses in torch.compile? | ### 🐛 Describe the bug
I'm working on an example for quantized tensor subclass + DTensor (tensor parallel) + compile: https://github.com/pytorch/ao/pull/785
the test works in eager mode, but currently fails due to a shape mismatch under compile.
input shape: (128, 1024), linear weight shape: (512, 1024) (out * in)
Errors in torch.mm op with fake tensor:
```
[rank2]: result = fn(*args, is_out=(out is not None), **kwargs) # type: ignore[arg-type] 12:53:17 [554/1896]
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_decomp/decompositions.py", line 4333, in matmul
[rank2]: return torch.mm(tensor1, tensor2)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
[rank2]: return disable_fn(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank2]: return fn(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 340, in __torch_dispatch__
[rank2]: return DTensor._op_dispatcher.dispatch(
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 215, in dispatch
[rank2]: local_results = op_call(*local_tensor_args, **op_info.local_kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
[rank2]: return self._op(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torchao-0.6.0+gitbd264f91-py3.10-linux-x86_64.egg/torchao/utils.py", line 372, in _dispatch__torch_function__
[rank2]: return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torchao-0.6.0+gitbd264f91-py3.10-linux-x86_64.egg/torchao/utils.py", line 355, in wrapper
[rank2]: return func(f, types, args, kwargs)
[rank2]: File "/data/users/jerryzh/ao/tutorials/developer_api_guide/tensor_parallel.py", line 86, in _
[rank2]: return aten.mm(input_tensor, weight_tensor)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_ops.py", line 1116, in __call__
[rank2]: return self._op(*args, **(kwargs or {}))
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/utils/_stats.py", line 21, in wrapper
[rank2]: return fn(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
[rank2]: return self.dispatch(func, types, args, kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
[rank2]: return self._cached_dispatch_impl(func, types, args, kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1339, in _cached_dispatch_impl
[rank2]: output = self._dispatch_impl(func, types, args, kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2039, in _dispatch_impl
[rank2]: r = func(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
[rank2]: return self._op(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
[rank2]: result = fn(*args, **kwargs)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/_meta_registrations.py", line 2100, in meta_mm
[rank2]: torch._check(
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/__init__.py", line 1565, in _check
[rank2]: _check_with(RuntimeError, cond, message)
[rank2]: File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/__init__.py", line 1547, in _check_with
[rank2]: raise error_type(message_evaluated)
[rank2]: torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in function linear>(*(DTensor(local_tensor=FakeTensor(..., device='cuda:0', size=(128, 1024)), device_mesh=DeviceMesh('cuda', [0, 1,
2, 3]), placements=(Replicate(),)), DTensor(local_tensor=MyDTypeTensorTP(data=FakeTensor(..., device='cuda:0', size=(512, 1024)), shape=torch.Size([512, 1024]), device=cuda:0, dtype=torch.float32, requires_grad=Fa
lse), device_mesh=DeviceMesh('cuda', [0, 1, 2, 3]), placements=(Shard(dim=0),)), None), **{}):
[rank2]: a and b must have same reduction dim, but got [128, 1024] X [512, 1024].
```
transpose implementation looks like the following:
```
@implements(aten.t.default)
def _(func, types, args, kwargs):
    tensor = args[0]
    print("before transpose, ", tensor.shape)
    shape = tensor.shape[::-1]
    new = tensor.__class__(tensor.layout_tensor.t(), shape, tensor.dtype)
    print("after transpose:", new.shape)
    return return_and_correct_aliasing(func, args, kwargs, new)
```
It seems that the fake tensor did not pick up the changes to the shape in this case.
Repro:
* checkout https://github.com/pytorch/ao/pull/785
* build torchao (python setup.py install/develop)
* run: `with-proxy torchrun --standalone --nnodes=1 --nproc-per-node=4 tutorials/developer_api_guide/tensor_parallel.py`
### Versions
main
cc @ezyang @albanD @chauhang @penguinwu @eellison @zou3519 @bdhirsh | triaged,tensor subclass,oncall: pt2,module: fakeTensor,module: pt2-dispatcher | low | Critical |
2,534,642,810 | godot | GPUParticles3D inherit velocity is broken | ### Tested versions
4.3
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 960 (NVIDIA; 32.0.15.6081) - Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz (4 Threads)
### Issue description
inherit_velocity_ratio doesn't work as expected. Particles seem to emit ahead of where they're supposed to and this gap increases the higher the velocity.
The particle effect is placed at the back of the rocket, yet at this velocity the particles are being emitted near the front!

### Steps to reproduce
Set inherit_velocity_ratio to greater than zero.
### Minimal reproduction project (MRP)
[particles_velocity_ratio.zip](https://github.com/user-attachments/files/17049711/particles_velocity_ratio.zip)

| bug,topic:3d,topic:particles | medium | Critical |
2,534,683,233 | yt-dlp | Support for multiple videos in the same LeFigaro article | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
France
### Example URLs
https://www.lefigaro.fr/international/guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917
### Provide a description that is worded well enough to be understood
The page contains two videos: one at the top (30s) and one in the middle (54s).
The videos use jwplatform, which is already supported.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.lefigaro.fr/international/guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917']
[debug] Portable config "/g/Share/yt-dlp.conf": ['-o', '%(uploader)s - %(title)s_%(id)s.%(ext)s', '--no-mtime', '--no-check-certificate', '-f', '(bestvideo[ext=mp4][height<=1080][vcodec!^=av01][vcodec!*=vp09]+bestaudio[ext=m4a])/(bestvideo[ext=webm][height<=1080][vcodec^=vp9][vcodeci!*=vp09][vcodec!*=vp9.2]+bestaudio[ext=webm])/(bestvideo+bestaudio)/1080p/1080p60__source_/best/720p/720p60/480p/360p/240p/144p/0', '--write-description', '--write-thumbnail', '--write-sub', '--write-auto-sub', '--sub-lang', 'en,fr,-live_chat', '--sub-format', 'ass/srt/best', '--write-annotations', '--write-info-json', '--fixup', 'never', '--print-to-file', 'filename', 'yt-dlp.log']
[debug] User config "/home/sebbu/.config/yt-dlp/config": ['-o', '%(uploader)s - %(title)s_%(id)s.%(ext)s', '--no-mtime', '--no-check-certificate', '-f', 'bestvideo[ext=mp4][height<=1080][vcodec!^=av01]+bestaudio[ext=m4a]/bestvideo[ext=webm][height<=1080][vcodec^=vp9][vcodec!*=vp9.2]+bestaudio[ext=webm]/bestvideo+bestaudio/1080p/best/720p/480p/360p/240p/144p/0', '--write-description', '--write-thumbnail', '--write-sub', '--write-auto-sub', '--sub-lang', 'en,fr,-live_chat', '--sub-format', 'ass/srt/best', '--write-annotations', '--write-info-json', '--add-metadata', '--fixup', 'never']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (zip)
[debug] Python 3.12.6 (CPython x86_64 64bit) - MSYS_NT-10.0-19045-3.5.4-0bc1222b.x86_64-x86_64-64bit-WindowsPE (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe present
[debug] Optional libraries: certifi-2024.07.04, sqlite3-3.46.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Plugin directories: ['/home/sebbu/.config/yt-dlp/plugins/yt-dlp-ChromeCookieUnlock-2024.04.29/yt_dlp_plugins']
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.lefigaro.fr/international/guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917
[generic] guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917: Extracting information
[debug] Looking for embeds
[debug] Identified a JSON LD
[LeFigaroVideoEmbed] Extracting URL: https://video.lefigaro.fr/embed/figaro/video/liban-en-video-lexplosion-simultanee-de-plusieurs-bipeurs-du-hezbollah/#__youtubedl_smuggle=%7B%22force_videoid%22%3A+%22guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917%22%2C+%22to_generic%22%3A+true%2C+%22referer%22%3A+%22https%3A%2F%2Fwww.lefigaro.fr%2Finternational%2Fguerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917%22%7D
[LeFigaroVideoEmbed] liban-en-video-lexplosion-simultanee-de-plusieurs-bipeurs-du-hezbollah: Downloading webpage
[JWPlatform] Extracting URL: jwplatform:AoIFHIcB
[JWPlatform] AoIFHIcB: Downloading JSON metadata
[JWPlatform] AoIFHIcB: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] AoIFHIcB: Downloading 1 format(s): 1080p
[info] Writing '%(filename)s' to: yt-dlp.log
[info] Writing video description to: NA - Liban : en vidéo, l'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.description
[info] There are no subtitles for the requested languages
Deleting existing file NA - Liban : en vidéo, l'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.jpg
[info] Downloading video thumbnail 0 ...
[info] Writing video thumbnail 0 to: NA - Liban : en vidéo, l'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.jpg
[info] Writing video metadata as JSON to: NA - Liban : en vidéo, l'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.info.json
WARNING: There are no annotations to write.
[debug] Invoking http downloader on "https://cdn.jwplayer.com/videos/AoIFHIcB-rsUvELyf.mp4"
[download] NA - Liban : en vidéo, l'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.mp4 has already been downloaded
[download] 100% of 14.92MiB
[Metadata] Adding metadata to "NA - Liban : en vidéo, l'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:NA - Liban : en vidéo, l'"'"'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Liban : en vidéo, l'"'"'explosion simultanée de plusieurs bipeurs du Hezbollah' -metadata date=20240917 -metadata 'description=Liban : en vidéo, l'"'"'explosion simultanée de plusieurs bipeurs du Hezbollah' -metadata 'synopsis=Liban : en vidéo, l'"'"'explosion simultanée de plusieurs bipeurs du Hezbollah' -metadata purl=https://www.lefigaro.fr/international/guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917 -metadata comment=https://www.lefigaro.fr/international/guerre-israel-hamas-des-centaines-de-membres-du-hezbollah-blesses-dans-l-explosion-de-leurs-bipeurs-au-liban-20240917 -movflags +faststart 'file:NA - Liban : en vidéo, l'"'"'explosion simultanée de plusieurs bipeurs du Hezbollah_AoIFHIcB.temp.mp4'
```
| site-enhancement,triage | low | Critical |
2,534,693,065 | node | http2 server ECONNRESET on http1.1 request | ### Version
v22.9.0
### Platform
```text
Microsoft Windows NT 10.0.19045.0 x64
```
### Subsystem
node:http2
### What steps will reproduce the bug?
HTTP/1.1 requests made from [Hoppscotch v2024.8.2](https://hoppscotch.com/) to a Node.js HTTP/2 server without an HTTP/1 fallback cause `Uncaught Error Error: read ECONNRESET`
```js
/**@import { SecureServerOptions } from "node:http2" */
import { createSecureServer } from "node:http2"
import { readFile } from "node:fs/promises";
import { env } from "node:process";
const [
    key,
    cert
] = await Promise.all([
    readFile(env.PEM_KEY, "utf-8"),
    readFile(env.PEM_CRT, "utf-8")
]);
/**@type { SecureServerOptions } */
const options = { key, cert, allowHTTP1: false };
const server = createSecureServer(options);
server.listen(8080);
```
### How often does it reproduce? Is there a required condition?
unconditional error
### What is the expected behavior? Why is that the expected behavior?
This error is related to the server, so I expect to be able to catch it on the `server` instance, or not to get the error at all (handled implicitly).
### What do you see instead?
I was able to handle this error only by using
```js
process.on("uncaughtException", /*...*/)
```
### Additional information
If `allowHTTP1` is enabled, no error is thrown. With it disabled, the error is:
```js
Uncaught Error Error: read ECONNRESET
at onStreamRead (internal/stream_base_commons:216:20)
at callbackTrampoline (internal/async_hooks:130:17)
--- TickObject ---
at init (internal/inspector_async_hook:25:19)
at emitInitNative (internal/async_hooks:202:43)
at emitInitScript (internal/async_hooks:505:3)
at nextTick (internal/process/task_queues:143:5)
at onDestroy (internal/streams/destroy:116:15)
at Socket._destroy (net:839:5)
at _destroy (internal/streams/destroy:122:10)
at destroy (internal/streams/destroy:84:5)
at Writable.destroy (internal/streams/writable:1122:11)
at onStreamRead (internal/stream_base_commons:216:12)
at callbackTrampoline (internal/async_hooks:130:17)
--- TLSWRAP ---
at init (internal/inspector_async_hook:25:19)
at emitInitNative (internal/async_hooks:202:43)
at TLSSocket._wrapHandle (_tls_wrap:699:24)
at TLSSocket (_tls_wrap:570:18)
at tlsConnectionListener (_tls_wrap:1232:18)
at emit (events:519:28)
at onconnection (net:2259:8)
at callbackTrampoline (internal/async_hooks:130:17)
--- TCPSERVERWRAP ---
at init (internal/inspector_async_hook:25:19)
at emitInitNative (internal/async_hooks:202:43)
at createServerHandle (net:1827:14)
at setupListenHandle (net:1870:14)
at listenInCluster (net:1965:12)
at Server.listen (net:2067:7)
at <anonymous> (c:\Users\VM\Desktop\protov2\src\tmp.js:23:8)
--- await ---
at run (internal/modules/esm/module_job:262:25)
--- await ---
at onImport.tracePromise.__proto__ (internal/modules/esm/loader:483:42)
at processTicksAndRejections (internal/process/task_queues:105:5)
--- await ---
at tracePromise (diagnostics_channel:337:14)
at import (internal/modules/esm/loader:481:21)
at <anonymous> (internal/modules/run_main:176:35)
at asyncRunEntryPointWithESMLoader (internal/modules/run_main:117:11)
at runEntryPointWithESMLoader (internal/modules/run_main:139:19)
at executeUserEntryPoint (internal/modules/run_main:173:5)
at <anonymous> (internal/main/run_main_module:30:49)
``` | http,http2 | low | Critical |
2,534,698,246 | pytorch | aot_export is not currently supported with traceable tensor subclass- error comes when distributed tensor is an input to aot_export_joint_simple | ### 🐛 Describe the bug
When I try multi-GPU in torch with `backend=custom_backend`, it leads to the error
`aot_export is not currently supported with traceable tensor subclass`
The following is the repro code for this:
```
import os
import sys
import time
import torch
import torch.nn as nn
from torch.distributed._tensor import Shard
from torch.distributed._tensor.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
ColwiseParallel,
RowwiseParallel,
parallelize_module,
)
import unittest
import torch
from torch._dynamo.utils import detect_fake_mode
from torch._functorch.aot_autograd import aot_export_joint_simple
from typing import Sequence, Any
class ToyModel(nn.Module):
    """MLP based model"""
    def __init__(self):
        super(ToyModel, self).__init__()
        self.in_proj = nn.Linear(10, 3200)
        self.relu = nn.ReLU()
        self.out_proj = nn.Linear(3200, 1600)
        self.in_proj2 = nn.Linear(1600, 500)
        self.out_proj2 = nn.Linear(500, 100)

    def forward(self, x):
        x = self.out_proj(self.relu(self.in_proj(x)))
        x = self.relu(x)
        x = self.out_proj2(self.relu(self.in_proj2(x)))
        return x
# create a device mesh based on the given world_size.
_world_size = int(os.environ["WORLD_SIZE"])
device_mesh = init_device_mesh(device_type="cuda", mesh_shape=(_world_size,))
_rank = device_mesh.get_rank()
print(f"Starting PyTorch TP example on rank {_rank}.")
assert (
    _world_size % 2 == 0
), f"TP examples require even number of GPUs, but got {_world_size} gpus"
# # create model and move it to GPU - init"cuda"_mesh has already mapped GPU ids.
tp_model = ToyModel().to("cuda")
# Custom parallelization plan for the model
tp_model = parallelize_module(
    module=tp_model,
    device_mesh=device_mesh,
    parallelize_plan={
        "in_proj": ColwiseParallel(input_layouts=Shard(0)),
        "out_proj": RowwiseParallel(output_layouts=Shard(0)),
        "in_proj2": ColwiseParallel(input_layouts=Shard(0)),
        "out_proj2": RowwiseParallel(output_layouts=Shard(0)),
    },
)
torch.manual_seed(0)
inp = torch.rand(20, 10, device="cuda")
python_result = tp_model(inp)
def custom_backend(gm: torch.fx.GraphModule, sample_inputs: Sequence[Any], **kwargs: Any):
    fake_mode = detect_fake_mode(sample_inputs)
    with unittest.mock.patch.object(fake_mode, "allow_non_fake_inputs", True), fake_mode:
        torch_inputs = [input for input in sample_inputs if isinstance(input, torch.Tensor)]
        gm = aot_export_joint_simple(
            gm,
            torch_inputs,
            trace_joint=False,
        )
    return gm
tp_model = torch.compile(
    tp_model,
    backend=custom_backend,
    dynamic=False,
)
custom_backend_result = tp_model(inp)
```
The issue comes up in the custom backend. It receives a `<class torch.distributed._tensor.api.DTensor>`, which is a traceable subclass, so `def is_traceable_wrapper_subclass(t: object) -> TypeGuard[TensorWithFlatten]` returns `True`.
Command to run: `torchrun --nproc_per_node=2 distributed_example.py`
### Versions
I see this error in torch 2.5 nightly versions, e.g. 2.5.0.dev20240905+cu124, but not in torch 2.4.
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @zou3519 @bdhirsh @yf225 | oncall: distributed,triaged,oncall: pt2,export-triaged,oncall: export,module: pt2-dispatcher | medium | Critical |
2,534,744,537 | vscode | Terminal message displays Gibberish upon creation by window.createTerminal() API randomly | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1
- OS Version: Mac OS 14.6.1
Steps to Reproduce:
1. create a new extension, trigger a command to execute the following code:
```
const terminalOptions = {
  message: '<this message is the one that is going to be shown weird>',
};
const terminal = vscode.window.createTerminal(terminalOptions);
terminal.show();
```
Randomly (in about 20% of cases), this is how it looks:
<img width="1269" alt="Screenshot 2024-09-18 at 4 25 35 PM" src="https://github.com/user-attachments/assets/f00e43ec-dccd-4d47-8b5d-ee9b135c0ba1">
If I go to another tab and return to the terminal, it looks fine:
<img width="1267" alt="Screenshot 2024-09-18 at 4 26 09 PM" src="https://github.com/user-attachments/assets/7bf56856-93a8-4ab4-9aff-033f874f7f9a">
Once it's shown correctly, it'll always be fine. I guess the first render of the terminal is the issue here. | bug,terminal-rendering | low | Critical |
2,534,771,136 | godot | Rendering performance regression after upgrading from Metal 3.1 to 3.2 (macOS Sequoia 15.0) | ### Tested versions
- Reproducible in :
Godot Engine v4.4.dev.custom_build.694d3c293
**Metal 3.2** - Forward+ - Using Device #0: Apple - Apple M2 Pro (Apple8)
- Not Reproducible in :
1. Godot Engine v4.4.dev.custom_build.694d3c293 ,
**Metal 3.1**- Forward+ - Using Device #0: Apple - Apple M2 Pro (Apple8)
2. Godot Engine v4.3.stable.official.77dcf97d8
**Vulkan 1.2.283** - Forward+ - Using Device #0: Apple - Apple M2 Pro
### System information
macOS Sequoia 15.0 , Apple M2 Pro
### Issue description
After upgrading macOS to Sequoia 15.0, all running projects slow down with Metal 3.2: 60 fps -> 15 fps.
1. On the new system with Metal 3.2, running projects' frame rate drops cyclically:
60 fps -> 15 fps … about 5 seconds at 60 fps, then about 5 seconds at 15 fps, repeating.
Editor: Godot Engine v4.4.dev.custom_build.694d3c293
2. On the old system with Metal 3.1, the same projects run at a steady 60 fps the whole time.
Editor: Godot Engine v4.4.dev.custom_build.694d3c293
3. In the editor Godot Engine v4.3.stable.official.77dcf97d8
with Vulkan 1.2.283, all projects work fine:
- on the upgraded macOS Sequoia 15
- and on the old macOS 14
### Steps to reproduce
1. Upgrade macOS from the old macOS 14 to Sequoia 15.0
(Metal 3.1 changes to Metal 3.2 in Godot 4.4; 4.3 uses Vulkan 1.2).
2. Open any old project with a small scene, for example the Jolt Physics example.
In Godot 4.4 it slows down cyclically for about 5 seconds at a time.
### Minimal reproduction project (MRP)
Open a small project like the Jolt Physics example, or any of your old projects.
2,534,784,842 | godot | Default values in PackedArrays are null instead of zero, fail to deserialize when changed to regular Array | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22621 - Vulkan (Forward+) - dedicated AMD Radeon RX 5700 XT (Advanced Micro Devices, Inc.; 31.0.24031.5001) - AMD Ryzen 9 3950X 16-Core Processor (32 Threads)
### Issue description
When adding elements to a `PackedArray` (Int, Float etc.) without modifying it the editor shows zero as expected, but the actual value saved in the .tscn is `null`. If changed to a non-zero number and back to zero, it is then written as `0` instead of `null`.
This works "correctly" for `PackedArray`s (nulls end up as zeros), however if you then change the type of the variable to a regular array i.e. `PackedInt32Array` into `Array[int]`, the values will be unchanged in the editor as expected but the .tscn still has those `null` values.
Running a scene with an `Array` with `null`s in the .tscn results in an error `assign: Unable to convert array index 0 from "Nil" to "int"` and an empty `Array` is read instead, even if there are non-null elements.
It seems that having `null` elements are simply interpreted as zeros/default values in `PackedArray`s but cause deserialization to fail when reading them as `Array`s. This causes mysterious indexing errors on ostensibly populated `Array`s that were switched over from `PackedArray`s.
### Steps to reproduce
1. Write a script with a single exported `PackedArray` (Int, Float, etc.) that is empty by default or if using the MRP skip to step 9
2. Assign this script to a node and add some items without modifying them
3. Save the scene and open it in a text editor to observe that the `PackedArray` has a bunch of `null` values in it
4. Change one of the values to non-zero in the editor (don't forget to click out of the box)
5. Change the value back to zero
6. Save the scene and open it in a text editor again to observe that the `PackedArray` now has `0` instead of `null` for the one value you changed
7. In the script, change the type from a `PackedArray` to an `Array` of the same type
8. In the editor this will seemingly keep the same values
9. Launch the scene and observe the error `assign: Unable to convert array index 0 from "Nil" to "int"` (or equivalent type you used)
10. You can also write some code to index the array/check its length to see that it's empty
### Minimal reproduction project (MRP)
[repro_97165.zip](https://github.com/user-attachments/files/17077787/repro_97165.zip)
| bug,topic:core,confirmed | low | Critical |
2,534,880,085 | kubernetes | Kubelet plugin registration reliability | I am trying to assess the reliability of the kubelet plugins registration.
I am trying it out with the Device Plugin, but the same likely applies to DRA. This issue describes some findings and concerns; I didn't perform a full review.
I started the e2e to try out things: https://github.com/kubernetes/kubernetes/pull/127304
Plugins documentation: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pluginmanager/pluginwatcher/README.md
So the flow is:
- Kubelet looks up sockets in `/var/lib/kubelet/plugins_registry/`
- Kubelet tries to get plugin details by calling `GetInfo`
- Once the information is received, kubelet tries to connect to the plugin
- Kubelet proceeds with getting the list of pods to run
This flow is much better in terms of race conditions compared to the Device Plugin registration, where the Device Plugin needs to detect that kubelet was restarted and reconnect to it.
Two reliability issues I noticed immediately in tests:
1. When `GetInfo` fails, there seem to be no retries for a while to get the plugin details. The timeout for `GetInfo` is `1sec`. One second is a big enough timeout for an endpoint that simply returns a structure, but flakes are easy to imagine here.
2. When `GetDevicePluginOptions` fails, there is no retry to create a plugin for at least 30 seconds. There may be many reasons for a flake when the API didn't return successfully the first time, so a retry is essential here.
I need to look deeper into other reliability things:
1. How fast will kubelet detect that the plugin was restarted and attempt to reconnect? What is the mechanism for it?
2. Will there be retries on `ListAndWatch` failures?
3. Should the interface implementing `GetInfo` check the health of the plugin? Should the best-practice implementation recreate the socket?
Similar questions can be explored for the DRA.
I think the best way to explore this is to continue working on the e2e tests: https://github.com/kubernetes/kubernetes/pull/127304, demonstrating the behavior and a best practice for registering the device plugin.
If we can confirm that the kubelet plugins system is almost as reliable as today's most-used device plugin registration mechanism, while eliminating the race condition on kubelet restart (https://github.com/kubernetes/kubernetes/issues/120146#issuecomment-2302666130), we should consider deprecating the `RegisterDevicePluginServer` API.
/sig node
/kind bug
CC: @ffromani @johnbelamaric | kind/bug,sig/node,priority/important-longterm,triage/accepted | low | Critical |
2,535,027,744 | material-ui | [material-ui][Select] alternative option for select multiple behavior, that item click replaces existing selection instead of adding it | ### Summary
Currently, an item click in `<Select multiple />` is handled as toggling:
* if it is already selected, remove it from the selected value array.
* if it is not selected yet, append it into the selected value array.
On the other hand, HTML native `<select multiple />` behaves like this:
* Bare click replaces already selected value, results in setting the selected value array to the singleton of the latest selected element.
* The selected item is toggled, rather than replaced, when selected by Ctrl+Click.
* There is also range selection Shift+Click (but it behaves somewhat strange when some of the items are already selected)
I want a Material multi-select component, except that it should behave like the native multi-select, since my component expects multi-select sometimes but not very frequently. Range selection (Shift+Click) is not needed. Does current MUI provide such an option? If it does not, then I would like to suggest adding it.
### Examples
I skimmed the code, and apparently this is the section in charge of the current behavior.
https://github.com/sai6855/material-ui/blob/master/packages/mui-material/src/Select/SelectInput.js#L269
If we should add the implementation, probably adding a new boolean `nativeLike` prop and changing the logic as below would help..?
I haven't tested whether this works. I will make a PR if it seems like a good idea.
```js
if (multiple) {
  if (nativeLike && !event.ctrlKey) {
    newValue = [child.props.value]; // NEW: replacing instead of toggling selection
  } else {
    newValue = Array.isArray(value) ? value.slice() : [];
    const itemIndex = value.indexOf(child.props.value);
    if (itemIndex === -1) {
      newValue.push(child.props.value);
    } else {
      newValue.splice(itemIndex, 1);
    }
  }
} else {
  newValue = child.props.value;
}
```
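For clarity, the proposed selection logic can also be sketched as a standalone reducer (the names here are hypothetical, and this has not been tested against MUI itself):

```javascript
// Hypothetical reducer mirroring the proposed SelectInput branch.
// With `nativeLike`, a bare click replaces the whole selection (like native
// <select multiple>), while Ctrl+Click keeps the current toggle behavior.
function nextValue(value, clickedValue, { multiple, nativeLike, ctrlKey }) {
  if (!multiple) return clickedValue;
  if (nativeLike && !ctrlKey) {
    return [clickedValue]; // replace instead of toggle
  }
  const newValue = Array.isArray(value) ? value.slice() : [];
  const itemIndex = newValue.indexOf(clickedValue);
  if (itemIndex === -1) {
    newValue.push(clickedValue);
  } else {
    newValue.splice(itemIndex, 1);
  }
  return newValue;
}

console.log(nextValue(['a', 'b'], 'c', { multiple: true, nativeLike: true, ctrlKey: false })); // [ 'c' ]
console.log(nextValue(['a', 'b'], 'b', { multiple: true, nativeLike: true, ctrlKey: true })); // [ 'a' ]
```

Without `nativeLike` (or with Ctrl held), the reducer falls through to the existing toggle behavior, so the prop would be purely additive.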
### Motivation
It would be nice to have an another option for multiple select which behave more similarly to HTML native multiple select.
**Search keywords**: select multiple behavior | new feature,component: select,package: material-ui | low | Minor |
2,535,099,755 | vscode | Adopt new call stack widget for exceptions | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
[The Go Playground](https://go.dev/play/) has a feature where if an uncaught exception occurs, it will parse the stack trace from the console and highlight the offending lines inside the editor:

Having this in VS Code would make it much easier to narrow down errors at a glance, and would make most exception-related bugs easily fixable as a result. | feature-request,debug | low | Critical |
2,535,110,458 | pytorch | The results of baddbmm on CPU seem to have issues under certain conditions | ### 🐛 Describe the bug
```python
import torch
b = 64
m = 1
n = 5
k = 128
tensor1 = torch.ones(b, m, k, dtype=torch.float16)
tensor2 = torch.ones(n, b, k, dtype=torch.float16)
tensor3 = torch.ones(b, m, n, dtype=torch.float16)
# tensor2 = tensor2.transpose(0, 1).transpose(1, 2)
tensor2 = tensor2.permute(1, 2, 0)
print(f'{tensor1.shape} {tensor2.shape} {tensor3.shape}')
matmul_result_0 = torch.baddbmm(
    tensor3,
    tensor1,
    tensor2,
    beta=0.0,
    alpha=0.1
)
matmul_result_1 = torch.baddbmm(
    tensor3,
    tensor1,
    tensor2,
    beta=0.0,
    alpha=0.2
)
matmul_result_2 = 0.3 * tensor1 @ tensor2 + tensor3 * 0
matmul_result_3 = 0.4 * tensor1 @ tensor2 + tensor3 * 0
if torch.equal(matmul_result_3, matmul_result_2):
    print(f'@...... equal {matmul_result_2[0, 0, :10]} {matmul_result_3[0, 0, :10]}')
if torch.equal(matmul_result_1, matmul_result_0):
    print(f"equal.....{matmul_result_0[0, 0, :10]} {matmul_result_1[0, 0, :10]}")
```
I don't understand why the results of `matmul_result_0` and `matmul_result_1` are equal even though `alpha` differs, while the equivalent computations using `@` produce different results as expected. On another machine with PyTorch version `2.3.1`, the `baddbmm` results are correct as well (i.e., not equal).
### Versions
```bash
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.0 | packaged by conda-forge | (default, Nov 26 2020, 07:47:13) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-aarch64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 4
Vendor ID: 0x48
Model: 0
Stepping: 0x1
CPU max MHz: 2600.0000
CPU min MHz: 200.0000
BogoMIPS: 200.00
L1d cache: 6 MiB
L1i cache: 6 MiB
L2 cache: 48 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @snadampal @milpuz01 | high priority,triaged,module: regression,module: correctness (silent),module: arm | low | Critical |
2,535,126,496 | ollama | Memory Allocation on VRAM when model size is bigger than the size of VRAM | ### What is the issue?
When num_gpu=22, the model is loaded into VRAM and RAM correctly. But when I change num_gpu=24, just one small step (47 layers in total), it is expected to use shared VRAM to hold about 1 GB of data. However, it doesn't work: it loads 12 GB into shared VRAM and then shuts down due to being out of memory. Is this a bug, or is it intended behavior because CUDA does not support this?
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 556.13 Driver Version: 556.13 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 ... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 42C P8 15W / 140W | 64MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 14768 C+G D:\Program\QQNT\QQ.exe N/A |
+-----------------------------------------------------------------------------------------+
server.log is submitted



[server.log](https://github.com/user-attachments/files/17052524/server.log)
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10 | bug,windows,memory | low | Critical |
2,535,131,780 | flutter | [Flutter GPU] Shaders across different shader libraries can name clash. | Under the hood, we currently register Flutter GPU shaders in a single Impeller shader library using their plaintext names as keys. A simple fix for this would be to generate unique prefixes during shader library instantiation. | engine,P3,team-engine,triaged-engine,flutter-gpu | low | Minor |
2,535,135,277 | rust | Tracking Issue for generic `Atomic` | Feature gate: `#![feature(generic_atomic)]`
This is a tracking issue for replacing the distinct `Atomic*` types with a generic `Atomic<T>` type. This allows using `Atomic` with FFI type aliases and helps clean up some API surface. Only types with existing `AtomicT` are usable in `Atomic<T>`.
### Public API
```rust
// core::sync::atomic
pub struct Atomic<T: AtomicPrimitive>(/* private fields*/);
pub type AtomicI32 = Atomic<i32>; // etc
```
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] ACP: https://github.com/rust-lang/libs-team/issues/443
- [ ] Implement `Atomic<T>` as an alias to `AtomicT`: https://github.com/rust-lang/rust/pull/130543
- [ ] Flip alias so `AtomicT` is an alias to `Atomic<T>`
- [ ] Move generic functionality from `AtomicT` to `Atomic<_>`
- [ ] Pseudo-prerequisite: Stabilize 128-bit atomics: https://github.com/rust-lang/rust/issues/99069
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- `Atomic<T>` is soft-blocked on 128-bit atomics, because since trait implementations cannot be unstable, gating `Atomic<i128>` separately from `Atomic<i32>` isn't possible.
- If necessary, `AtomicI128` could instead name `Atomic<Wrapper<i128>>` for some unstable name `Wrapper` until 128-bit atomics are stable, to prevent `Atomic<i128>` from being usable from stable earlier than intended.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,535,141,631 | node | Support `net.BlockList` in `http.Agent` options | ### What is the problem this feature will solve?
Enable users to allow or block requests when using `http.request`, `fetch`, etc.
### What is the feature you are proposing to solve the problem?
Node.js already supports `net.BlockList`, it'd be awesome if you could simply pass in an instance of `net.BlockList` when creating a custom `http.Agent` and then have it automatically enforce the IP checks for you.
### What alternatives have you considered?
I believe the only way to do this at the moment is a bit boilerplate-y, which would be using a custom `lookup` function that calls `dns.lookup(hostname)` manually, then calls `blocklist.check(address)` manually, and then if it flags, throw an error, else return the address.
Although that still isn't a complete solution because the `lookup` function isn't called for hostnames that are already IP addresses, so even more code to do the check fully :( | net,feature request | low | Critical |
2,535,144,070 | godot | [Physics Interpolation] Interpolated triggered from outside physics process when using multi-threaded rendering. | ### Tested versions
- Reproducible in: 3.6.stable, 3.5.3.stable.
Cannot reproduce in earlier versions, as in-built 3D physics interpolation was introduced in 3.5.
### System information
Linux Mint 22
### Issue description
Specifically when using the multi-threaded rendering model while physics interpolation is enabled, every so often this warning is generated while rigid bodies are in motion:
```
W 0:00:15.011 instance_set_transform: [Physics interpolation] Interpolated triggered from outside physics process: "/root/Scene/@Cube@2/CollisionShape/MeshInstance" (possibly benign).
<C++ Source> servers/visual/visual_server_scene.cpp:892 @ instance_set_transform()
```
The rigid body that generates the warning is random, and I'm pretty sure the warnings are actually rate-limited in 3.6, since in 3.5, the warnings are essentially spammed out.
I would like to keep using the multi-threaded rendering model for my project, as I need to instance meshes in a separate thread, but this warning is somewhat annoying. Is there a nice way to solve this issue, or is my only option to hide the warnings?
### Steps to reproduce
1. Open the reproduction project.
2. Run the scene, and let physics do its thing for about 30 seconds. Since by default Godot uses single-threaded rendering, there should be no warnings generated.
3. Stop running the scene, enter the project settings, and under Rendering > Threads > Thread Model, change to "Multi-Threaded".
4. Run the scene again, and wait about 30 seconds again. This time, the warning should appear at regular-ish intervals.
### Minimal reproduction project (MRP)
[InterpolationWarning.zip](https://github.com/user-attachments/files/17052605/InterpolationWarning.zip)
| bug,documentation,topic:physics | low | Major |
2,535,149,334 | ollama | qwen2.5 context length | ### What is the issue?
<img width="674" alt="image" src="https://github.com/user-attachments/assets/03949cc7-07fd-45c4-a09a-4a971e0a3586">
According to the model card, the context length should be **128k**?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.10 | bug | low | Minor |
2,535,149,806 | vscode | Git - vscode does not respect GIT_AUTHOR_NAME/EMAIL in pre-commit hook failures, setting git.requireGitUserConfig=false has no effect | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: `Version: 1.92.1 (Universal)`
- OS Version: Mac OS Sonoma 14.6.1
Steps to Reproduce:
1. add a pre-commit hook that exits 1 (for whatever reason)
2. do not set git config for user.name or user.email
3. set configs using [GIT_AUTHOR_NAME / GIT_AUTHOR_EMAIL](https://git-scm.com/book/en/v2/Git-Internals-Environment-Variables)
4. Try to commit using the source control pane
5. See error about user.name/email despite being set
<img width="555" alt="image" src="https://github.com/user-attachments/assets/ee99c44b-972e-4e74-a4a7-184601c939b3">
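For contrast, the same sequence from a plain terminal gets past the identity check and fails only on the hook — a rough sketch (temporary paths, illustrative names):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
# A pre-commit hook that always fails, as in step 1:
printf '#!/bin/sh\necho "hook ran"\nexit 1\n' > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
echo hi > f.txt
git add f.txt
# With identity provided only via environment variables, plain git
# reaches (and fails in) the hook -- no user.name/user.email complaint:
msg=$(GIT_AUTHOR_NAME=T GIT_AUTHOR_EMAIL=t@example.com \
      GIT_COMMITTER_NAME=T GIT_COMMITTER_EMAIL=t@example.com \
      git commit -m test 2>&1)
status=$?
echo "$msg"
echo "exit=$status"
```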
Notes:
- This only fails if a pre-commit hook fails (which is valid!)
- If the configs are set directly using git config, you get the output of the pre-commit hook instead of the screenshot above
- `git.requireGitUserConfig` is set to false
- The issue is in these lines of code https://github.com/microsoft/vscode/blob/07e6c39831b3ac1cabc1c228502a48788c4244c8/extensions/git/src/git.ts#L1743-L1754 which make a false assumption about how these are set
- Similar yet separate issues:
- https://github.com/microsoft/vscode/issues/128704
- https://github.com/microsoft/vscode/issues/173442 | bug,git | low | Critical |
2,535,185,981 | kubernetes | A DaemonSet pod environment variable did not inject service information | ### What happened?
I have a k8s cluster and created a DaemonSet in it that is associated with three pods. I found that one of the pods did not have service information injected into its environment variables.
services:
```shell
[root@controller-0-2:/k8s]$ kubectl get svc -n admin
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
assem-apiserver ClusterIP fd00::5be3 <none> 2509/TCP 23h
assem-apiserver-cluster ClusterIP fd00::e0b3 <none> 2509/TCP 23h
```
The problematic pod environment variable information is as follows:
```shell
[root@controller-0-2:/k8s]$ kubectl exec -it -n admin assem-cic-kbbb8 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=controller-0-2
ENV=/etc/profile
POD_NAMESPACE=admin
POD_NAME=assem-cic-kbbb8
KUBERNETES_PORT=tcp://[fd00::1]:443
KUBERNETES_PORT_443_TCP=tcp://[fd00::1]:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=fd00::1
KUBERNETES_SERVICE_HOST=fd00::1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
TERM=xterm
```
The normal pod environment variables are as follows:
```shell
[root@controller-0-0:/k8s]$ kubectl exec -it -n admin assem-cic-jbzbh -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=-controller-0-0
ENV=/etc/profile
POD_NAME=op-assem-cic-jbzbh
POD_NAMESPACE=admin
ASSEM_APISERVER_CLUSTER_PORT_2509_TCP=tcp://[fd00::e0b3]:2509
ASSEM_APISERVER_PORT_2509_TCP_PORT=2509
KUBERNETES_SERVICE_HOST=fd00::1
ASSEMAPISERVER_CLUSTER_SERVICE_PORT_HTTPS=2509
ASSEM_APISERVER_CLUSTER_PORT=tcp://[fd00::e0b3]:2509
ASSEM_APISERVER_CLUSTER_PORT_2509_TCP_PROTO=tcp
ASSEM_APISERVER_CLUSTER_PORT_2509_TCP_PORT=2509
ASSEM_APISERVER_SERVICE_PORT_HTTPS=2509
ASSEM_APISERVER_PORT_2509_TCP_PROTO=tcp
ASSEM_APISERVER_PORT_2509_TCP_ADDR=fd00::5be3
ASSEM_APISERVER_CLUSTER_SERVICE_HOST=fd00::e0b3
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
ASSEM_APISERVER_CLUSTER_SERVICE_PORT=2509
ASSEM_APISERVER_PORT=tcp://[fd00::5be3]:2509
ASSEM_APISERVER_CLUSTER_PORT_2509_TCP_ADDR=fd00::e0b3
ASSEM_APISERVER_SERVICE_PORT=2509
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://[fd00::1]:443
ASSEM_APISERVER_PDM_SERVICE_PORT=2509
ASSEM_APISERVER_SERVICE_HOST=fd00::5be3
ASSEM_APISERVER_PORT_2509_TCP=tcp://[fd00::5be3]:2509
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://[fd00::1]:443
KUBERNETES_PORT_443_TCP_ADDR=fd00::1
TERM=xterm
```
`assem-apiserver` service creation time:
```shell
creationTimestamp: "2024-09-18T03:37:28Z"
```
`assem-apiserver-cluster` service creation time:
```shell
creationTimestamp: "2024-09-18T03:37:28Z"
```
`assem-cic-kbbb8 ` pod creation time:
```yaml
[root@controller-0-2:/k8s]$ kubectl get po -n admin assem-cic-kbbb8 -oyaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2024-09-18T03:37:28Z"
...
namespace: admin
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: assem-cic
uid: 0c5601ca-d6ab-4db7-8d8b-e18321e3e741
resourceVersion: "2092"
uid: 4b96702a-c0ff-4a6e-9510-2ec0f467ceee
spec:
...
dnsPolicy: ClusterFirstWithHostNet
enableServiceLinks: true
hostNetwork: true
...
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:37:31Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:38:46Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:38:46Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:37:28Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://9529e9564e71706ada80b6280e16537418e35fe61aa5e013d4fbc90c29f113c5
...
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-09-18T03:37:32Z"
hostIP: 193:116:66::19
initContainerStatuses:
- containerID: containerd://69507fe03ce358b168be50336dc6361b2aa1581ad0bae1bec4d3b3da62d7b90a
...
restartCount: 0
started: false
state:
terminated:
containerID: containerd://69507fe03ce358b168be50336dc6361b2aa1581ad0bae1bec4d3b3da62d7b90a
exitCode: 0
finishedAt: "2024-09-18T03:37:30Z"
reason: Completed
startedAt: "2024-09-18T03:37:30Z"
phase: Running
podIP: 193:116:66::19
podIPs:
- ip: 193:116:66::19
- ip: 192.0.0.40
qosClass: Burstable
startTime: "2024-09-18T03:37:28Z"
```
` assem-cic-jbzbh ` pod creation time:
```yaml
[root@controller-0-2:/k8s]$ kubectl get po -n admin assem-cic-jbzbh -oyaml
apiVersion: v1
kind: Pod
metadata:
...
creationTimestamp: "2024-09-18T03:37:28Z"
generateName: assem-cic-
...
namespace: admin
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: assem-cic
uid: 0c5601ca-d6ab-4db7-8d8b-e18321e3e741
resourceVersion: "2138"
uid: f29089eb-f11e-4994-9b37-d87d50da41f9
spec:
...
dnsPolicy: ClusterFirstWithHostNet
enableServiceLinks: true
hostNetwork: true
...
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:37:31Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:38:50Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:38:50Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2024-09-18T03:37:28Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://146868dac3e14064117e31f4497ae6d7f19d3f0b876b0710945d039ff2baf404
...
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-09-18T03:37:31Z"
hostIP: 193:116:66::2c
initContainerStatuses:
- containerID: containerd://d6829037467552e02fb4688872fdd5d6b19087a8320b2c03db75a9d2703231f0
...
restartCount: 0
started: false
state:
terminated:
containerID: containerd://d6829037467552e02fb4688872fdd5d6b19087a8320b2c03db75a9d2703231f0
exitCode: 0
finishedAt: "2024-09-18T03:37:31Z"
reason: Completed
startedAt: "2024-09-18T03:37:31Z"
phase: Running
podIP: 193:116:66::2c
podIPs:
- ip: 193:116:66::2c
- ip: 192.0.0.45
qosClass: Burstable
startTime: "2024-09-18T03:37:28Z"
```
### What did you expect to happen?
Service information should also be injected into the environment variables of the pod `assem-cic-kbbb8`.
### How can we reproduce it (as minimally and precisely as possible)?
Deploy the Services and the DaemonSet pods.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
1.28.3
```
</details>
### Cloud provider
<details>
none
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/apps,lifecycle/rotten,needs-triage | low | Major |
2,535,220,401 | go | runtime: TestGdbAutotmpTypes failures | ```
#!watchflakes
default <- pkg == "runtime" && test == "TestGdbAutotmpTypes"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8736426478682440353)):
=== RUN TestGdbAutotmpTypes
=== PAUSE TestGdbAutotmpTypes
=== CONT TestGdbAutotmpTypes
runtime-gdb_test.go:78: gdb version 15.0
runtime-gdb_test.go:569: gdb output:
Loading Go Runtime support.
Target 'exec' cannot support this command.
Breakpoint 1 at 0x6f720: file /home/swarming/.swarming/w/ir/x/t/TestGdbAutotmpTypes2165650426/001/main.go, line 9.
[New LWP 1144577]
[New LWP 1144578]
...
File runtime:
[]main.astruct
bucket<string,main.astruct>
hash<string,main.astruct>
main.astruct
typedef hash<string,main.astruct> * map[string]main.astruct;
typedef noalg.[8]main.astruct noalg.[8]main.astruct;
noalg.map.bucket[string]main.astruct
runtime-gdb_test.go:586: could not find []main.astruct; in 'info typrs astruct' output
--- FAIL: TestGdbAutotmpTypes (53.89s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,NeedsInvestigation,compiler/runtime | medium | Critical |
2,535,368,605 | kubernetes | topologySpreadConstraints for availability zone in aws is not working as expected | ### What happened?
The StatefulSet has a 3-AZ topology spread constraint:
```
topologySpreadConstraints:
- labelSelector:
    matchLabels:
      app: myApp
      component: myComponent
      id: app-65
      app-id: app-65
  maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
```
### What did you expect to happen?
It was supposed to have 3 replicas with 1 in each AZ, but it ended up with all 3 replicas in one AZ.
### How can we reproduce it (as minimally and precisely as possible)?
I was able to reproduce it twice, but not consistently.
### Anything else we need to know?
There were capacity issues in the other AZs, but that should have resulted in the pods not being scheduled at all, rather than all landing in one zone.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
Server Version: v1.28.12-eks-2f46c53
WARNING: version difference between client (1.31) and server (1.28) exceeds the supported minor version skew of +/-1
```
</details>
### Cloud provider
<details>
AWS
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,lifecycle/rotten,needs-triage | low | Minor |
2,535,376,378 | ant-design | Table component: after merging cells and fixing two columns, the merged cell does not stay fixed when scrolling horizontally | ### Reproduction link
[Reproduction demo](https://stackblitz.com/edit/react-bwlbbn?file=demo.tsx,package.json)
### Steps to reproduce
const columns: TableProps<DataType>['columns'] = [
{
title: 'Name',
dataIndex: 'name',
key: 'name',
fixed: 'left',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 8 };
}
return { colSpan: 1 };
},
},
{
title: 'Age',
dataIndex: 'age',
key: 'age',
fixed: 'left',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: 'Address',
dataIndex: 'address',
key: 'address',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: 'Tags',
key: 'tags',
dataIndex: 'tags',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: 'Action',
key: 'action',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: '杀马特',
dataIndex: '112',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: '杀马特',
dataIndex: '1124',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: '杀马特',
dataIndex: '11242',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
{
title: '杀马特',
dataIndex: '11249',
onCell: (o, i) => {
if (i === 0) {
return { colSpan: 0 };
}
return { colSpan: 1 };
},
},
];
### What is expected?
When scrolling horizontally, the merged column should stay fixed and not move.
### What is actually happening?
When scrolling horizontally, the merged column scrolls along.
| Environment | Info |
| --- | --- |
| antd | 5.20.6 |
| React | 18.3.1 |
| System | mac os 14.6.1 |
| Browser | Version 128.0.6613.138 (Official Build) (arm64) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,help wanted,Inactive | low | Minor |
2,535,389,744 | PowerToys | Workspaces: Capture not translated | ### Microsoft PowerToys version
0.84.0
### Utility with translation issue
Workspaces
### 🌐 Language affected
German
### ❌ Actual phrase(s)
Capture
### ✔️ Expected phrase(s)
Erfassen
### ℹ Why is the current translation wrong
Button still contains the text "Capture", but should show "Erfassen", as in the explanation text.
 | Issue-Bug,Area-Localization,Needs-Triage,Needs-Team-Response,Issue-Translation,Product-Workspaces | low | Major |
2,535,447,910 | vscode | `xdg-open <folder>` opens folder in VSCode instead of file browser since Ubuntu 24 |
Type: <b>Bug</b>
I recently upgraded from Ubuntu 22 to 24. Since then, trying to open a folder in the file explorer from VSCode's integrated terminal using `xdg-open /path/to/folder` results in another VSCode window opening for the folder.
If I type the same in the system terminal, it works as expected.
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Linux x64 6.8.0-44-generic snap
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz (8 x 4298)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 1, 1|
|Memory (System)|31.06GB (21.14GB free)|
|Process Argv|--no-sandbox --force-user-env --crash-reporter-id 23d56de3-001d-431f-81e8-0c352e7ea82f|
|Screen Reader|no|
|VM|14%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|x11|
</details><details><summary>Extensions (92)</summary>
Extension|Author (truncated)|Version
---|---|---
toggle-excluded-files|amo|2.0.0
tsl-problem-matcher|amo|0.6.2
vscode-zipfs|arc|3.0.0
chronicler|arc|0.1.16
search-crates-io|bel|1.2.1
lit-html|bie|1.11.1
vscode-tailwindcss|bra|0.12.10
vscode-diff-viewer|cap|1.5.0
ruff|cha|2024.48.0
css-theme-completions|con|0.0.5
esbuild-problem-matchers|con|0.0.3
vscode-eslint|dba|3.0.10
dprint|dpr|0.16.3
gitlens|eam|15.5.1
EditorConfig|Edi|0.16.4
RunOnSave|eme|0.2.0
json-tools|eri|1.0.2
prettier-vscode|esb|11.0.0
codespaces|Git|1.17.3
copilot|Git|1.229.0
copilot-chat|Git|0.20.1
vscode-github-actions|git|0.26.5
vscode-pull-request-github|Git|0.88.1
vscode-graphql-syntax|Gra|1.3.6
vscode-mocha-test-adapter|hbe|2.14.1
vscode-test-explorer|hbe|2.21.1
rest-client|hum|0.25.1
cortex-debug|mar|1.12.1
debug-tracker-vscode|mcu|0.0.15
memory-view|mcu|0.0.25
peripheral-viewer|mcu|1.4.6
rtos-views|mcu|0.0.7
template-string-converter|meg|0.6.1
git-graph|mhu|1.30.0
compare-folders|mos|0.24.3
vscode-json5|mrm|1.0.0
vscode-azureresourcegroups|ms-|0.9.5
vscode-docker|ms-|1.29.2
vscode-language-pack-de|MS-|1.93.2024091109
vscode-dotnet-runtime|ms-|2.1.5
debugpy|ms-|2024.10.0
pylint|ms-|2023.10.1
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.1
jupyter|ms-|2024.8.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.384.0
remote-ssh|ms-|0.114.3
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.3
vscode-remote-extensionpack|ms-|0.25.0
azure-account|ms-|0.12.0
cmake-tools|ms-|1.19.51
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.10.0
live-server|ms-|0.4.15
makefile-tools|ms-|0.11.13
powershell|ms-|2024.2.2
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
test-adapter-converter|ms-|0.1.9
vscode-js-profile-flame|ms-|1.0.9
vscode-serial-monitor|ms-|0.13.1
hide-gitignored|npx|1.1.0
vetur|oct|0.37.3
vscode-twoslash-queries|Ort|1.2.2
vscode-versionlens|pfl|1.14.2
typescript-mono-repo-import-helper|q|0.0.6
tsserver-live-reload|rbu|1.0.1
java|red|1.34.0
vscode-sort-json|ric|1.20.0
lit-plugin|run|1.4.3
rust-analyzer|rus|0.3.2011
gitconfig|sid|2.0.1
even-better-toml|tam|0.19.2
cmake|twx|0.0.17
sort-lines|Tyr|1.11.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
explorer|vit|1.2.8
vscode-arduino|vsc|0.7.1
vscode-gradle|vsc|3.16.4
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-maven|vsc|0.44.0
vscode-icons|vsc|12.9.0
vscode-todo-highlight|way|1.0.5
config-editor|zwa|0.0.15
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
9c06g630:31013171
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
defaultse:31133495
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | bug,snap,confirmation-pending | low | Critical |
2,535,499,333 | pytorch | Numpy related failure of gradient and sobolengine distribution testcases on POWER and x86 | ### 🐛 Describe the bug
I am seeing failures of test cases present in test/test_torch.py when run on either a POWER or an x86 machine.
Steps to reproduce:
- clone, build, and install pytorch
- `python test/test_torch.py`
Output message at the end of the run:
Ran 1021 tests in 47.036s
FAILED (failures=5, errors=2, skipped=53)
FAIL: test_sobolengine_distribution (__main__.TestTorch.test_sobolengine_distribution)
FAIL: test_sobolengine_distribution_scrambled (__main__.TestTorch.test_sobolengine_distribution_scrambled)
FAIL: test_gradient_all_cpu_complex64 (__main__.TestTorchDeviceTypeCPU.test_gradient_all_cpu_complex64)
FAIL: test_gradient_all_cpu_float32 (__main__.TestTorchDeviceTypeCPU.test_gradient_all_cpu_float32)
FAIL: test_gradient_all_cpu_int64 (__main__.TestTorchDeviceTypeCPU.test_gradient_all_cpu_int64)
A peculiar thing about these failures is that on POWER they occur only when the Python version is 3.12 and the numpy version is greater than 1.26; with numpy 1.26 and below, the failures do not reproduce, and on Python 3.10 they also do not reproduce (regardless of the numpy version).
On x86, the Python version (3.10 or 3.12) is not relevant: the tests pass with numpy <= 1.26 and fail with numpy > 1.26.4.
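For reference, the `test_gradient_all_*` tests validate `torch.gradient` against `np.gradient`; a numpy-only sketch of the reference computation (illustrative, not the actual test code):

```python
import numpy as np

# np.gradient uses second-order central differences in the interior and
# one-sided differences at the edges (edge_order=1 by default); version-
# dependent differences in this reference are one plausible suspect.
x = np.array([1.0, 2.0, 4.0, 7.0])
g = np.gradient(x)
# one-sided at the edges: 1.0 and 3.0; central in the interior: 1.5, 2.5
print(g)
```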
### Versions
Collecting environment information...
NUMA node2 CPU(s): 192-287
NUMA node3 CPU(s): 288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; __user pointer sanitization, ori31 speculation barrier enabled
Vulnerability Spectre v2: Mitigation; Software count cache flush (hardware accelerated), Software link stack flush
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.5.0a0+gitc0d2f99
[conda] nomkl 3.0 0 https://ausgsa.ibm.com:7191/gsa/ausgsa/projects/o/open-ce/conda/Open-CE-r1.11/1.11.0/opence-p10
[conda] numpy 2.1.1 py312h8c5cf51_0 conda-forge
[conda] torch 2.5.0a0+gitc0d2f99 pypi_0 pypi
cc @mruberry @rgommers | triaged,module: numpy,module: POWER | low | Critical |
2,535,518,280 | ui | [bug]: Can't open DropdownMenu when using with AlertDialog together | ### Describe the bug
```tsx
export default function Test() {
const [open, setOpen] = useState(false);
return (
<div className="p-20">
<DropdownMenu>
<DropdownMenuTrigger>Open</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuLabel>My Account</DropdownMenuLabel>
<DropdownMenuSeparator />
<DropdownMenuItem>Profile</DropdownMenuItem>
<DropdownMenuItem>Billing</DropdownMenuItem>
<DropdownMenuItem>Team</DropdownMenuItem>
<DropdownMenuItem>Subscription</DropdownMenuItem>
<Button onClick={() => setOpen(true)}>open dialog</Button> {/* open dialog */}
</DropdownMenuContent>
</DropdownMenu>
<AlertDialog open={open}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>Are you absolutely sure?</AlertDialogTitle>
<AlertDialogDescription>
This action cannot be undone. This will permanently delete your
account and remove your data from our servers.
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel onClick={() => setOpen(false)}> {/* close dialog */}
Cancel
</AlertDialogCancel>
<AlertDialogAction>Continue</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
</div>
);
}
```
**Can't open DropdownMenu again after closing AlertDialog**
### Affected component/components
DropdownMenu, AlertDialog
### How to reproduce
---
### Codesandbox/StackBlitz link
https://codesandbox.io/p/sandbox/shadcnui-issue-44hxn3
### Logs
_No response_
### System Info
```bash
---
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,535,541,278 | rust | Creating an immutable temporary object that `Copy`s a mutable reference contained by an immutable object could (?) but doesn't work | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
struct Thing<'a> {
mut_ref_value: &'a mut u8
}
fn func<'a>(thing: &'a Thing<'a>) {
func(&Thing {mut_ref_value: thing.mut_ref_value})
}
fn main() {
println!("Hello, world!");
}
```
I expected to see this happen: the code compiles (albeit without doing anything useful). Because the `Thing` constructed in `func` is an immutable temporary, the fact that `thing` is immutable shouldn't restrict using its `&mut u8`, since it can only ever be used as a `&u8`. I am not entirely certain that reasoning checks out, but I'm having a hard time seeing how it wouldn't.
It breaks the "don't have two mutable references to the same thing" rule but I don't think breaking it here could cause issues.
The code where I ran into this uses `thing.mut_ref_value` to create a genuinely different version of `Thing`, so this isn't just a random edge case nobody would ever run into.
Instead, this happened:
```
error[E0596]: cannot borrow `*thing.mut_ref_value` as mutable, as it is behind a `&` reference
--> src/main.rs:6:33
|
6 | func(&Thing {mut_ref_value: thing.mut_ref_value})
| ^^^^^^^^^^^^^^^^^^^ `thing` is a `&` reference, so the data it refers to cannot be borrowed as mutable
|
help: consider changing this to be a mutable reference
|
5 | fn func<'a>(thing: &'a mut Thing<'a>) {
| +++
```
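For contrast, the shared-reference analogue of the same shape does compile, since `&u8` is `Copy` and copying it into a new temporary needs only a shared borrow of `thing` — a minimal sketch (not the reporter's original code):

```rust
struct SharedThing<'a> {
    ref_value: &'a u8,
}

fn read<'a>(thing: &'a SharedThing<'a>) -> u8 {
    // Copying the shared reference out of `thing` is fine behind `&`:
    let again = SharedThing { ref_value: thing.ref_value };
    *again.ref_value
}

fn main() {
    let x = 7u8;
    let t = SharedThing { ref_value: &x };
    assert_eq!(read(&t), 7);
}
```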
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
N/A
```
</p>
</details>
| T-lang,T-compiler | low | Critical |
2,535,563,404 | vscode | Cannot read properties of undefined (reading 'isVisible') | ```javascript
TypeError: Cannot read properties of undefined (reading 'isVisible')
at b.convertModelPositionToViewPosition in src/vs/editor/common/viewModel/viewModelLines.ts:846:66
at a.convertModelPositionToViewPosition in src/vs/editor/common/viewModel/viewModelLines.ts:1086:22
at C.g in src/vs/editor/common/cursor/oneCursor.ts:148:61
at C.setState in src/vs/editor/common/cursor/oneCursor.ts:81:8
at $.setStates in src/vs/editor/common/cursor/cursorCollection.ts:109:19
at d.setStates in src/vs/editor/common/cursor/cursor.ts:127:17
at d.setSelections in src/vs/editor/common/cursor/cursor.ts:309:8
at <anonymous> in src/vs/editor/common/viewModel/viewModelImpl.ts:1029:65
at callback in src/vs/editor/common/viewModel/viewModelImpl.ts:1109:12
at cb in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1656:14
at D.U in src/vs/editor/common/viewModel/viewModelImpl.ts:1106:36
at D.setSelections in src/vs/editor/common/viewModel/viewModelImpl.ts:1029:8
at V.setSelections in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:899:29
at a.setSelections in src/vs/workbench/contrib/notebook/browser/viewModel/baseCellViewModel.ts:531:23
at n in src/vs/workbench/contrib/notebook/browser/view/notebookCellEditorPool.ts:96:11
at _update in src/vs/workbench/contrib/notebook/browser/view/notebookCellEditorPool.ts:105:5
at l.B in src/vs/base/common/event.ts:1242:13
at l.C in src/vs/base/common/event.ts:1253:9
at l.fire in src/vs/base/common/event.ts:1277:9
at d.value in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1747:36
at l.B in src/vs/base/common/event.ts:1242:13
at l.fire in src/vs/base/common/event.ts:1273:9
at v.s in src/vs/editor/common/viewModelEventDispatcher.ts:64:18
at v.endEmitViewEvents in src/vs/editor/common/viewModelEventDispatcher.ts:109:8
at <anonymous> in src/vs/editor/common/viewModel/viewModelImpl.ts:412:27
at listener in src/vs/editor/common/model/textModel.ts:236:38
at l.B in src/vs/base/common/event.ts:1242:13
at l.C in src/vs/base/common/event.ts:1253:9
at l.fire in src/vs/base/common/event.ts:1277:9
at oe.endDeferredEmit in src/vs/editor/common/model/textModel.ts:2512:23
at K.pushEditOperations in src/vs/editor/common/model/textModel.ts:1278:23
at o.c in src/vs/editor/common/cursor/cursor.ts:796:35
at o.executeCommands in src/vs/editor/common/cursor/cursor.ts:754:23
at d.G in src/vs/editor/common/cursor/cursor.ts:360:34
at <anonymous> in src/vs/editor/common/cursor/cursor.ts:565:11
at callback in src/vs/editor/common/cursor/cursor.ts:520:4
at d.type in src/vs/editor/common/cursor/cursor.ts:554:8
at <anonymous> in src/vs/editor/common/viewModel/viewModelImpl.ts:1056:59
at callback in src/vs/editor/common/viewModel/viewModelImpl.ts:1109:12
at cb in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1656:14
at D.U in src/vs/editor/common/viewModel/viewModelImpl.ts:1106:36
at D.S in src/vs/editor/common/viewModel/viewModelImpl.ts:1044:8
at D.type in src/vs/editor/common/viewModel/viewModelImpl.ts:1056:8
at V.Vb in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1131:29
at V.trigger in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1061:11
at F.runCommand in src/vs/editor/browser/coreCommands.ts:2144:10
at handler in src/vs/editor/browser/editorExtensions.ts:155:38
at actualHandler in src/vs/platform/commands/common/commands.ts:98:12
at fn in src/vs/platform/instantiation/common/instantiationService.ts:109:11
at w.n in src/vs/workbench/services/commands/common/commandService.ts:95:46
at w.executeCommand in src/vs/workbench/services/commands/common/commandService.ts:60:17
at Object.type in src/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1819:27
at k.type in src/vs/editor/browser/view/viewController.ts:72:24
at d.value in src/vs/editor/browser/controller/textAreaHandler.ts:350:26
at l.B in src/vs/base/common/event.ts:1242:13
at l.fire in src/vs/base/common/event.ts:1273:9
at d.value in src/vs/editor/browser/controller/textAreaInput.ts:396:18
at l.B in src/vs/base/common/event.ts:1242:13
at l.C in src/vs/base/common/event.ts:1253:9
at l.fire in src/vs/base/common/event.ts:1277:9
at HTMLTextAreaElement.S in src/vs/base/browser/event.ts:40:41
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=4849ca9bdf9666755eb463db297b69e5385090e3&bH=b7bf7955-1719-1f96-4e9a-3a2001379400) | error-telemetry | low | Critical |
2,535,564,568 | vscode | Cannot read properties of undefined (reading 'length') | ```javascript
TypeError: Cannot read properties of undefined (reading 'length')
at <anonymous> in src/vs/workbench/contrib/extensions/browser/extensionsViewlet.ts:750:49
at async Promise.all (index 1)
at async D in src/vs/workbench/services/progress/browser/progressService.ts:56:12
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=4849ca9bdf9666755eb463db297b69e5385090e3&bH=da784510-f1a8-44ea-e4ee-7f28e8984cbe) | error-telemetry | low | Critical |
2,535,566,528 | vscode | Assertion Failed: argument is undefined or null | ```javascript
Error: Assertion Failed: argument is undefined or null
at b in src/vs/base/common/types.ts:99:9
at de.ec in out/vs/workbench/workbench.desktop.main.js:2670:70132
at de.layout in src/vs/workbench/contrib/welcomeGettingStarted/browser/gettingStarted.ts:1107:9
at <anonymous> in src/vs/workbench/browser/parts/editor/editorPanes.ts:490:46
at fn in src/vs/workbench/browser/parts/editor/editorPanes.ts:507:4
at o.layout in src/vs/workbench/browser/parts/editor/editorPanes.ts:490:8
at ne.layout in src/vs/workbench/browser/parts/editor/editorGroupView.ts:2196:19
at ne.relayout in src/vs/workbench/browser/parts/editor/editorGroupView.ts:2202:9
at d.value in src/vs/workbench/browser/parts/editor/editorTitleControl.ts:99:111
at l.B in src/vs/base/common/event.ts:1242:13
at l.fire in src/vs/base/common/event.ts:1273:9
at d.value in src/vs/workbench/browser/parts/editor/breadcrumbsControl.ts:587:96
at l.B in src/vs/base/common/event.ts:1242:13
at l.fire in src/vs/base/common/event.ts:1273:9
at B.hide in src/vs/workbench/browser/parts/editor/breadcrumbsControl.ts:270:32
at B.update in src/vs/workbench/browser/parts/editor/breadcrumbsControl.ts:301:10
at g.D in src/vs/workbench/browser/parts/editor/editorTitleControl.ts:118:29
at g.openEditor in src/vs/workbench/browser/parts/editor/editorTitleControl.ts:107:8
at ne.Kb in src/vs/workbench/browser/parts/editor/editorGroupView.ts:1309:22
at ne.Jb in src/vs/workbench/browser/parts/editor/editorGroupView.ts:1259:33
at ne.openEditor in src/vs/workbench/browser/parts/editor/editorGroupView.ts:1163:15
at se in src/vs/workbench/browser/parts/editor/multiEditorTabsControl.ts:916:27
at handleClickOrTouch in src/vs/workbench/browser/parts/editor/multiEditorTabsControl.ts:931:73
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=4849ca9bdf9666755eb463db297b69e5385090e3&bH=f405a8d0-9e27-b78c-b4f2-2ad0e6dafbc2) | error-telemetry | low | Critical |
2,535,567,392 | vscode | Cannot read properties of undefined (reading 'onEvents') | ```javascript
TypeError: Cannot read properties of undefined (reading 'onEvents')
at i.$acceptModelChanged in src/vs/workbench/services/textMate/browser/backgroundTokenization/worker/textMateTokenizationWorker.worker.ts:126:32
at f.g in src/vs/base/common/worker/simpleWorker.ts:510:59
at Object.handleMessage in src/vs/base/common/worker/simpleWorker.ts:487:88
at _.k in src/vs/base/common/worker/simpleWorker.ts:245:32
at _.h in src/vs/base/common/worker/simpleWorker.ts:209:17
at _.handleMessage in src/vs/base/common/worker/simpleWorker.ts:178:8
at f.onmessage in src/vs/base/common/worker/simpleWorker.ts:493:18
at self.onmessage in src/vs/base/worker/workerMain.ts:147:57
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=4849ca9bdf9666755eb463db297b69e5385090e3&bH=a104beaa-3d36-bab8-940f-84c8d216e232) | error-telemetry | low | Critical |
2,535,595,958 | pytorch | Python 3.10 + intel-openmp failed to use numactl after import torch._C | ### 🐛 Describe the bug
Insert the following debug code into `torch/__init__.py`:
```python
366 if USE_GLOBAL_DEPS:
367     _load_global_deps()
368     import os
369     print("Before import torch._C")
370     os.system("numactl -C 1 ls")
371     from torch._C import *  # noqa: F403
372     print("After import torch._C")
373     os.system("numactl -C 1 ls")
```
How to reproduce:
```shell
LD_PRELOAD=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}/lib/libiomp5.so KMP_AFFINITY=granularity=fine,compact,1,0 python -c "import torch"
```
**Output:**
```shell
(pytorch_3.10) [root@d2a4b224fd20 workspace]# LD_PRELOAD=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}/lib/libiomp5.so KMP_AFFINITY=granularity=fine,compact,1,0 python -c "import torch"
Before import torch._C
DeepSpeed log oneCCL test.py torch-ccl vision whls whls.zip
After import torch._C
libnuma: Warning: cpu argument 1 is out of range
<1> is invalid
usage: numactl [--all | -a] [--interleave= | -i <nodes>] [--preferred= | -p <node>]
[--physcpubind= | -C <cpus>] [--cpunodebind= | -N <nodes>]
[--membind= | -m <nodes>] [--localalloc | -l] command args ...
numactl [--show | -s]
numactl [--hardware | -H]
numactl [--length | -l <length>] [--offset | -o <offset>] [--shmmode | -M <shmmode>]
[--strict | -t]
[--shmid | -I <id>] --shm | -S <shmkeyfile>
[--shmid | -I <id>] --file | -f <tmpfsfile>
[--huge | -u] [--touch | -T]
memory policy | --dump | -d | --dump-nodes | -D
memory policy is --interleave | -i, --preferred | -p, --membind | -m, --localalloc | -l
<nodes> is a comma delimited list of node numbers or A-B ranges or all.
Instead of a number a node can also be:
netdev:DEV the node connected to network device DEV
file:PATH the node the block device of path is connected to
ip:HOST the node of the network device host routes through
block:PATH the node of block device path
pci:[seg:]bus:dev[:func] The node of a PCI device
<cpus> is a comma delimited list of cpu numbers or A-B ranges or all
all ranges can be inverted with !
all numbers and ranges can be made cpuset-relative with +
the old --cpubind argument is deprecated.
use --cpunodebind or --physcpubind instead
<length> can have g (GB), m (MB) or k (KB) suffixes
```
### Versions
Python 3.10
Intel-openmp: 2024
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @frank-wei | needs reproduction,module: cpu,triaged,module: openmp,module: intel | low | Critical |
2,535,661,925 | tensorflow | tf.python.ops.array_ops.transpose aborts with "Check failed: d >= 0 (0 vs. -1)" | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf-nightly 2.18.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04.3 LTS
### Mobile device
_No response_
### Python version
3.10.14
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
TensorFlow aborts with a `CHECK`-failure (core dump) when `array_ops.transpose` is called with a permutation containing a negative axis.
### Standalone code to reproduce the issue
```python
import numpy as np
from tensorflow.python.ops import array_ops

x = np.arange(0, 8).reshape([2, 4]).astype(np.float32)
y = np.array([-1, 0]).astype(np.int32)
array_ops.transpose(x, y, conjugate=False)
```
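A hedged sketch (not TensorFlow's actual check) of the missing validation: a transpose permutation must be a permutation of `range(ndim)`; `[-1, 0]` is not, and rejecting it in Python would surface a `ValueError` instead of the C++ `CHECK`-failure abort.

```python
# Illustrative validation only; TensorFlow's real op would perform an
# equivalent check before reaching tensor_shape.cc.
def validate_perm(perm, ndim):
    if sorted(perm) != list(range(ndim)):
        raise ValueError(
            f"perm {list(perm)} is not a permutation of range({ndim})"
        )
    return list(perm)

print(validate_perm([1, 0], 2))
try:
    validate_perm([-1, 0], 2)
except ValueError as exc:
    print(exc)
```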
### Relevant log output
```shell
2024-09-19 16:16:30.137164: F tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1)
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,535,706,144 | langchain | PyPDFLoader parse pdf with extract_images=True encountered an error | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`PyPDFLoader` with `extract_images=True` (per the title, loading a PDF with image extraction enabled triggers the error).
### Error Message and Stack Trace (if applicable)
```
File "envs\xxx\Lib\site-packages\langchain_core\document_loaders\base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "envs\xxx\Lib\site-packages\langchain_community\document_loaders\pdf.py", line 202, in lazy_load
yield from self.parser.parse(blob)
^^^^^^^^^^^^^^^^^^^^^^^
File "envs\xxx\Lib\site-packages\langchain_core\document_loaders\base.py", line 126, in parse
return list(self.lazy_parse(blob))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "envs\xxx\Lib\site-packages\langchain_community\document_loaders\parsers\pdf.py", line 124, in lazy_parse
yield from [
^
File "envs\xxx\Lib\site-packages\langchain_community\document_loaders\parsers\pdf.py", line 127, in <listcomp>
+ self._extract_images_from_page(page),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "envs\xxx\Lib\site-packages\langchain_community\document_loaders\parsers\pdf.py", line 142, in _extract_images_from_page
if xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITHOUT_LOSS:
~~~~~~~~~~~~^^^^^^^^^^^
File "envs\xxx\Lib\site-packages\pypdf\generic\_data_structures.py", line 319, in __getitem__
return dict.__getitem__(self, key).get_object()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: '/Filter'
```
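The traceback shows the parser indexing `"/Filter"` unconditionally on each image XObject, while some XObjects carry no `/Filter` entry at all. A hedged sketch of a defensive lookup that would avoid the `KeyError` (helper name and filter set are illustrative, not the real module constants):

```python
# Mirrors the failing pattern at
# langchain_community/document_loaders/parsers/pdf.py:142.
_PDF_FILTER_WITHOUT_LOSS = {"FlateDecode", "LZWDecode"}  # illustrative subset

def image_filter_kind(xobject_entry: dict) -> str:
    filters = xobject_entry.get("/Filter")  # .get() instead of [] indexing
    if filters is None:
        return "missing"  # the case that currently raises KeyError
    if filters[1:] in _PDF_FILTER_WITHOUT_LOSS:
        return "lossless"
    return "other"

print(image_filter_kind({}))
print(image_filter_kind({"/Filter": "/FlateDecode"}))
```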
### Description

### System Info
> langchain: 0.2.12
> langchain_community: 0.2.11 | 🤖:bug | low | Critical |
2,535,718,949 | deno | Deno RAM usage on Linux 5x higher than on Windows | Hi,
I am developing a Deno application that mainly uses npm:mqtt.js and a rather large generated OpenAPI REST client to connect to a closed-source web server. When I start the application on Windows, it reports ~100 MB of RAM usage, which is completely fine. However, when I start the same application on Linux (e.g. WSL2, CentOS, or a small ARM machine), RAM usage is ~500 MB.
I could not find any information on the cause. Is this expected, or can you suggest steps to analyze the issue? It is hard to share code, as I would need to strip the application down completely.
Thank you,
Marius | perf | low | Minor |
2,535,788,362 | pytorch | [CompiledAutograd] "compiled_args nyi" in CppFunctionTensorPreHook | ### 🐛 Describe the bug
Hi, I met this error when testing compiled autograd:
```
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: RuntimeError: compiled_args nyi, see [Note: Compiled Autograd] N5torch8autograd24CppFunctionTensorPreHookE
```
The relevant part of the code:
```python
from torch._dynamo import compiled_autograd

def compiler_fn(gm):
    return torch.compile(gm, fullgraph=True, backend="inductor")

with compiled_autograd.enable(compiler_fn):
    train_step()
```
How can I locate the hook that will use CppFunctionTensorPreHook? Thanks!
### Error logs
_No response_
### Minified repro
_No response_
### Versions
2.4
cc @ezyang @chauhang @penguinwu @xmfan @yf225 | triaged,oncall: pt2,module: compiled autograd | low | Critical |
2,535,824,100 | rust | unexpected compiler panic | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
let _abort_handler = set.spawn(async move {
    let span = span!(Level::TRACE, HANDLER);
    let _ = span.enter();
    let _ = start_module(handler, listener, token)
        .instrument(span)
        .await;
});
```
### Meta
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_hir/src/definitions.rs:389:13:
("Failed to extract DefId", def_kind, PackedFingerprint(Fingerprint(8995154711048952027, 5129404986291511177)))
stack backtrace:
0: 0x7fccf462bf05 - std::backtrace_rs::backtrace::libunwind::trace::h23054e327d0d4b55
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x7fccf462bf05 - std::backtrace_rs::backtrace::trace_unsynchronized::h0cc587407d7f7f64
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x7fccf462bf05 - std::sys_common::backtrace::_print_fmt::h4feeb59774730d6b
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:68:5
3: 0x7fccf462bf05 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hd736fd5964392270
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:44:22
4: 0x7fccf467cc4b - core::fmt::rt::Argument::fmt::h105051d8ea1ade1e
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/rt.rs:165:63
5: 0x7fccf467cc4b - core::fmt::write::hc6043626647b98ea
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/mod.rs:1168:21
6: 0x7fccf4620bdf - std::io::Write::write_fmt::h0d24b3e0473045db
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/mod.rs:1835:15
7: 0x7fccf462bcde - std::sys_common::backtrace::_print::h62df6fc36dcebfc8
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:47:5
8: 0x7fccf462bcde - std::sys_common::backtrace::print::h45eb8174d25a1e76
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:34:9
9: 0x7fccf462e719 - std::panicking::default_hook::{{closure}}::haf3f0170eb4f3b53
10: 0x7fccf462e4ba - std::panicking::default_hook::hb5d3b27aa9f6dcda
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:298:9
11: 0x7fccf10150c1 - std[fba9fafec3bdacf8]::panicking::update_hook::<alloc[a325a9cea6fa5e89]::boxed::Box<rustc_driver_impl[ce01f96e2e949677]::install_ice_hook::{closure#0}>>::{closure#0}
12: 0x7fccf462ee4b - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::h2026a29033a1b9f6
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/alloc/src/boxed.rs:2077:9
13: 0x7fccf462ee4b - std::panicking::rust_panic_with_hook::h6b49d59f86ee588c
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:799:13
14: 0x7fccf462ebc4 - std::panicking::begin_panic_handler::{{closure}}::hd4c2f7ed79b82b70
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:664:13
15: 0x7fccf462c3c9 - std::sys_common::backtrace::__rust_end_short_backtrace::h2946d6d32d7ea1ad
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:171:18
16: 0x7fccf462e8f7 - rust_begin_unwind
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:652:5
17: 0x7fccf46791e3 - core::panicking::panic_fmt::ha02418e5cd774672
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:72:14
18: 0x7fccf10b8e98 - <rustc_hir[b82cb5fa732ae2da]::definitions::Definitions>::local_def_path_hash_to_def_id::err
19: 0x7fccf24c8e8e - <rustc_query_system[b257ee99c2874caa]::dep_graph::dep_node::DepNode as rustc_middle[ecc07153edf3c281]::dep_graph::dep_node::DepNodeExt>::extract_def_id
20: 0x7fccf1865f7a - <rustc_query_impl[c1633093ec927e0e]::plumbing::query_callback<rustc_query_impl[c1633093ec927e0e]::query_impl::def_kind::QueryType>::{closure#0} as core[1a380081440346cb]::ops::function::FnOnce<(rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_query_system[b257ee99c2874caa]::dep_graph::dep_node::DepNode)>>::call_once
21: 0x7fccf242efa9 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
22: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
23: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
24: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
25: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
26: 0x7fccf2969dc4 - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::DefaultCache<rustc_type_ir[8d868667529acadd]::canonical::Canonical<rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_middle[ecc07153edf3c281]::ty::ParamEnvAnd<rustc_middle[ecc07153edf3c281]::ty::predicate::Predicate>>, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
27: 0x7fccf2968631 - rustc_query_impl[c1633093ec927e0e]::query_impl::evaluate_obligation::get_query_incr::__rust_end_short_backtrace
28: 0x7fccef361bab - <rustc_trait_selection[bf98a35716bfd7e5]::traits::fulfill::FulfillProcessor as rustc_data_structures[25e6784d61918b0d]::obligation_forest::ObligationProcessor>::process_obligation
29: 0x7fccf2597340 - <rustc_data_structures[25e6784d61918b0d]::obligation_forest::ObligationForest<rustc_trait_selection[bf98a35716bfd7e5]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[bf98a35716bfd7e5]::traits::fulfill::FulfillProcessor>
30: 0x7fccf29503fd - <rustc_trait_selection[bf98a35716bfd7e5]::traits::engine::ObligationCtxt>::make_canonicalized_query_response::<()>
31: 0x7fccf295a042 - rustc_traits[6708e369e9f7a263]::type_op::type_op_prove_predicate
32: 0x7fccf295946f - rustc_query_impl[c1633093ec927e0e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c1633093ec927e0e]::query_impl::type_op_prove_predicate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>
33: 0x7fccf2994156 - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::DefaultCache<rustc_type_ir[8d868667529acadd]::canonical::Canonical<rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_middle[ecc07153edf3c281]::ty::ParamEnvAnd<rustc_middle[ecc07153edf3c281]::traits::query::type_op::ProvePredicate>>, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
34: 0x7fccf2993a56 - rustc_query_impl[c1633093ec927e0e]::query_impl::type_op_prove_predicate::get_query_incr::__rust_end_short_backtrace
35: 0x7fccf25e34c4 - <rustc_borrowck[2d0af345905b1241]::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle[ecc07153edf3c281]::ty::ParamEnvAnd<rustc_middle[ecc07153edf3c281]::traits::query::type_op::ProvePredicate>>
36: 0x7fccf2593136 - <rustc_borrowck[2d0af345905b1241]::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
37: 0x7fccf25dd585 - <rustc_borrowck[2d0af345905b1241]::type_check::TypeVerifier as rustc_middle[ecc07153edf3c281]::mir::visit::Visitor>::visit_constant
38: 0x7fccefe025ea - <rustc_borrowck[2d0af345905b1241]::type_check::TypeVerifier as rustc_middle[ecc07153edf3c281]::mir::visit::Visitor>::visit_body
39: 0x7fccefc63b25 - rustc_borrowck[2d0af345905b1241]::type_check::type_check
40: 0x7fccefc225e2 - rustc_borrowck[2d0af345905b1241]::nll::compute_regions
41: 0x7fccf32c2ebe - rustc_borrowck[2d0af345905b1241]::do_mir_borrowck
42: 0x7fccf32b3c3b - rustc_query_impl[c1633093ec927e0e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c1633093ec927e0e]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>
43: 0x7fccf255d24d - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::VecCache<rustc_span[4d50fd03223eefaa]::def_id::LocalDefId, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
44: 0x7fccf255bc36 - rustc_query_impl[c1633093ec927e0e]::plumbing::force_from_dep_node::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::VecCache<rustc_span[4d50fd03223eefaa]::def_id::LocalDefId, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>, false, false, false>>
45: 0x7fccf34eef6d - <rustc_query_impl[c1633093ec927e0e]::plumbing::query_callback<rustc_query_impl[c1633093ec927e0e]::query_impl::mir_borrowck::QueryType>::{closure#0} as core[1a380081440346cb]::ops::function::FnOnce<(rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_query_system[b257ee99c2874caa]::dep_graph::dep_node::DepNode)>>::call_once
46: 0x7fccf242efa9 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
47: 0x7fccf287b725 - rustc_query_system[b257ee99c2874caa]::query::plumbing::ensure_must_run::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::VecCache<rustc_span[4d50fd03223eefaa]::def_id::LocalDefId, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
48: 0x7fccf287b45c - rustc_query_impl[c1633093ec927e0e]::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
49: 0x7fccf2e93744 - rustc_interface[c31201428b712578]::passes::analysis
50: 0x7fccf2e93025 - rustc_query_impl[c1633093ec927e0e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c1633093ec927e0e]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 1usize]>>
51: 0x7fccf3139ef5 - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::SingleCache<rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
52: 0x7fccf3139b38 - rustc_query_impl[c1633093ec927e0e]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
53: 0x7fccf301224d - rustc_interface[c31201428b712578]::interface::run_compiler::<core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>, rustc_driver_impl[ce01f96e2e949677]::run_compiler::{closure#0}>::{closure#1}
54: 0x7fccf3147869 - std[fba9fafec3bdacf8]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[c31201428b712578]::util::run_in_thread_with_globals<rustc_interface[c31201428b712578]::interface::run_compiler<core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>, rustc_driver_impl[ce01f96e2e949677]::run_compiler::{closure#0}>::{closure#1}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>
55: 0x7fccf314766a - <<std[fba9fafec3bdacf8]::thread::Builder>::spawn_unchecked_<rustc_interface[c31201428b712578]::util::run_in_thread_with_globals<rustc_interface[c31201428b712578]::interface::run_compiler<core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>, rustc_driver_impl[ce01f96e2e949677]::run_compiler::{closure#0}>::{closure#1}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>::{closure#2} as core[1a380081440346cb]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
56: 0x7fccf4638e3b - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hdf5fcef8be77a431
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/alloc/src/boxed.rs:2063:9
57: 0x7fccf4638e3b - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h8e8c5ceee46ee198
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/alloc/src/boxed.rs:2063:9
58: 0x7fccf4638e3b - std::sys::pal::unix::thread::Thread::new::thread_start::hb85dbfa54ba503d6
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys/pal/unix/thread.rs:108:17
59: 0x7fccedea4dab - <unknown>
60: 0x7fccedf269f8 - <unknown>
61: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@zxy/src/run/runner.rs:22:1: 79:2}: core::marker::Send`
#1 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<{async block@zxy/src/main.rs:135:48: 141:14} as core::marker::Send>, polarity:Positive), bound_vars: [] } }`
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 type_of(thread 'rustc' panicked at compiler/rustc_hir/src/definitions.rs:389:13:
("Failed to extract DefId", type_of, PackedFingerprint(Fingerprint(8995154711048952027, 5129404986291511177)))
stack backtrace:
0: 0x7fccf462bf05 - std::backtrace_rs::backtrace::libunwind::trace::h23054e327d0d4b55
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x7fccf462bf05 - std::backtrace_rs::backtrace::trace_unsynchronized::h0cc587407d7f7f64
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x7fccf462bf05 - std::sys_common::backtrace::_print_fmt::h4feeb59774730d6b
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:68:5
3: 0x7fccf462bf05 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hd736fd5964392270
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:44:22
4: 0x7fccf467cc4b - core::fmt::rt::Argument::fmt::h105051d8ea1ade1e
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/rt.rs:165:63
5: 0x7fccf467cc4b - core::fmt::write::hc6043626647b98ea
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/mod.rs:1168:21
6: 0x7fccf4620bdf - std::io::Write::write_fmt::h0d24b3e0473045db
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/mod.rs:1835:15
7: 0x7fccf462bcde - std::sys_common::backtrace::_print::h62df6fc36dcebfc8
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:47:5
8: 0x7fccf462bcde - std::sys_common::backtrace::print::h45eb8174d25a1e76
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:34:9
9: 0x7fccf462e719 - std::panicking::default_hook::{{closure}}::haf3f0170eb4f3b53
10: 0x7fccf462e4ba - std::panicking::default_hook::hb5d3b27aa9f6dcda
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:298:9
11: 0x7fccf10150c1 - std[fba9fafec3bdacf8]::panicking::update_hook::<alloc[a325a9cea6fa5e89]::boxed::Box<rustc_driver_impl[ce01f96e2e949677]::install_ice_hook::{closure#0}>>::{closure#0}
12: 0x7fccf462ee4b - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::h2026a29033a1b9f6
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/alloc/src/boxed.rs:2077:9
13: 0x7fccf462ee4b - std::panicking::rust_panic_with_hook::h6b49d59f86ee588c
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:799:13
14: 0x7fccf462ebc4 - std::panicking::begin_panic_handler::{{closure}}::hd4c2f7ed79b82b70
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:664:13
15: 0x7fccf462c3c9 - std::sys_common::backtrace::__rust_end_short_backtrace::h2946d6d32d7ea1ad
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys_common/backtrace.rs:171:18
16: 0x7fccf462e8f7 - rust_begin_unwind
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:652:5
17: 0x7fccf46791e3 - core::panicking::panic_fmt::ha02418e5cd774672
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:72:14
18: 0x7fccf10b8e98 - <rustc_hir[b82cb5fa732ae2da]::definitions::Definitions>::local_def_path_hash_to_def_id::err
19: 0x7fccf24c8e8e - <rustc_query_system[b257ee99c2874caa]::dep_graph::dep_node::DepNode as rustc_middle[ecc07153edf3c281]::dep_graph::dep_node::DepNodeExt>::extract_def_id
20: 0x7fccf13a6bf1 - rustc_interface[c31201428b712578]::callbacks::dep_node_debug
21: 0x7fccf18d1ed7 - <rustc_query_system[b257ee99c2874caa]::dep_graph::dep_node::DepNode as core[1a380081440346cb]::fmt::Debug>::fmt
22: 0x7fccf467cc4b - core::fmt::rt::Argument::fmt::h105051d8ea1ade1e
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/rt.rs:165:63
23: 0x7fccf467cc4b - core::fmt::write::hc6043626647b98ea
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/mod.rs:1168:21
24: 0x7fccf461ed2c - std::io::Write::write_fmt::he0bbfedff2a3d05f
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/mod.rs:1835:15
25: 0x7fccf461ed2c - <&std::io::stdio::Stderr as std::io::Write>::write_fmt::hf5e7a2612a23a5f3
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:1019:9
26: 0x7fccf461f6d8 - <std::io::stdio::Stderr as std::io::Write>::write_fmt::he81757d53901a839
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:993:9
27: 0x7fccf461f6d8 - std::io::stdio::print_to::he3833d5f094ba64e
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:1117:21
28: 0x7fccf461f6d8 - std::io::stdio::_eprint::h91d9d2e9cdee0bf0
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:1238:5
29: 0x7fccf1805203 - rustc_query_system[b257ee99c2874caa]::dep_graph::graph::print_markframe_trace::<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>
30: 0x7fccf242f5ef - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
31: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
32: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
33: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
34: 0x7fccf242ef22 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
35: 0x7fccf2969dc4 - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::DefaultCache<rustc_type_ir[8d868667529acadd]::canonical::Canonical<rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_middle[ecc07153edf3c281]::ty::ParamEnvAnd<rustc_middle[ecc07153edf3c281]::ty::predicate::Predicate>>, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
36: 0x7fccf2968631 - rustc_query_impl[c1633093ec927e0e]::query_impl::evaluate_obligation::get_query_incr::__rust_end_short_backtrace
37: 0x7fccef361bab - <rustc_trait_selection[bf98a35716bfd7e5]::traits::fulfill::FulfillProcessor as rustc_data_structures[25e6784d61918b0d]::obligation_forest::ObligationProcessor>::process_obligation
38: 0x7fccf2597340 - <rustc_data_structures[25e6784d61918b0d]::obligation_forest::ObligationForest<rustc_trait_selection[bf98a35716bfd7e5]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[bf98a35716bfd7e5]::traits::fulfill::FulfillProcessor>
39: 0x7fccf29503fd - <rustc_trait_selection[bf98a35716bfd7e5]::traits::engine::ObligationCtxt>::make_canonicalized_query_response::<()>
40: 0x7fccf295a042 - rustc_traits[6708e369e9f7a263]::type_op::type_op_prove_predicate
41: 0x7fccf295946f - rustc_query_impl[c1633093ec927e0e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c1633093ec927e0e]::query_impl::type_op_prove_predicate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>
42: 0x7fccf2994156 - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::DefaultCache<rustc_type_ir[8d868667529acadd]::canonical::Canonical<rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_middle[ecc07153edf3c281]::ty::ParamEnvAnd<rustc_middle[ecc07153edf3c281]::traits::query::type_op::ProvePredicate>>, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
43: 0x7fccf2993a56 - rustc_query_impl[c1633093ec927e0e]::query_impl::type_op_prove_predicate::get_query_incr::__rust_end_short_backtrace
44: 0x7fccf25e34c4 - <rustc_borrowck[2d0af345905b1241]::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle[ecc07153edf3c281]::ty::ParamEnvAnd<rustc_middle[ecc07153edf3c281]::traits::query::type_op::ProvePredicate>>
45: 0x7fccf2593136 - <rustc_borrowck[2d0af345905b1241]::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
46: 0x7fccf25dd585 - <rustc_borrowck[2d0af345905b1241]::type_check::TypeVerifier as rustc_middle[ecc07153edf3c281]::mir::visit::Visitor>::visit_constant
47: 0x7fccefe025ea - <rustc_borrowck[2d0af345905b1241]::type_check::TypeVerifier as rustc_middle[ecc07153edf3c281]::mir::visit::Visitor>::visit_body
48: 0x7fccefc63b25 - rustc_borrowck[2d0af345905b1241]::type_check::type_check
49: 0x7fccefc225e2 - rustc_borrowck[2d0af345905b1241]::nll::compute_regions
50: 0x7fccf32c2ebe - rustc_borrowck[2d0af345905b1241]::do_mir_borrowck
51: 0x7fccf32b3c3b - rustc_query_impl[c1633093ec927e0e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c1633093ec927e0e]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>
52: 0x7fccf255d24d - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::VecCache<rustc_span[4d50fd03223eefaa]::def_id::LocalDefId, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
53: 0x7fccf255bc36 - rustc_query_impl[c1633093ec927e0e]::plumbing::force_from_dep_node::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::VecCache<rustc_span[4d50fd03223eefaa]::def_id::LocalDefId, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 8usize]>>, false, false, false>>
54: 0x7fccf34eef6d - <rustc_query_impl[c1633093ec927e0e]::plumbing::query_callback<rustc_query_impl[c1633093ec927e0e]::query_impl::mir_borrowck::QueryType>::{closure#0} as core[1a380081440346cb]::ops::function::FnOnce<(rustc_middle[ecc07153edf3c281]::ty::context::TyCtxt, rustc_query_system[b257ee99c2874caa]::dep_graph::dep_node::DepNode)>>::call_once
55: 0x7fccf242efa9 - <rustc_query_system[b257ee99c2874caa]::dep_graph::graph::DepGraphData<rustc_middle[ecc07153edf3c281]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
56: 0x7fccf287b725 - rustc_query_system[b257ee99c2874caa]::query::plumbing::ensure_must_run::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::VecCache<rustc_span[4d50fd03223eefaa]::def_id::LocalDefId, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt>
57: 0x7fccf287b45c - rustc_query_impl[c1633093ec927e0e]::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
58: 0x7fccf2e93744 - rustc_interface[c31201428b712578]::passes::analysis
59: 0x7fccf2e93025 - rustc_query_impl[c1633093ec927e0e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c1633093ec927e0e]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 1usize]>>
60: 0x7fccf3139ef5 - rustc_query_system[b257ee99c2874caa]::query::plumbing::try_execute_query::<rustc_query_impl[c1633093ec927e0e]::DynamicConfig<rustc_query_system[b257ee99c2874caa]::query::caches::SingleCache<rustc_middle[ecc07153edf3c281]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[c1633093ec927e0e]::plumbing::QueryCtxt, true>
61: 0x7fccf3139b38 - rustc_query_impl[c1633093ec927e0e]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
62: 0x7fccf301224d - rustc_interface[c31201428b712578]::interface::run_compiler::<core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>, rustc_driver_impl[ce01f96e2e949677]::run_compiler::{closure#0}>::{closure#1}
63: 0x7fccf3147869 - std[fba9fafec3bdacf8]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[c31201428b712578]::util::run_in_thread_with_globals<rustc_interface[c31201428b712578]::interface::run_compiler<core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>, rustc_driver_impl[ce01f96e2e949677]::run_compiler::{closure#0}>::{closure#1}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>
64: 0x7fccf314766a - <<std[fba9fafec3bdacf8]::thread::Builder>::spawn_unchecked_<rustc_interface[c31201428b712578]::util::run_in_thread_with_globals<rustc_interface[c31201428b712578]::interface::run_compiler<core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>, rustc_driver_impl[ce01f96e2e949677]::run_compiler::{closure#0}>::{closure#1}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[1a380081440346cb]::result::Result<(), rustc_span[4d50fd03223eefaa]::ErrorGuaranteed>>::{closure#2} as core[1a380081440346cb]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
65: 0x7fccf4638e3b - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hdf5fcef8be77a431
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/alloc/src/boxed.rs:2063:9
66: 0x7fccf4638e3b - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h8e8c5ceee46ee198
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/alloc/src/boxed.rs:2063:9
67: 0x7fccf4638e3b - std::sys::pal::unix::thread::Thread::new::thread_start::hb85dbfa54ba503d6
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/sys/pal/unix/thread.rs:108:17
68: 0x7fccedea4dab - <unknown>
69: 0x7fccedf269f8 - <unknown>
70: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@zxy/src/run/runner.rs:22:1: 79:2}: core::marker::Send`
#1 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<{async block@zxy/src/main.rs:135:48: 141:14} as core::marker::Send>, polarity:Positive), bound_vars: [] } }`
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 mir_borrowck(zxy[7cd5]::main)
end of try_mark_green dep node stack
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_hir/src/definitions.rs:389:13:
("Failed to extract DefId", def_kind, PackedFingerprint(Fingerprint(8995154711048952027, 5129404986291511177)))
stack backtrace:
0: rust_begin_unwind
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:652:5
1: core::panicking::panic_fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:72:14
2: <rustc_hir::definitions::Definitions>::local_def_path_hash_to_def_id::err
3: <rustc_query_system::dep_graph::dep_node::DepNode as rustc_middle::dep_graph::dep_node::DepNodeExt>::extract_def_id
4: <rustc_query_impl::plumbing::query_callback<rustc_query_impl::query_impl::def_kind::QueryType>::{closure#0} as core::ops::function::FnOnce<(rustc_middle::ty::context::TyCtxt, rustc_query_system::dep_graph::dep_node::DepNode)>>::call_once
5: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
6: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
7: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
8: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
9: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
10: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefaultCache<rustc_type_ir::canonical::Canonical<rustc_middle::ty::context::TyCtxt, rustc_middle::ty::ParamEnvAnd<rustc_middle::ty::predicate::Predicate>>, rustc_middle::query::erase::Erased<[u8; 2]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
11: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
12: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection::traits::fulfill::FulfillProcessor>
13: <rustc_trait_selection::traits::engine::ObligationCtxt>::make_canonicalized_query_response::<()>
14: rustc_traits::type_op::type_op_prove_predicate
[... omitted 1 frame ...]
15: <rustc_borrowck::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>
16: <rustc_borrowck::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
17: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_constant
18: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_body
19: rustc_borrowck::type_check::type_check
20: rustc_borrowck::nll::compute_regions
21: rustc_borrowck::do_mir_borrowck
[... omitted 5 frames ...]
22: rustc_interface::passes::analysis
[... omitted 1 frame ...]
23: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@zxy/src/run/runner.rs:22:1: 79:2}: core::marker::Send`
#1 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<{async block@zxy/src/main.rs:135:48: 141:14} as core::marker::Send>, polarity:Positive), bound_vars: [] } }`
#2 [mir_borrowck] borrow-checking `main::{closure#0}`
#3 [analysis] running analysis passes on this crate
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 type_of(thread 'rustc' panicked at compiler/rustc_hir/src/definitions.rs:389:13:
("Failed to extract DefId", type_of, PackedFingerprint(Fingerprint(8995154711048952027, 5129404986291511177)))
stack backtrace:
0: rust_begin_unwind
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:652:5
1: core::panicking::panic_fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:72:14
2: <rustc_hir::definitions::Definitions>::local_def_path_hash_to_def_id::err
3: <rustc_query_system::dep_graph::dep_node::DepNode as rustc_middle::dep_graph::dep_node::DepNodeExt>::extract_def_id
4: rustc_interface::callbacks::dep_node_debug
5: <rustc_query_system::dep_graph::dep_node::DepNode as core::fmt::Debug>::fmt
6: core::fmt::rt::Argument::fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/rt.rs:165:63
7: core::fmt::write
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/fmt/mod.rs:1168:21
8: std::io::Write::write_fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/mod.rs:1835:15
9: <&std::io::stdio::Stderr as std::io::Write>::write_fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:1019:9
10: <std::io::stdio::Stderr as std::io::Write>::write_fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:993:9
11: std::io::stdio::print_to
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:1117:21
12: std::io::stdio::_eprint
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/io/stdio.rs:1238:5
13: rustc_query_system::dep_graph::graph::print_markframe_trace::<rustc_middle::dep_graph::DepsType>
14: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
15: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
16: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
17: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
18: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
19: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefaultCache<rustc_type_ir::canonical::Canonical<rustc_middle::ty::context::TyCtxt, rustc_middle::ty::ParamEnvAnd<rustc_middle::ty::predicate::Predicate>>, rustc_middle::query::erase::Erased<[u8; 2]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
20: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
21: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection::traits::fulfill::FulfillProcessor>
22: <rustc_trait_selection::traits::engine::ObligationCtxt>::make_canonicalized_query_response::<()>
23: rustc_traits::type_op::type_op_prove_predicate
[... omitted 1 frame ...]
24: <rustc_borrowck::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>
25: <rustc_borrowck::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
26: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_constant
27: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_body
28: rustc_borrowck::type_check::type_check
29: rustc_borrowck::nll::compute_regions
30: rustc_borrowck::do_mir_borrowck
[... omitted 5 frames ...]
31: rustc_interface::passes::analysis
[... omitted 1 frame ...]
32: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@zxy/src/run/runner.rs:22:1: 79:2}: core::marker::Send`
#1 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<{async block@zxy/src/main.rs:135:48: 141:14} as core::marker::Send>, polarity:Positive), bound_vars: [] } }`
#2 [mir_borrowck] borrow-checking `main::{closure#0}`
#3 [analysis] running analysis passes on this crate
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 mir_borrowck(zxy[7cd5]::main)
end of try_mark_green dep node stack
```
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,535,830,323 | pytorch | Fault tolerance with k8s | ### 🚀 The feature, motivation and pitch
I am working on training fault tolerance.
We want to restart only the single failed training node when there is a hardware failure, but the existing design does not allow the agent to exit; it restarts the training process instead. I would like the agent to be able to exit when an error occurs, while the agents on the non-faulty training nodes keep running.
### Alternatives
1. Write the number of restarts to etcd/tcp_store
2. Add a configuration parameter: if a training process exits with an error, the agent exits
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,needs design | low | Critical |
2,535,842,674 | godot | SurfaceFlinger error messages on Android device using adb logcat | ### Tested versions
4.3.stable.mono
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Mobile) - integrated Intel(R) Iris(R) Xe Graphics (Intel Corporation; 31.0.101.5590) - 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz (8 Threads)
### Issue description
When running a basic project using Remote Debug on an Android device (Galaxy S10e) the following message continually appears in adb logcat:
606 E SurfaceFlinger: Attempt to update InputPolicyFlags without permission ACCESS_SURFACE_FLINGER
The messages stop when the app is backgrounded or closed, so it's definitely not just the device. It almost looks like some kind of infinite loop, which makes it impossible to find anything else in the debug log.
I have tried checking the "Access Surface Flinger" permission in the export settings, but this doesn't seem to make any difference.
### Steps to reproduce
Reproducible with a minimal Godot project.
No code, just a PanelContainer scene containing a Label.
### Minimal reproduction project (MRP)
N/A | platform:android | low | Critical |
2,535,866,119 | vscode | MSYS2 git looks broken |
Does this issue occur when all extensions are disabled?: No
- VS Code Version: 1.93.1
- OS Version: Win10 home
As I found, recent versions of Node.js can no longer run .bat scripts by default: https://nodejs.org/en/blog/vulnerability/april-2024-security-releases-2
That means that the recipe from #4651 no longer works.
Steps to Reproduce:
follow the original post, or if you don't have MSYS, just do the following:
1. Create a `git-wrap.bat` with a single line `C:\path-to-git\git.exe %*`
2. set `"git.path": "C:/bla-bla/git-wrap.bat",` in the `settings.json`
3. reload vscode and check Output -> Git
```
2024-09-19 13:39:54.871 [info] [main] Log level: Info
2024-09-19 13:39:54.872 [info] [main] Validating found git in: "C:/msys64/git-wrap.bat"
2024-09-19 13:39:54.872 [info] Unable to find git on the PATH: "C:/msys64/git-wrap.bat". Error: spawn EINVAL
2024-09-19 13:39:54.872 [info] [main] Validating found git in: "C:\Program Files\Git\cmd\git.exe"
2024-09-19 13:39:54.872 [info] [main] Validating found git in: "C:\Program Files (x86)\Git\cmd\git.exe"
2024-09-19 13:39:54.872 [info] [main] Validating found git in: "C:\Program Files\Git\cmd\git.exe"
2024-09-19 13:39:54.872 [info] [main] Validating found git in: "C:\Users\esaul\AppData\Local\Programs\Git\cmd\git.exe"
2024-09-19 13:40:00.761 [warning] Unable to find git. Error: not found: git.exe
2024-09-19 13:40:00.776 [warning] [main] Failed to create model: Error: Git installation not found.
```
| bug,git | low | Critical |
2,535,867,827 | TypeScript | Source mappings are missing for serialized properties | ### 🔎 Search Terms
source map declaration map properties symbols navigation definition goto
### 🕗 Version & Regression Information
- This is the behavior in every version I tried
### ⏯ Playground Link
N/A
### 💻 Code
```ts
// api.ts
type ValidateShape<T> = {
[K in keyof T]: T[K];
};
function test<T>(arg: ValidateShape<T>) {
function createCaller<T>(arg: T): () => {
[K in keyof T]: () => T[K];
} {
return null as any;
}
return {
createCaller: createCaller(arg),
};
}
const api = test({
foo/*target*/: 1,
bar: "",
});
export const createCaller = api.createCaller;
// main.ts
import { createCaller } from "./api";
const caller = createCaller();
caller.foo/*source*/;
```
### 🙁 Actual behavior
Please consider the 2 files above as 2 separate projects.
When those files are in the same project we can successfully go from the source marker to the target marker with "go to definition" and others like it. The same works with project references on.
However, when `api.ts` is compiled with declaration maps and the output is consumed through its declaration files, this navigation no longer works correctly.
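For reference, this is the kind of declaration-map build being described. The compiler options below are my assumption of a minimal setup for `api.ts`'s project, not taken from the report:

```json
{
  "compilerOptions": {
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true,
    "outDir": "dist"
  }
}
```

With a setup like this, `main.ts` resolves `createCaller` through the emitted `.d.ts`, and "go to definition" has to map back through the `.d.ts.map` to reach the `foo` property in `api.ts`.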
### 🙂 Expected behavior
I'd expect this navigation to continue to work
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Minor |
2,535,872,067 | go | proposal: mime: expand on what is covered by builtinTypes | ### Proposal Details
Right now, mime/type.go includes what seems to be a somewhat arbitrary list of built-in types:
```go
var builtinTypesLower = map[string]string{
".avif": "image/avif",
".css": "text/css; charset=utf-8",
".gif": "image/gif",
".htm": "text/html; charset=utf-8",
".html": "text/html; charset=utf-8",
".jpeg": "image/jpeg",
".jpg": "image/jpeg",
".js": "text/javascript; charset=utf-8",
".json": "application/json",
".mjs": "text/javascript; charset=utf-8",
".pdf": "application/pdf",
".png": "image/png",
".svg": "image/svg+xml",
".wasm": "application/wasm",
".webp": "image/webp",
".xml": "text/xml; charset=utf-8",
}
```
I think some guidance on what should be included here would be good, so that consumers of the package are not surprised by arbitrary gaps. In the meantime I will submit a PR that incorporates all of the MDN-defined ["Common Types"](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) (which, admittedly, is also arbitrary, but at least covers more common use cases). | Proposal | medium | Major |
2,535,907,102 | stable-diffusion-webui | Can model sharing be realized for two webui projects? I need to test the effect of different versions, now I have two versions of webui, but my model files need to be migrated a lot, is there any configuration file to support model path sharing? Like comfyui | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Can model sharing be set up between two webui installations? I need to test the behavior of different versions, so I currently have two copies of webui, but that means migrating a large number of model files. Is there a configuration file that supports sharing model paths between installations, like ComfyUI has?
### Steps to reproduce the problem
### What should have happened?
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
### Console logs
```Shell
```
### Additional information
_No response_ | asking-for-help-with-local-system-issues | low | Critical |
2,535,952,000 | go | mime: builtinTypes ".xml" defaults to "text/xml" instead of "application/xml" | ### Go version
go version go1.23.0 windows/amd64
### Output of `go env` in your module/workspace:
```shell
DNA
```
### What did you do?
DNA
### What did you see happen?
mime/type.go uses `".xml": "text/xml; charset=utf-8",` for builtinTypes
### What did you expect to see?
[RFC 7303, Section 4.1](https://datatracker.ietf.org/doc/html/rfc7303#section-4.1) states:
> this specification alters the handling of character encoding of text/xml and text/xml-external-parsed-entity, treating them no differently from the respective application/ types. However, application/xml and application/xml-external-parsed-entity are still RECOMMENDED, to avoid possible confusion based on the earlier distinction.
So, following the recommendation, `builtinTypesLower` in mime/type.go should say:
```go
".xml": "application/xml; charset=utf-8",
``` | NeedsInvestigation | low | Major |
2,535,964,496 | deno | Built-in version manager | ## Introduction
This Issue builds upon the previous issue https://github.com/denoland/deno/issues/5214 and relates to https://github.com/denoland/deno/issues/24157
As we are closing in on a 2.0 release, I would like the team to reconsider a built-in version manager similar to Rust's [rustup](https://rustup.rs/). A version manager provided by Deno would give more stability for older projects as well as for newer ones.
If this gets support from the Deno team, I would be happy to contribute with the code changes.
> For the following, `denoup` will be used as a placeholder for the version management tool, it is not final.
## Features
- Deno provides a built-in version manager similar to rustup
- On a system, multiple deno versions can be installed
- By default, the latest locally installed version will be used unless otherwise specified
- In a project, the deno version can be specified using a new `denoVersion` or `engines.deno` property (not both, we just have to decide on the better path)
- The version specified in a project is optional, and can be a semver or a semver range
### Challenges:
Possible challenges include, among others, backwards compatibility with Deno versions that do not consider a Deno version property. To get around this, the deno executable could be split up into three separate parts:
1. denoup: the version manager for the toolchain
2. deno wrapper executable: the executable that will be mapped to the `deno` command, but will only check the version restriction and forward the command to the deno executable if the restrictions are satisfied.
3. deno executable: this will be the main executable as is available now, but probably without the upgrade command as it will be handled by denoup. This executable can also be downloaded as a standalone e.g. for docker images.
### Out of scope:
- Individual modules that are imported will not be handled by separate deno versions. It is up to the respective modules to provide support for deno versions.
## User stories
A developer runs the installer script for deno. The script first installs a separate tool `denoup`, which will be executed to install the latest version of deno and set it as active. The developer would create a new deno project using `deno init`. The newly created `deno.json` will not contain the deno version property, so by default, the active deno version will be used.
A developer has deno version `1.2.3` installed on their system, and wants to clone a deno project from GitHub. The project has a `deno.json` file with the deno version `~1.0.20`. The developer runs `deno task run` which compares the versions. The semver does not match the installed version, so deno gives back an error saying that the engine version does not satisfy the requirements of the package. The output states `Deno version is not satisfied, please run again with "--sync-version" or use the latest satisfied version by running "denoup use ~1.0.20" (optionally add "--latest" if you want to check for the latest allowed version)`.
A developer has deno version `1.2.3` installed on their system, and wants to clone a deno project from GitHub. The project has a `deno.json` file with the deno version `>=1.1.1 <1.2.4`. The developer runs `deno task run` which compares the versions. The active deno version is within the range, so the script will run.
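The version comparison in these stories could be sketched with a minimal, hand-rolled comparator (illustration only, covering plain `x.y.z` versions and `>=a.b.c <x.y.z` ranges; a real implementation would presumably use a full semver library):

```typescript
// Compare two "x.y.z" version strings numerically.
function cmp(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

// Check an active version against a space-separated range such as the
// ">=1.1.1 <1.2.4" user story, or a single exact version.
function satisfies(version: string, range: string): boolean {
  for (const part of range.split(/\s+/)) {
    if (part.startsWith(">=")) {
      if (cmp(version, part.slice(2)) < 0) return false;
    } else if (part.startsWith("<")) {
      if (cmp(version, part.slice(1)) >= 0) return false;
    } else if (cmp(version, part) !== 0) {
      return false;
    }
  }
  return true;
}

console.log(satisfies("1.2.3", ">=1.1.1 <1.2.4"));  // true: run the task
console.log(satisfies("1.2.3", ">=1.0.20 <1.1.0")); // false: suggest denoup
```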
## Existing work
- [Node Version Manager (nvm)](https://github.com/nvm-sh/nvm)
- [Deno Version Manager (dvm)](https://github.com/justjavac/dvm)
- [Rustup](https://rustup.rs/) | cli,suggestion | low | Critical |
2,535,993,305 | vscode | Edit Context: Screen Reader Users Feedback | Pinging @jooyoungseo, @rperez030 and @meganrogge
Recently we have been working on adopting the EditContext API (https://developer.mozilla.org/en-US/docs/Web/API/EditContext_API) within VS Code. The EditContext is a new DOM property that can be set on DOM elements which decouples text input from the textual mutations of the DOM element. Essentially when the user focuses a DOM element on which an EditContext is set and types, the EditContext fires 'textupdate' events, and it is up to the user to mutate the DOM element with the changes from this event.
There are several reasons why we have adopted this API:
- This API has allowed us to greatly simplify the code which handles text input events.
- This API has allowed us to close numerous IME related bugs
- This API can allow us to emit customized typing information from the input events
We have an experimental setting which enables the EditContext API with ID `editor.experimentalEditContextEnabled`. We would like to ask @jooyoungseo and @rperez030 to try this setting with a screen reader when you have time, and let us know if you see any issues when typing in the various inputs of VS Code (the editor, the panel chat input, the SCM view, the quick input, i.e. any input that accepts text insertions). We would like to gather feedback from screen reader users before enabling this setting by default. | accessibility,editor-edit-context | medium | Critical |
2,536,016,300 | vscode | Allow use of custom menu with native titlebar (on Linux and Windows) | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
_This is my attempt to find a suitable workaround to #224515, given that we will probably not see a proper resolution to that bug for a long time if ever. **I have implemented a proof-of-concept and intend to submit a PR for this shortly.**_
Add a "Force Custom Menu Style" option - boolean, default false.

If false, behavior is unchanged. If true, and "Title Bar Style" is "native", and we're running on Linux or Windows (notably: not Mac*, not web), then we display the OS native title bar, but we display the custom menu bar below it, taking care not to repeat the window title. Of course, it affects context menus as well as the main menu bar. The result: a pleasant user experience while using the native title bar.

_Debian 12, MATE_
Compare with the screenshots from [my comment here](https://github.com/microsoft/vscode/issues/224515#issuecomment-2282198197) - unfortunately I can't show the same menu, since I understand it's not possible (not allowed) to use the Marketplace from an unofficial build of VS Code, but hopefully it should still illustrate the point.
By default, you'll also get the "Command Center" and "Layout Controls" showing up next to the menu, since this menu bar is really just the custom title bar, forced to be displayed. The above screenshot, showing the classic "separate title bar and menu bar, with no additional clutter" layout, was accomplished by just using the existing settings to hide these items.
_(*I have assumed that this isn't applicable on Mac, based on basic knowledge of the Mac interface and various comments in the VS Code source, but I don't use Mac myself, so I have no way to test this. If someone else finds a reason why it'd be useful to have this feature on Mac as well, then feel free to enable it there.)_ | feature-request,system-context-menu | low | Critical |
2,536,068,798 | node | Any idea when Single executable application feature will reach stable status? | ### Affected URL(s)
https://nodejs.org/api/single-executable-applications.html
### Description of the problem
I notice that Single executable applications are still described as under "Active Development". Is there any sense when this might progress to at least a release candidate? | question,experimental,single-executable | low | Minor |
2,536,128,097 | ollama | OpenAI o1-like Chain-of-thought (CoT) inference workflow | Well, I am surprised that the "main" and "great" new feature of the new OpenAI o1 model is essentially a more sophisticated inference workflow employing something like a chain-of-thought process. Basically, I understand it as: even a "dumb" model can perform much better when it "thinks more" during inference. The great news they are telling us is that by "thinking more" you can get smarter, which is probably also very true for humans.
The o1 model is probably trained to come up with its own CoT workflow for any given prompt, but I think it could be interesting to try to hardcode some kind of workflow that any standard LLM may follow during inference. Basically, let the model analyze the prompt from various perspectives first and then judge what type of "inference workflow" it should employ.
The hardcoded workflow could look like this:
1. Prompt is submitted to the model.
2. The model asks itself couple of hard-coded questions about the prompt, maybe:
- is that some light conversation (needing soft-skills like empathy etc)
- does it look like a science problem (math, physics etc.)
- can I break the prompt down to subtasks - if yes, the workflow will feed each subtask into the model separately, then combine the result etc.
- is the problem easy/hard
- do I have all information I need (do I need to ask the user for further input/clarification)
3. The workflow would run, maybe in multiple iterations on various its levels, maybe trying to fit some "quality checks" for the answer
4. The output is presented to the user (the "hidden" thinking may be optionally viewed by user)
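The hardcoded workflow above could be sketched roughly like this (`ask_model` is a placeholder for whatever inference call is actually used, e.g. an ollama chat request; here it is stubbed out so the control flow is runnable):

```python
# Sketch of a hardcoded chain-of-thought wrapper around a chat model.
def ask_model(prompt: str) -> str:
    # Stub standing in for the real model call.
    return f"<answer to: {prompt}>"

def classify(prompt: str) -> dict:
    # Step 2: ask the model a few hard-coded questions about the prompt.
    return {
        "is_smalltalk": ask_model(f"Is this light conversation? {prompt}"),
        "is_science": ask_model(f"Is this a science problem? {prompt}"),
        "subtasks": ask_model(f"List independent subtasks of: {prompt}"),
    }

def run_workflow(prompt: str, show_thinking: bool = False) -> str:
    analysis = classify(prompt)                               # step 2
    draft = ask_model(f"Answer, given {analysis}: {prompt}")  # step 3
    checked = ask_model(f"Quality-check and fix: {draft}")    # step 3, second pass
    if show_thinking:                                         # step 4 (optional)
        print("thinking:", analysis, draft, sep="\n")
    return checked

print(run_workflow("What is 17 * 24?"))
```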
Anyone having the same feelings as I do about the CoT thing? Looks like even a hard-coded process may give some interesting results. | feature request | low | Major |
2,536,160,348 | next.js | Headers function causing an empty page to be returned when using Optional Catch-all Segments | ### Link to the code that reproduces this issue
https://github.com/hugohammarstrom/next-ppr-headers-repro
### To Reproduce
1. Deploy repo to vercel
2. Go to root path
3. Go to any other path and see that the page works as expected
### Current vs. Expected behavior
When using the headers() function in a page using [Optional Catch-all Segments](https://nextjs.org/docs/pages/building-your-application/routing/dynamic-routes#optional-catch-all-segments) the root returns a completely empty page. In some cases the page flickers with the correct page before returning to an empty page again. This seems to only affect the root page.
This worked previously but as of a few days ago this error started happening.
### Provide environment information
```bash
Node.js v20.17.0
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 20.17.0
npm: 10.8.3
Yarn: 3.6.1
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.159
eslint-config-next: N/A
react: 19.0.0-rc-5dcb0097-20240918
react-dom: 19.0.0-rc-5dcb0097-20240918
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Partial Prerendering (PPR), Runtime
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
I've tried downgrading the canary version to a version that I know it worked on but I'm still seeing this error. My guess is that this is something in the vercel runtime.
I've seen a couple of different error messages (in different canary versions) in the logs.
`Error: invariant: cache entry required but not generated`
`Error: Invariant: postponed state should not be provided when fallback params are provided`
`Couldn't find all resumable slots by key/index during replaying. The tree doesn't match so React will fallback to client rendering.` | bug,Runtime,Partial Prerendering (PPR) | low | Critical |
2,536,236,633 | vscode | Web: `detectFullscreen` false positive on a display without dock and title area | If you run insiders.vscode.dev as a PWA on Linux with multiple monitors and WCO enabled, the window controls are rendered on top of the icons on the right.

The issue is that there is a bad premise in dom.ts.
```ts
if (targetWindow.innerHeight === targetWindow.screen.height) {
// if the height of the window matches the screen height, we can
// safely assume that the browser is fullscreen because no browser
// chrome is taking height away (e.g. like toolbars).
return { mode: DetectedFullscreenMode.BROWSER, guess: false };
}
```
The problem with this is that if you are on a Linux machine with a secondary screen (so no task bar), with the app running as a PWA and with window controls overlay enabled, then a regular, maximised but not fullscreen window will be the same height as the screen.
As such it will falsely report being fullscreen and the window controls will render on top of the icons, since the css includes:
```css
.monaco-workbench.fullscreen .part.titlebar .window-controls-container {
display: none;
background-color: transparent;
}
```
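One possible extra signal (an untested sketch, not a proposed patch) would be to consult the window controls overlay itself: when `navigator.windowControlsOverlay?.visible` reports true, the overlay controls are being drawn, so the window cannot actually be fullscreen even when the height matches. Factored as a pure function:

```typescript
// Sketch: a height match is only treated as fullscreen when the
// window controls overlay is not visible. The third argument would be
// fed from navigator.windowControlsOverlay?.visible in the real code.
function detectFullscreenGuess(
  innerHeight: number,
  screenHeight: number,
  wcoVisible: boolean | undefined
): boolean {
  if (wcoVisible) {
    return false; // overlay controls are drawn, so browser chrome exists
  }
  return innerHeight === screenHeight;
}
```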
There is a secondary issue: window.screen also seems to be faulty on Chrome/Linux, which means it sometimes reports the wrong screen. That can result in the PWA actually rendering correctly, which makes this tricky to debug. | bug,help wanted,good first issue,confirmed,web | low | Critical |
2,536,262,719 | material-ui | Load order of @emotion/styled and @emotion/react is important since 6.1.0 | ### Steps to reproduce
~~Link to live example: (required)~~
Steps:
1. `npm create vite@4.4.5`
2. `npm install` in the freshly created project
3. `npm install @mui/material@6.1.0 @emotion/react@11.13.3 @emotion/styled@11.13.0`
4. Edit `vite.config.ts` to look like below
5. Run `npm run build`
6. Run `npx serve -s dist`
7. Open `localhost:3000` and check the console output to see the error
`vite.config.ts`
```ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [react()],
build: {
minify: false, // to be readable
rollupOptions: {
output: {
manualChunks: {
emotionStyled: ["@emotion/styled"],
emotionReact: ["@emotion/react"],
}
},
},
},
})
```
### Current behavior
When using manual chunks in the Rollup config to reduce individual chunk sizes, the order in which the chunks are built matters since MUI v6.1.0.
We add manual chunking to improve load times, usually by simply placing every dependency in its own chunk. That worked very well until MUI v6.1.0 (reverting back to MUI v6.0.2 works fine).
The issue can be narrowed down to the chunking settings below. When the order of the two entries is swapped, everything works just fine.
Broken:
```ts
manualChunks: {
emotionStyled: ["@emotion/styled"],
emotionReact: ["@emotion/react"],
}
```
Works:
```ts
manualChunks: {
emotionReact: ["@emotion/react"],
emotionStyled: ["@emotion/styled"],
}
```
In Firefox the error message is:
> Uncaught ReferenceError: can't access lexical declaration 'React' before initialization

`React` is imported from the `react` chunk.
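A possible workaround (a sketch, not verified against this particular bug) is the function form of Rollup's `manualChunks`, which lets you place both emotion packages in one chunk so their relative order cannot matter:

```typescript
// Sketch of the function form of Rollup's manualChunks: returning the
// same chunk name for both emotion packages sidesteps ordering entirely.
const manualChunks = (id: string): string | undefined => {
  if (id.includes("@emotion")) return "emotion";
  return undefined; // let Rollup decide for everything else
};

// In vite.config.ts this would go under build.rollupOptions.output.manualChunks
```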
### Expected behavior
Manual chunking should not affect the startup of MUI.
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.6.1
Binaries:
Node: 20.17.0 - ~/.n/bin/node
npm: 10.8.2 - ~/.n/bin/npm
pnpm: 8.11.0 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: 128.0.6613.138
Edge: Not Found
Safari: 17.6
npmPackages:
@emotion/react: ^11.13.3 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/core-downloads-tracker: 6.1.0
@mui/icons-material: ^6.1.0 => 6.1.0
@mui/material: ^6.1.0 => 6.1.0
@mui/private-theming: 6.1.0
@mui/styled-engine: 6.1.0
@mui/system: 6.1.0
@mui/types: 7.2.16
@mui/utils: 6.1.0
@types/react: ^18.2.15 => 18.3.7
react: ^18.2.0 => 18.3.1
react-dom: ^18.2.0 => 18.3.1
typescript: ^5.0.2 => 5.6.2
```
</details>
**Search keywords**: rollup chunks vite React emotion | bug 🐛,external dependency,package: material-ui | low | Critical |
2,536,309,377 | PowerToys | Ctrl WIN V combination for Plain Paste interferes with Win 11 sound output/source switcher | ### Microsoft PowerToys version
0.8
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
Trying to open the sound output config window with Ctrl+Win+V does nothing if PowerToys has plain-text pasting enabled on the same combination
### ✔️ Expected Behavior
A different keyboard combination, like Alt+Win+V
### ❌ Actual Behavior
Conflicting features
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,536,326,700 | PowerToys | feat: Activate region shortcut for FancyZones | ### Description of the new feature / enhancement
Provide an assignable keyboard shortcut to each zone, which will activate the topmost window in the assigned zone when pressed.
### Scenario when this would be used?
This would provide an easy way to switch applications on large workspaces. When you are working across multiple monitors, or even one very large one, you might have 5-10 applications to work with. This becomes less efficient as you have more applications that you are working with. Alternatively, you can use the mouse, but it would be nice to be able to do all this without taking your hand off the keyboard.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,536,394,251 | vscode | [Accessibility] Still cannot kill Chat read-aloud via ESC once after the event is triggered for about 5 secs |
Type: <b>Bug</b>
CC @meganrogge
In Chat, trigger read aloud. You can kill it via ESC only immediately after triggering it. If you let the read-aloud progress for about 5-10 secs and then try to kill it via ESC, it does not stop. Also, if you trigger the read-aloud event again after 5 secs, you will hear two read-aloud events playing simultaneously.
VS Code version: Code - Insiders 1.94.0-insider (8d1bb84a183d8a4eb74c12ec11c0c5080a548547, 2024-09-19T05:04:09.996Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1145G7 @ 2.60GHz (8 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.71GB (6.67GB free)|
|Process Argv|--crash-reporter-id b05b88e5-8894-4031-ae34-fa034ebddea9|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (125)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-openapi|42C|4.28.1
zotenote|A-W|1.0.1
android-dev-ext|ade|1.4.0
aiprm-lang|AIP|0.0.2
Bookmarks|ale|13.5.0
openscad|Ant|1.2.2
spellright|ban|3.0.136
mermaid-markdown-syntax-highlighting|bpr|1.6.0
external-pdf|cha|1.2.0
doxdocgen|csc|1.4.0
vscode-markdownlint|Dav|0.53.0
vscode-eslint|dba|3.0.10
vscode-quick-select|dba|0.2.9
vscode-deno|den|3.40.0
gitlens|eam|14.6.1
EditorConfig|Edi|0.16.4
prettier-vscode|esb|10.1.0
figma-vscode-extension|fig|0.3.5
vscode-firefox-debug|fir|2.9.10
shell-format|fox|7.2.5
vscode-google-translate|fun|1.4.13
codespaces|Git|1.17.3
copilot|Git|1.231.1112
copilot-chat|Git|0.21.2024091902
remotehub|Git|0.64.0
vscode-github-actions|git|0.26.2
vscode-pull-request-github|Git|0.97.2024091904
cloudcode|goo|2.17.0
overleaf-workshop|iam|0.13.2
cslpreview|igo|0.2.2
path-autocomplete|ion|1.25.0
latex-workshop|Jam|10.4.0
lilypond-syntax|jea|0.1.1
scheme|jea|0.2.0
better-cpp-syntax|jef|1.17.2
commitlint|jos|2.6.0
language-julia|jul|1.123.1
google-search|kam|0.0.1
vscode-lua-format|Koi|1.3.8
lilypond-formatter|lhl|0.2.3
lilypond-pdf-preview|lhl|0.2.8
lilypond-snippets|lhl|0.1.1
vslilypond|lhl|1.7.3
language-matlab|Mat|1.2.5
git-graph|mhu|1.30.0
azure-dev|ms-|0.8.3
vscode-azureappservice|ms-|0.25.3
vscode-azurecontainerapps|ms-|0.6.1
vscode-azurefunctions|ms-|1.15.4
vscode-azureresourcegroups|ms-|0.8.3
vscode-azurestaticwebapps|ms-|0.12.2
vscode-azurestorage|ms-|0.16.1
vscode-azurevirtualmachines|ms-|0.6.5
vscode-cosmosdb|ms-|0.23.0
vscode-docker|ms-|1.29.2
vscode-edge-devtools|ms-|2.1.6
black-formatter|ms-|2024.3.12071014
debugpy|ms-|2024.11.2024082901
flake8|ms-|2023.13.12291011
isort|ms-|2023.13.12321012
python|ms-|2024.15.2024091901
vscode-pylance|ms-|2024.9.101
jupyter|ms-|2024.9.2024091901
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.8
vscode-jupyter-slideshow|ms-|0.1.5
remote-containers|ms-|0.386.0
remote-ssh|ms-|0.115.2024091615
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.81.8
vscode-remote-extensionpack|ms-|0.25.0
azure-account|ms-|0.12.0
azure-repos|ms-|0.40.0
cmake-tools|ms-|1.19.51
cpptools|ms-|1.22.3
cpptools-extension-pack|ms-|1.3.0
js-debug-nightly|ms-|2024.9.1817
remote-explorer|ms-|0.5.2024011009
remote-repositories|ms-|0.42.0
remote-server|ms-|1.6.2024011109
vscode-github-issue-notebooks|ms-|0.0.130
vscode-node-azure-pack|ms-|1.2.0
vscode-selfhost-test-provider|ms-|0.3.25
vscode-serial-monitor|ms-|0.13.1
vscode-speech|ms-|0.10.0
vscode-speech-language-pack-en-ca|ms-|0.4.0
vscode-speech-language-pack-en-gb|ms-|0.4.0
vscode-speech-language-pack-ko-kr|ms-|0.4.0
vsliveshare|ms-|1.0.5940
windows-ai-studio|ms-|0.5.2024091809
autodocstring|njp|0.6.1
pandocciter|not|0.10.4
typst-lsp|nva|0.13.0
publisher|pos|1.1.6
shiny|Pos|1.1.0
shinyuieditor|pos|0.5.0
quarto|qua|1.114.0
r-debugger|RDe|0.5.5
java|red|1.34.0
vscode-xml|red|0.27.1
vscode-yaml|red|1.14.0
r|REd|2.8.4
multi-command|ryu|1.6.0
AudioQ|Seh|0.0.2
vscode-deepl|soe|1.1.1
abc-music|sof|0.4.0
lua|sum|3.10.6
latex-utilities|tec|0.4.14
cmake|twx|0.0.17
vscode-terminal-here|Tyr|0.2.4
windows-terminal|Tyr|0.7.0
errorlens|use|3.16.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.2.30
vscode-conventional-commits|viv|1.26.0
vscode-arduino|vsc|0.7.1
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.0
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
markdown-all-in-one|yzh|3.6.2
grammarly|znc|0.25.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
01bff139:31013167
a69g1124:31018687
dvdeprecation:31040973
dwnewjupyter:31046869
impr_priority:31057980
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31119334
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-c:31125598
cf971741:31111988
jh802675:31132134
e80f6927:31120813
autoexpandse:31133494
ei213698:31121563
12bdf347:31141542
notype1:31136707
c9j82188:31138334
showbadge:31139796
f8igb616:31140137
```
</details>
<!-- generated by issue reporter --> | bug,upstream,workbench-voice | low | Critical |
2,536,405,645 | yt-dlp | [ADN] Unable to log in: HTTP Error 403: Forbidden / Requested format is not available. Use --list-formats for a list of available formats | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
France
### Provide a description that is worded well enough to be understood
Hello,
I am living in France and I am facing issues downloading from ADN.
Here is the list of formats available for the video I want to download:
[debug] Command-line config: ['-vU', '-u', 'PRIVATE', '-p', 'PRIVATE', '-F', 'https://animationdigitalnetwork.com/video/647-dragon-quest-the-adventure-of-dai-fly-daibouken/12718-episode-1']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out cp1252 (No VT), error utf-8, screen cp1252 (No VT)
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[ADN] Logging in
[ADN] Extracting URL: https://animationdigitalnetwork.com/video/647-dragon-quest-the-adventure-of-dai-fly-daibouken/12718-episode-1
[ADN] 12718: Downloading player config JSON metadata
[ADN] 12718: Downloading access token
[ADN] 12718: Downloading links JSON metadata
[ADN] 12718: Downloading vf mobile JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vf sd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vf hd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vf fhd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vf auto JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vostf mobile JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vostf sd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vostf hd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vostf fhd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vostf auto JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading additional video metadata
[info] Available formats for 12718:
ID EXT RESOLUTION | FILESIZE TBR PROTO | VCODEC ACODEC MORE INFO
---------------------------------------------------------------------------------------
vostf-416 mp4 640x360 | ~ 71.51MiB 416k m3u8 | avc1.4d001f mp4a.40.2
vf-768 mp4 640x360 | ~ 132.16MiB 769k m3u8 | avc1.4d001f mp4a.40.2 [fr]
vf-696-0 mp4 853x480 | ~ 119.64MiB 696k m3u8 | avc1.640028 mp4a.40.2 [fr]
vf-696-1 mp4 853x480 | ~ 119.64MiB 696k m3u8 | avc1.640028 mp4a.40.2 [fr]
vostf-696-0 mp4 853x480 | ~ 119.64MiB 696k m3u8 | avc1.640028 mp4a.40.2
vostf-696-1 mp4 853x480 | ~ 119.64MiB 696k m3u8 | avc1.640028 mp4a.40.2
vf-3129-0 mp4 1280x720 | ~ 537.92MiB 3129k m3u8 | avc1.640028 mp4a.40.2 [fr]
vf-3129-1 mp4 1280x720 | ~ 537.92MiB 3129k m3u8 | avc1.640028 mp4a.40.2 [fr]
vostf-3129-0 mp4 1280x720 | ~ 537.92MiB 3129k m3u8 | avc1.640028 mp4a.40.2
vostf-3129-1 mp4 1280x720 | ~ 537.92MiB 3129k m3u8 | avc1.640028 mp4a.40.2
vf-5829-0 mp4 1920x1080 | ~1002.04MiB 5829k m3u8 | avc1.640029 mp4a.40.2 [fr]
vf-5829-1 mp4 1920x1080 | ~1002.04MiB 5829k m3u8 | avc1.640029 mp4a.40.2 [fr]
vostf-5829-0 mp4 1920x1080 | ~1002.04MiB 5829k m3u8 | avc1.640029 mp4a.40.2
vostf-5829-1 mp4 1920x1080 | ~1002.04MiB 5829k m3u8 | avc1.640029 mp4a.40.2
Here is what happens when I try to download:
It seems intermittent to me: if I try several times, it fails 9 times and then works on the 10th attempt, and I can download.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-u', 'PRIVATE', '-p', 'PRIVATE', '-f', 'vostf-5829-1', '--video-multistreams', '--audio-multistreams', '--sub-langs', 'all', '--embed-subs', '--embed-chapters', '--embed-metadata', '--remux-video', 'mkv', '--ffmpeg-location', 'E:\\yt-dlp\\ffmpeg-master-latest-win64-gpl\\bin\\ffmpeg.exe', '-o', 'E:\\ADN\\%(title)s vostf.%(ext)s', 'https://animationdigitalnetwork.com/video/647-dragon-quest-the-adventure-of-dai-fly-daibouken/12718-episode-1']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out cp1252 (No VT), error utf-8, screen cp1252 (No VT)
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-117043-g8707c8660d-20240915 (setts), ffprobe N-117043-g8707c8660d-20240915
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
WARNING: [ADN] Unable to log in: HTTP Error 403: Forbidden
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
ERROR: [ADN] 12718: Requested format is not available. Use --list-formats for a list of available formats
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1782, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1841, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2977, in process_video_result
yt_dlp.utils.ExtractorError: [ADN] 12718: Requested format is not available. Use --list-formats for a list of available formats
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[ADN] Logging in
[ADN] Extracting URL: https://animationdigitalnetwork.com/video/647-dragon-quest-the-adventure-of-dai-fly-daibouken/12718-episode-1
[ADN] 12718: Downloading player config JSON metadata
[ADN] 12718: Downloading access token
[ADN] 12718: Downloading links JSON metadata
[ADN] 12718: Downloading vostf mobile JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading vostf sd JSON metadata
[ADN] 12718: Downloading m3u8 information
[ADN] 12718: Downloading additional video metadata
[ADN] 12718: Downloading subtitles location
[ADN] 12718: Downloading subtitles data
[info] 12718: Downloading subtitles: fr
```
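Since the failure appears to be intermittent (working roughly one time in ten), a retry wrapper around the same command is a practical workaround until the site bug is fixed. This is a generic POSIX-sh sketch; `retry` is a helper name introduced here, not a yt-dlp feature:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, stopping at the first success.
retry() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i failed, retrying..." >&2
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example (keep your own options and URL from the command above):
# retry 10 yt-dlp -u PRIVATE -p PRIVATE -f vostf-5829-1 [other options...] '<video URL>'
```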
| account-needed,geo-blocked,site-bug,triage,can-share-account | low | Critical |
2,536,448,649 | ollama | Moshi /moshiko /moshika speech text foundation LLM by KyutAI | Hi,
KyutAI finally released their Moshi model. It's a "speech-text foundation model", similar to the promised GPT-4o, but it outputs text at the same time as speech.
Here's their link: https://github.com/kyutai-labs/moshi
Here's the hf link: https://huggingface.co/collections/kyutai/moshi-v01-release-66eaeaf3302bef6bd9ad7acd
It exists in several versions, and some are already quantized. | model request | low | Minor |
2,536,551,306 | rust | cargo doc --examples doesn't pick up default features or report failure | I think this is all working correctly, but I'd like to record the issue in case it trips up anyone else.
If you have a project with default features, and an example requires one of those default features to be enabled, then `cargo doc --examples` will give an overly general warning, fail to generate example documentation, and possibly fail to complete other generation tasks, but the invocation will report success.
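Concretely, the shape of manifest that triggers this looks like the following (the package and feature names match the hypothetical example invocation below):

```toml
[features]
default = ["defaultfeature"]
defaultfeature = []

[[example]]
name = "hello-world"
required-features = ["defaultfeature"]
```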
E.g., suppose the correct invocation to run an example is `cargo run --example hello-world --features=defaultfeature`. If you omit the `--features` flag, you'd see an error like this:
```bash
error: target `hello-world` in package `mypackage` requires the features: `defaultfeature`
Consider enabling them by passing, e.g., `--features="defaultfeature"`
```
This tells you exactly what to do, and all is good. But if you try the related `cargo doc` command, `cargo doc --examples`, you get only this warning:
```bash
warning: target filter `examples` specified, but no targets matched; this is a no-op
```
This also makes sense; it's a batch command that doesn't have a `cargo run` equivalent, so it reasonably suppresses the error messages that caused the build of one example bin in the batch to fail.
But if a developer isn't really paying attention, a document-the-world command like `cargo doc --no-deps --examples --workspace` will appear to succeed, but generate only some of the expected documentation (in my case, it built my subcrate documentation but not the top-level crate, which was just enough for my build script to succeed without my noticing that it had actually failed to complete).
Apologies for this long-winded report; there might be a more concise description of undesirable behavior (perhaps "cargo doc --examples --workspace fails to complete if required features aren't specified"?). I have solved the issue for my own purposes and wanted to spew the whole story here in case it provides more search terms for someone else to find it. | T-rustdoc,A-rustdoc-scrape-examples | low | Critical |
2,536,561,251 | go | crypto: obtain a FIPS 140-3 validation | ## Background
FIPS 140 is a set of U.S. Government requirements for cryptographic modules. A number of companies must comply with them, for example as part of a broader FedRAMP compliance posture. (If that's not you, you can ignore this. Run!)
Current solutions for Go program compliance are based on cgo, and replace some of the crypto packages internals with FIPS 140 validated non-memory safe modules. These solutions come with varying levels of support (for example the Go+BoringCrypto solution is not officially supported and its compliance profile is left to the user to assess), introduce memory unsafe code, sometimes delay Go version updates, can have performance issues, affect the developer experience (for example inhibiting cross-compilation), and their compliance profile is debatable. As Go is adopted more and more in regulated settings, this is going to affect Go's adoption and developer experience.
## The Go FIPS module
We plan to pursue a FIPS 140-3 validation for the NIST approved components of the Go standard library. The resulting module will be distributed as part of the standard library under the same license as the rest of the Go project, and will be transparently used by the relevant standard library packages with no API changes (wherever possible).
Users will be able to select the module to use at build time, for example choosing between a certified version, a version in the In Process list, or the latest unvalidated update. Moreover, we'll provide some mechanism for applications to disable the use of non-approved algorithms and modes at runtime.
## Further planning details
The goal is shipping the module as part of Go 1.24, assuming our validation strategy is successful. This is the first time as far as we know that a Go library (or any non-Java memory safe library) is validated.
Unless completely unavoidable, we'll not compromise on security to achieve compliance. For example, we will inject random bytes from the kernel as additional input per SP 800-90Ar1, Section 8.7.2, every time we use the mandatory DRBG, and we'll use a dedicated DRBG for ECDSA to implement a "hedged" nonce generation equivalent to what crypto/ecdsa does now (safer than both NIST options of fully random and deterministic). Also, we'll try to add minimal complexity to regular non-FIPS builds.
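The additional-input behavior described above can be illustrated with a toy sketch. This is my own illustration of the idea, not the module's code; the real DRBG follows SP 800-90Ar1:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// toyMixAdditionalInput is a toy illustration of the idea in SP 800-90Ar1,
// Section 8.7.2: fresh kernel randomness is mixed into the generator state
// on every generate call, so a state compromise does not expose future
// output. This is NOT the validated module's code.
func toyMixAdditionalInput(state []byte) []byte {
	extra := make([]byte, 32)
	if _, err := rand.Read(extra); err != nil { // kernel-provided additional input
		panic(err)
	}
	h := sha256.New()
	h.Write(state)
	h.Write(extra)
	return h.Sum(nil)
}

func main() {
	s := toyMixAdditionalInput([]byte("initial state"))
	fmt.Printf("mixed state: %d bytes\n", len(s))
}
```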
NIST approved packages will be prioritized in being moved to the standard library (#65269) to get validated along the rest.
We'll test at least on Linux on amd64 and arm64. Further details will be available later in the process. (If you have specific requirements, please inquire about becoming a sponsor, see below.)
We aim to deprecate and hopefully remove Go+BoringCrypto once the module lands.
After the initial validation, we plan to revalidate at least every year, and every time a CVE affects the module with no standard library-side mitigation.
All work will be done on Gerrit, tracked in the issue tracker, and the testing harnesses will be committed in the tree.
This is an umbrella issue to track related issues and CLs, and to provide updates to the community. We'll file separate proposals for the exact build-time settings, for the FIPS-only policy mechanism, for any new APIs, and for any behavior changes.
We have started working with a CMVP testing laboratory, and contracted @cpu to help. **This is an industry-sponsored effort that I (@FiloSottile) am leading as an independent maintainer, not a Google or Go team project** (although it is coordinated with the Go team and @golang/security). We're funded by a few major stakeholders, and we're available to accept sponsorships and offer commercial support (reach out to filippo@golang.org if interested). | umbrella | high | Major |
2,536,571,715 | kubernetes | Versioned feature gate lint error need to provide clear directions on what to do | When migrating `RetryGenerateName` from unversioned to versioned, I removed the feature gate from `pkg/features/kube_features.go` and added it to `/pkg/features/kube_features.go`.
I then ran:
```
hack/update-featuregates.sh
```
And got the following error:
```
found 40 features in FeatureSpecMap var defaultKubernetesFeatureGates in file: /home/jpbetz/projects/kubernetes/pkg/features/kube_features.go
found 2 features in FeatureSpecMap var defaultKubernetesFeatureGates in file: /home/jpbetz/projects/kubernetes/staging/src/k8s.io/apiextensions-apiserver/pkg/features/kube_features.go
found 31 features in FeatureSpecMap var defaultKubernetesFeatureGates in file: /home/jpbetz/projects/kubernetes/staging/src/k8s.io/apiserver/pkg/features/kube_features.go
found 3 features in FeatureSpecMap of func featureGates in file: /home/jpbetz/projects/kubernetes/staging/src/k8s.io/component-base/logs/api/v1/kube_features.go
found 1 features in FeatureSpecMap of func featureGates in file: /home/jpbetz/projects/kubernetes/staging/src/k8s.io/component-base/metrics/features/kube_features.go
found 1 features in FeatureSpecMap var cloudPublicFeatureGates in file: /home/jpbetz/projects/kubernetes/staging/src/k8s.io/controller-manager/pkg/features/kube_features.go
panic: feature RetryGenerateName changed with diff: cmd.featureInfo{
Name: "RetryGenerateName",
FullName: "",
VersionedSpecs: []cmd.featureSpec{
{
Default: true,
- LockToDefault: false,
+ LockToDefault: true,
- PreRelease: "Beta",
+ PreRelease: "GA",
Version: "",
},
},
}
goroutine 1 [running]:
k8s.io/kubernetes/test/featuregates_linter/cmd.updateFeatureListFunc(0xc0001b6d00?, {0x695a38?, 0x4?, 0x6959e8?})
/home/jpbetz/projects/kubernetes/test/featuregates_linter/cmd/feature_gates.go:108 +0x91
github.com/spf13/cobra.(*Command).execute(0xc0001c4908, {0x8c7e00, 0x0, 0x0})
/home/jpbetz/projects/kubernetes/vendor/github.com/spf13/cobra/command.go:989 +0xa91
github.com/spf13/cobra.(*Command).ExecuteC(0x8a3cc0)
/home/jpbetz/projects/kubernetes/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/home/jpbetz/projects/kubernetes/vendor/github.com/spf13/cobra/command.go:1041
k8s.io/kubernetes/test/featuregates_linter/cmd.Execute()
/home/jpbetz/projects/kubernetes/test/featuregates_linter/cmd/root.go:32 +0x1a
main.main()
/home/jpbetz/projects/kubernetes/test/featuregates_linter/main.go:22 +0xf
exit status 2
```
The error is caused by the feature still being present in `staging/src/k8s.io/apiserver/pkg/features/kube_features.go`, but nothing about the linter error states this, nor does it explain _why_ the panic was raised. Note that the linter has a comment in the code explaining why the error is returned, but that information is not surfaced to end users in the error:
https://github.com/kubernetes/kubernetes/blob/ae945462fb2d12a4e38d074de8fe77267460624b/test/featuregates_linter/cmd/feature_gates.go#L152-L159 | sig/api-machinery,triage/accepted | low | Critical |
2,536,642,126 | vscode | Settings : `editor.cursorSurroundingLines` misleading default |
Type: <b>Bug</b>
Settings: `editor.cursorSurroundingLines` displays the default as 0 while it is in fact 5. Any input between 0 and 4 (inclusive) seems to be ignored anyway and processed as 5.
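For reference, the setting as one would write it in `settings.json`; per the report, any value from 0 to 4 here behaves as if 5 were set:

```json
{
  "editor.cursorSurroundingLines": 3
}
```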
VS Code version: Code 1.93.0 (4849ca9bdf9666755eb463db297b69e5385090e3, 2024-09-04T13:02:38.431Z)
OS version: Linux x64 6.10.6-200.fc40.x86_64
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 5800H with Radeon Graphics (16 x 4441)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|2, 2, 1|
|Memory (System)|13.49GB (6.67GB free)|
|Process Argv|--crash-reporter-id 65116bd1-1c78-4041-a566-c113afd1dba7|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|gnome|
|XDG_CURRENT_DESKTOP|GNOME|
|XDG_SESSION_DESKTOP|gnome|
|XDG_SESSION_TYPE|wayland|
</details><details><summary>Extensions (27)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-neovim|asv|1.18.12
github-markdown-preview|bie|0.3.0
markdown-checkbox|bie|0.4.0
markdown-emoji|bie|0.3.0
markdown-footnotes|bie|0.1.1
markdown-mermaid|bie|1.25.0
markdown-preview-github-styles|bie|2.1.0
markdown-yaml-preamble|bie|0.1.0
gitlens|eam|2024.9.1905
codespaces|Git|1.17.3
copilot|Git|1.231.0
copilot-chat|Git|0.20.2
remotehub|Git|0.64.0
vscode-github-actions|git|0.26.5
vscode-pull-request-github|Git|0.97.2024091904
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.1
remote-containers|ms-|0.386.0
remote-ssh|ms-|0.115.2024091615
remote-ssh-edit|ms-|0.86.0
vscode-remote-extensionpack|ms-|0.25.0
azure-repos|ms-|0.40.0
remote-explorer|ms-|0.5.2024081309
remote-repositories|ms-|0.42.0
remote-server|ms-|1.6.2024081909
pdf|tom|1.2.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
da93g388:31013173
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
fje88620:31121564
```
</details>
<!-- generated by issue reporter --> | bug,editor-core | low | Critical |
2,536,698,256 | rust | `#[diagnostic::on_unimplemented]` fails to trigger when certain impls are present | Consider the following code:
```rust
#[diagnostic::on_unimplemented(note = "A custom Foo message!")]
trait Foo {}
#[diagnostic::on_unimplemented(note = "A custom Bar message!")]
trait Bar {}
impl<'a, T> Bar for &'a T {}
fn takes_foo<T: Foo>(_t: T) {}
fn takes_bar<T: Bar>(_t: T) {}
fn main() {
takes_foo(());
takes_bar(());
}
```
I would expect the `#[diagnostic::on_unimplemented]` note to be present in the error output for both calls in `main`. However, what I get instead is:
```text
error[E0277]: the trait bound `(): Foo` is not satisfied
--> src/main.rs:13:15
|
13 | takes_foo(());
| --------- ^^ the trait `Foo` is not implemented for `()`
| |
| required by a bound introduced by this call
|
= note: A custom Foo message!
help: this trait has no implementations, consider adding one
--> src/main.rs:2:1
|
2 | trait Foo {}
| ^^^^^^^^^
note: required by a bound in `takes_foo`
--> src/main.rs:9:17
|
9 | fn takes_foo<T: Foo>(_t: T) {}
| ^^^ required by this bound in `takes_foo`
error[E0277]: the trait bound `(): Bar` is not satisfied
--> src/main.rs:14:15
|
14 | takes_bar(());
| --------- ^^ the trait `Bar` is not implemented for `()`
| |
| required by a bound introduced by this call
|
note: required by a bound in `takes_bar`
--> src/main.rs:10:17
|
10 | fn takes_bar<T: Bar>(_t: T) {}
| ^^^ required by this bound in `takes_bar`
help: consider borrowing here
|
14 | takes_bar(&());
| +
```
While the custom note is emitted for `Foo`, it's not for `Bar`. The culprit seems to be the impl of `Bar` for `&T`, which leads rustc down the wrong path and has it suggest only that borrowing is an option. However, that's [not always the right suggestion](https://github.com/google/zerocopy/issues/1296).
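For reference, the borrow rustc suggests does compile through the blanket impl, which is presumably why the solver commits to that path before consulting `on_unimplemented`. A runnable sketch of just the `Bar` half:

```rust
// The blanket impl means the *borrowed* call compiles, which is why rustc
// reaches for the "consider borrowing" suggestion instead of the custom note.
trait Bar {}
impl<'a, T> Bar for &'a T {}

fn takes_bar<T: Bar>(_t: T) -> &'static str {
    "called via the blanket impl"
}

fn main() {
    // `takes_bar(())` would fail to compile; the borrowed form goes through:
    println!("{}", takes_bar(&()));
}
```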
cc https://github.com/google/zerocopy/issues/1296, https://github.com/google/zerocopy/pull/1682 | A-diagnostics,T-compiler,A-suggestion-diagnostics,D-diagnostic-infra | low | Critical |
2,536,796,932 | rust | Tracking Issue for assorted compiletest maintenance | This is a tracking issue for a series of `compiletest` cleanups. This tracking issue is on-going and will be edited as suitable to reflect the next steps. Tasks should be broken up into small actionable items.
### Motivation
Currently `compiletest` is a bit of a mess causing it to be really hard to maintain. Let's try to do some housekeeping in `compiletest` to make it easier to maintain.
### Phase 1: `compiletest/src/runtest.rs` cleanups
- [x] Step 1: Break up `compiletest/src/runtest.rs` into smaller helper modules. (https://github.com/rust-lang/rust/pull/130566)
- Originally https://github.com/rust-lang/rust/issues/89475
- [x] Step 2: Investigate and rework how `valgrind` test suites are handled, namely what happens if `valgrind` is not available.
- Deleted the `run-pass-valgrind` test suite and valgrind support in #131351.
- https://github.com/rust-lang/rustc-dev-guide/pull/2091
- [ ] Step 3: Relocate functions on `TestCx` that does not need to be on `TestCx` (especially ones that don't depend on `TestCx` itself) to suitable locations.
- [ ] Step 4: Reorganize methods on `TestCx`:
- Step 4.1: Privatize methods only used by a specific test suite/mode to their specific helper modules.
- Step 4.2: Reorder/regroup methods on the core `TestCx` in `runtest.rs` to make it easier to navigate.
- [ ] Step 5: Improve documentation around `runtest.rs`:
- [x] Step 5.1: Make sure tool docs are registered for compiletest
- https://github.com/rust-lang/rust/issues/130564
- https://github.com/rust-lang/rust/pull/130567
- [ ] Step 5.2: Document util and helper methods on `TestCx`.
- [ ] Step 5.3: Document individual test suites/modes.
- [ ] Step 5.4: Document top-level `TestCx` and types/concepts in `runtest.rs`.
- [ ] Step 5.5: Update `rustc-dev-guide` docs about the individual test modes/suites and about test running.
- [ ] Step 5.6: Add an example in `rustc-dev-guide` about how to add a new test suite/mode.
- [ ] Step 6: Review implementation of each test suite/mode.
### Phase 2: Rework compiletest error handling and error reporting
- Step 1: Investigate how `compiletest` currently handles errors and reports them.
- Step 2: Come up with a design to make `compiletest` error reporting more cohesive and more helpful for users.
- TODO
### Phase 3: Rework directive handling
- [ ] Step 1: Survey existing directive handling related bugs.
- [x] Step 1.1: Creating a tracking issue: https://github.com/rust-lang/rust/issues/131425.
- [ ] Step 1.2: Write a document to describe current limitations/problems with how directives are handled.
- [ ] Step 2: Redesign how directives are parsed and handled.
- [ ] Step 2.1: Write a document to describe the considerations for directive handling.
- [ ] Step 2.2: Draft a prototype toy impl for the reworked handling would look like.
- [ ] Step 2.3: Draft a MCP to propose implementing the rework.
- @tgross35 has some experimentation in https://github.com/rust-lang/rust/pull/128070, this should get an MCP to receive more feedback from other compiler team members (TODO(@jieyouxu): write-up an MCP)
- ~~https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Test.20header.20commands.20could.20just.20be.20toml.3F proposes we can use structured TOML directives in doc comments.~~ non-starter as it makes writing tests quite annoying.
- [ ] Step 3: Implement more robust directive handling.
- [ ] Step 3.1: Implement the more robust design but not merge yet, as we need to...
- [ ] Step 3.2: ... find out which tests contain invalid directives w.r.t. new directive handling and fix them.
- [ ] Step 3.3: Investigate and improve testing for directive handling.
- [ ] Step 3.4: Try to land the improved directive handling.
- [ ] Step 4: Improve directive documentation in source and in rustc-dev-guide:
- [x] Step 4.1: Unify terminology: stick with "directive", not also "header" or "comment" etc.
- https://github.com/rust-lang/rustc-dev-guide/pull/2089
- [ ] Step 4.2: Document individual directives: syntax, applicable to which test suites, behavior
- https://github.com/rust-lang/rustc-dev-guide/pull/2089
- This can be improved once directive handling is more robust
- TODO
There are more phases intended, but they are to be planned.
### Discussions
Rubber-ducking thread: https://rust-lang.zulipchat.com/#narrow/stream/326414-t-infra.2Fbootstrap/topic/.28Rubberducking.29.20compiletest.20test.20discovery.20.2F.20directives | C-cleanup,T-compiler,T-bootstrap,C-tracking-issue,A-compiletest | low | Critical |
2,536,865,669 | godot | C# Exported Array<Node> is null | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 Ti (NVIDIA; 31.0.15.3623) - AMD Ryzen 7 2700X Eight-Core Processor (16 Threads)
### Issue description
I have a larger project that uses C# for all the scripts. Everything works fine when I start the game from the editor. When I export it, I get several errors, which I traced back to an exported variable being null instead of being set to the value stored in the scene.
This applies to exported variables of type `Godot.Collections.Array<Node>`, where `Node` can be `Node` or any other script class that inherits from `Node`.
If I replace the `Array<Node>` with `Node[]`, the export works fine.
### Steps to reproduce
Unfortunately, I was not able to create an MRP, as the issue did not persist in a minimal project. I therefore do not know how to reproduce it, but I can share the code with one or two people who want to investigate the issue.
### Minimal reproduction project (MRP)
I was not able to create an MRP and do not want to publicly share my project, but I can share the code with one or two people who want to investigate the issue. | needs testing,topic:dotnet | low | Critical |
2,536,873,578 | flutter | [GoRouter] Push() vs Go() different behavior when there is ModalRoute | ### Steps to reproduce
Fully reproducible example [https://github.com/petrnymsa/GoRouter-pop-vs-go](https://github.com/petrnymsa/GoRouter-pop-vs-go)
We have Routes: Home (/home) and Profiles (/profiles), with initial route /profiles.
on Home route I have RouteAware observer.
- From Profiles, we always navigate to /home with .go()
- From home, I want to navigate to the profile with push(), allowing the user to either switch profiles or go back home.
When the user switches profiles, I would expect either didPush() or didPopNext() to be triggered.
The reproducible example covers several scenarios:
1. Click on any profile - a confirm dialog appears (this simulates a loading dialog before the user is logged in) - confirm - you are redirected to home. Notice **didPush** got called.
2. Click on button PUSH profiles
3. Return back - notice **didPopNext()** got called.
4. Click on button PUSH profiles again
5. Click on some profile - confirm dialog - you are on home. Notice that neither **didPush** nor **didPopNext** gets called.
Different flows with GO to Profiles.
If the same process is repeated but with GO profiles, **didPush** is always called.
**REMOVE dialog**
If `showDialog` is commented out, the behavior changes: **didPopNext** is always called, even in the PUSH profiles scenario.
**ADD Future.delayed**
If you add `Future.delayed()` after the dialog is closed and before `context.go` is called, **didPopNext** is again called even in the PUSH profiles scenario.
### Expected results
When repeating the same process in step 5, I would expect at least **didPopNext** to be called.
### Actual results
Neither method is called.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() {
runApp(const MyApp());
}
final observer = RouteObserver<ModalRoute>();
final router = GoRouter(initialLocation: '/profiles', observers: [
observer
], routes: [
GoRoute(path: '/profiles', builder: (context, state) => const ProfilesPage()),
GoRoute(path: '/home', builder: (context, state) => const HomePage(), routes: [
GoRoute(path: 'messages', builder: (context, state) => const UnderHomePage()),
]),
]);
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: router,
);
}
}
class HomePage extends StatefulWidget {
const HomePage({super.key});
@override
State<HomePage> createState() => _HomePageState();
}
class ProfilesPage extends StatelessWidget {
const ProfilesPage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Profiles'),
),
body: ListView(
children: [
...List.generate(
5,
(i) => ListTile(
onTap: () async {
// COMMENT this showDialog and didPopNext on Home is always triggered.
await showDialog(
context: context,
builder: (context) => AlertDialog(
content: Text('Profile $i'),
actions: [
TextButton(
onPressed: () => Navigator.of(context).pop(),
child: const Text('Close'),
),
],
));
// IF added delayed with dialog together, didPopNext() is triggered again
// await Future.delayed(const Duration(seconds: 1));
if (context.mounted) {
context.go('/home');
}
},
title: Text('Profile $i'),
trailing: const Icon(Icons.chevron_right),
),
),
],
),
);
}
}
class _HomePageState extends State<HomePage> with RouteAware {
@override
void dispose() {
observer.unsubscribe(this);
super.dispose();
}
@override
void didChangeDependencies() {
observer.subscribe(this, ModalRoute.of(context)!);
super.didChangeDependencies();
}
@override
void didPush() {
print('HOME: Did push');
super.didPush();
}
@override
void didPopNext() {
print('HOME: didPopNext');
super.didPopNext();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: [
// In this case I expect only didPopNext() to be called
ElevatedButton(
onPressed: () {
context.push('/profiles');
},
child: const Text('PUSH profiles'),
),
// In this case I expect only didPush() to be called
ElevatedButton(
onPressed: () {
context.go('/profiles');
},
child: const Text('GO profiles'),
),
// In this case I expec to didPopNext() to be called
ElevatedButton(
onPressed: () {
context.push('/home/messages');
},
child: const Text('PUSH messages'),
),
// In this case I expec to didPopNext() to be called
ElevatedButton(
onPressed: () {
context.go('/home/messages');
},
child: const Text('GO messages'),
),
],
),
),
);
}
}
class UnderHomePage extends StatelessWidget {
const UnderHomePage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(),
body: const Text('Messages'),
);
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.22.2, on Microsoft Windows [Version 10.0.22631.4169], locale en-US)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.3)
[√] Android Studio (version 2021.3)
[√] Android Studio (version 2022.3)
[√] VS Code (version 1.93.1)
[√] Connected device (4 available)
[√] Network resources
• No issues found!
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.26 | low | Major |
2,536,905,859 | electron | [Feature Request]: `setAutoResize` support for View/WebContentsView | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
WebContentsView does not currently support the ability to configure that it should automatically resize with its parent window. This was available on BrowserViews via `setAutoResize`.
### Proposed Solution
Expose a `setAutoResize` instance method on View/WebContentsView that automatically resizes the View when the dimensions of the parent window changes.
### Alternatives Considered
Currently using an event listener on the parent window's resize event to manually resize the WebContentsView.
```js
this.browserWindow.on("resize", () => {
const bounds = this.browserWindow.getBounds()
this.webContentsView.setBounds({
x: 0,
y: 0,
width: bounds.width,
height: bounds.height,
})
})
```
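Until such an API exists, the resize handler above can be factored around a pure helper, which also makes the sizing logic testable. This is a sketch; `fitToParent` is a name introduced here, not an Electron API:

```javascript
// Pure helper: compute bounds that fill the parent window's content area.
// `fitToParent` is a hypothetical name, not part of Electron.
function fitToParent(parentBounds) {
  return { x: 0, y: 0, width: parentBounds.width, height: parentBounds.height };
}

// Hypothetical wiring, mirroring the workaround above:
// browserWindow.on("resize", () => {
//   webContentsView.setBounds(fitToParent(browserWindow.getBounds()));
// });

console.log(fitToParent({ x: 40, y: 60, width: 800, height: 600 }));
```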
### Additional Information
_No response_ | enhancement :sparkles:,component/WebContentsView | low | Major |
2,536,917,106 | go | proposal: x/tools: tag and delete refactor/rename, refactor/importgraph, go/buildutil, cmd/gomvpkg | While implementing this proposal:
- https://github.com/golang/go/issues/69360
I immediately realized that most of the logic is not in the cmd package but in the refactor/rename package, which has never worked with Go modules. Similarly, refactor/importgraph and go/buildutil have the same limitation. And cmd/gomvpkg is in a similar state to the (now deleted) cmd/gorename.
I propose to tag and delete all of them following a similar process.
(FWIW, the only seemingly valuable part of go/buildutil is TagsFlag, which doesn't actually work in conjunction with go/packages; one must use the syntax `gopackages -buildflag=-tags=... patterns...`.) | Proposal,Tools | low | Major |
2,536,972,849 | godot | Godot crashes when node not found, and attempting to connect to a non existant signal | ### Tested versions
v4.3.stable.official.77dcf97d8
### System information
Windows 10 - Godot 4.3 stable - Vulkan 1.3.280 - Forward+
### Issue description
My 2D project is experiencing a continuous crash on open, believed to be due to a corrupted scene. The crash occurs immediately upon opening the project.
### Steps to reproduce
1. Created custom resource to store `ProjectileData`
2. Created a custom `Projectile` which extended `CharacterBody2D`, which had `@export var stats: Resource` which was the `ProjectileData` above
3. Created a `PlayerBeam` which extended `Projectile`
4. Attempted to add a `Timer` node to the base `Projectile`, but encountered issues starting the `Timer`
5. Removed the `Timer` node from `Projectile` and its connected signal functions
6. Added the `Timer` node to `PlayerBeam` instead
7. Attempted to create a new inherited scene from player_beam.tscn (`PlayerBeam`), which resulted in a crash
Stack trace found in console:
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.3.280 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3080
<Resource#-9223370267546291730>
ERROR: Cannot get path of node as it is not in a scene tree.
at: (scene/main/node.cpp:2257)
ERROR: Condition "!is_inside_tree()" is true. Returning: false
at: can_process (scene/main/node.cpp:835)
ERROR: Nonexistent signal: editor_description_changed.
at: (core/object/object.cpp:1441)
ERROR: In Object of type 'Object': Attempt to connect nonexistent signal 'editor_description_changed' to callable 'Scen.
at: (core/object/object.cpp:1390)
ERROR: Node not found: "" (relative to "/root/@EditorNode@16886/@Panel@13/@VBoxContainer@14/DockHSplitLeftL/DockHSplitL.
at: (scene/main/node.cpp:1792)
ERROR: Cannot get path of node as it is not in a scene tree.
at: (scene/main/node.cpp:2257)
ERROR: Condition "!is_inside_tree()" is true. Returning: false
at: can_process (scene/main/node.cpp:835)
ERROR: Nonexistent signal: editor_description_changed.
at: (core/object/object.cpp:1441)
ERROR: In Object of type 'Object': Attempt to connect nonexistent signal 'editor_description_changed' to callable 'Scen.
at: (core/object/object.cpp:1390)
ERROR: Cannot get path of node as it is not in a scene tree.
at: (scene/main/node.cpp:2257)
ERROR: Condition "!is_inside_tree()" is true. Returning: false
at: can_process (scene/main/node.cpp:835)
ERROR: Nonexistent signal: editor_description_changed.
at: (core/object/object.cpp:1441)
ERROR: In Object of type 'Object': Attempt to connect nonexistent signal 'editor_description_changed' to callable 'Scen.
at: (core/object/object.cpp:1390)
ERROR: Node not found: "" (relative to "/root/@EditorNode@16886/@Panel@13/@VBoxContainer@14/DockHSplitLeftL/DockHSplitL.
at: (scene/main/node.cpp:1792)
================================================================
CrashHandlerException: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] error(-1): no debug info in PE/COFF executable
[2] error(-1): no debug info in PE/COFF executable
[3] error(-1): no debug info in PE/COFF executable
[4] error(-1): no debug info in PE/COFF executable
[5] error(-1): no debug info in PE/COFF executable
[6] error(-1): no debug info in PE/COFF executable
[7] error(-1): no debug info in PE/COFF executable
[8] error(-1): no debug info in PE/COFF executable
[9] error(-1): no debug info in PE/COFF executable
[10] error(-1): no debug info in PE/COFF executable
[11] error(-1): no debug info in PE/COFF executable
[12] error(-1): no debug info in PE/COFF executable
[13] error(-1): no debug info in PE/COFF executable
[14] error(-1): no debug info in PE/COFF executable
[15] error(-1): no debug info in PE/COFF executable
[16] error(-1): no debug info in PE/COFF executable
-- END OF BACKTRACE --
================================================================
```
### Minimal reproduction project (MRP)
1. Open the project.
2. Project should open normally
3. If you double click `player_beam.tscn` in the godot editor FileSystem, the project will promptly crash
4. Subsequent attempts to open the project will immediately crash
[mrp_nonexistent_signal_after_create_inherited.zip](https://github.com/user-attachments/files/17063478/mrp_nonexistent_signal_after_create_inherited.zip)
| topic:editor,needs testing,crash | low | Critical |
2,537,021,576 | TypeScript | Interface that extends another no longer constrains types like the original | ### 🔎 Search Terms
interface
interface constrain
extend interface
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about interfaces.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/PQKhAIGUFEBUFUAK4C8b0c1z4TAFABmArgHYDGALgJYD2p4lAngA4CmAPLAHwAUAlAC5wscAG9wAJzaVikhqWIAbJeACGAZ3BkA1qVoB3BppEBucAF98zduGgA3NqUoBZNS1TgASm3K1JACYcGpSS1KQA5gA0do7O3KbWrGzgADLUIU5sklwOTpTgbAAelE4BWnnxnry8bPaUwrCVlPyo3OD2tNQBrQA+4uAAFmqkAUpszby0AEYAVr4NIs1CHV0Blon4xSz+BeGlkoRq5CmwyQHN0AC21JQHuXGu7oUlZRWPbiztYvjgf+oBC6PdKZUjZB75M62YqlUZaHRsJi0QhLD7uPi-f5YmxsRrNKFsKKYrF-JQZWHZYQgik5JpolgAbTpkOSAF1uESSf9aCwaPQNAB+YQAQUBzWpWUkAHleXRSFp+tNaLRxiNOf8Vp1uoksdIrrRHOLyZKIc4CS9YeVwAikSjmc5PhiuX8cXjHgT1VyyaDKWljWDac1Pkz8WyOcSSTy+fKhbF8hKAzLowrwEqVWw1RHNWtElZQBBoAA5AAiUDgSFQ2Cr1ZQuAI+HzvwggFByKAjW5MRhsEJNgg4rjnS43O7giSEZXCZqWbh8VoaNQ0DSEajdkSDx7XW73McTuPOafmYDAcDN-AN48AAUoGgAtNsFnfJJJ-El2AP2ED8puRzkJNM1JIk6PNOs7gPOi7LquBKfs437buA460EB+QHuAR4nmeja4Ce4DCmSJjIowgwpP41AROEaiqDivavikADCypKEOW7gvaTweDCbx7ux7S1tBzE-qaPGbP2DEqgJ8GIch+4WDOAhgQuGSQVo-EbsOkm7lOsmHsep4Rs6BmGUZxkmaZJnoVet73lQj7PpItEcGJTFqSxv6pgB0kFLJoHgUpK4qeuX7qaOCGacB2lobpmFgE24CADLk4AAJLONkRwnERC5gdQVwsEonacXCREkWE5GkJRjDJDR+ypcc9GMclBxpRMLmCWxnwWlxQboh1hWqUFrlCY6AxWA5TkNTVJwSSFUncSB8m+Uu-lrh+U1uTNWkJJFGH6WZu17ftB1GRZ153kU7A2dkdmjfVKWHLVq0cH+Hmzd582KYtUGBbBwVrWFKERehcXRSAsUJeNd3pZQwwFAVVpQ8VZEUVRyTqKMAJWiExCEIQVW3U14BjXjtWIEoxAaMWGQsAu5CDA9bXPLD7z5ENjPLWwMGUHBrFdV84gRgElPU7TjxCearM2oR9NfLU9husz7ghu6bLZtq+AjaJN2NcTpPk4LlA0w9O5IS9clzu9ylsxzXO-cbG06dth2O07zsu38x1WWdD6XS+GsquDTUk2TFMaFT+vC-1glPYBJs+ebS19d9A1G55qGA8DsWtgxLCdlTIThBERXgH4AQpOEmRqOshGw-nhfVRDKQGP4OgaLjWvpVnTCIJopRWz9g0M68hU898EYVxzCbc0r0KD1aEt2sPvA7S6yRy2aySeiS3o0lS-qT-LjJsQS7Ib1iUZyoKIpisCu-SrK-LgIqjEZqQG8qwEOr-HqBrNfGN+iyj4tESSwXkvCq7BV6UA9KArekod4+kDPSRWLJ2DH1AWffksYjTwKTOfB+qYn6ZixG-D+fwBYhyFs0f+09LTwiAfPekfA6gQODIfZWwgtTvzVmeDW2cu6ZF7knUKttwqmwUhBeOX1OZ92Tibe2elXYKMUftd2p1zqUFsj7ZIjkeSd27uzQ27lo4bVjuIz6K0WoaWEf9Ta6FmxAA
### 💻 Code
```ts
/** SETUP ========================== */
function type<T>(): T { return null as unknown as T; }
type EventMap = Record<string, Event>;
type Listener<TEvent extends Event> = ((evt: TEvent) => void) | { handleEvent(object: TEvent): void };
export interface TypedEventEmitter<TEventMap extends EventMap> {
addEventListener<TEventType extends keyof TEventMap>(
type: TEventType,
listener: Listener<TEventMap[TEventType]>,
options?: AddEventListenerOptions | boolean,
): void;
removeEventListener<TEventType extends keyof TEventMap>(
type: TEventType,
listener: Listener<TEventMap[TEventType]>,
options?: EventListenerOptions | boolean,
): void;
}
/** END SETUP ========================== */
/**
* ✅ Sanity test
*/
type<TypedEventEmitter<{ foo: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
// @ts-expect-error
type<TypedEventEmitter<{ bar: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
/**
* ✅ Alias of the original type
*/
type CoolEventEmitter<TEventMap extends EventMap> = TypedEventEmitter<TEventMap>;
type<CoolEventEmitter<{ foo: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
// @ts-expect-error
type<CoolEventEmitter<{ bar: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
/**
* ❌ Interface that simply extends the original type
*/
interface CoolInterfaceEventEmitter<TEventMap extends EventMap> extends TypedEventEmitter<TEventMap> { }
type<CoolInterfaceEventEmitter<{ foo: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
// @ts-expect-error
type<CoolInterfaceEventEmitter<{ bar: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ❌
/**
* ❌ Interface that extends the original type and adds stuff
*/
interface CoolInterfacePlusDispatchEventEmitter<TEventMap extends EventMap> extends TypedEventEmitter<TEventMap> {
dispatchEvent<TEventType extends keyof TEventMap>(ev: TEventMap[TEventType]): void;
}
type<CoolInterfacePlusDispatchEventEmitter<{ foo: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
// @ts-expect-error
type<CoolInterfacePlusDispatchEventEmitter<{ bar: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ❌
/**
* ✅ Copy pasting the code instead of extending the interface works
*/
interface CopyPastedEventEmitter<TEventMap extends EventMap> {
addEventListener<TEventType extends keyof TEventMap>(
type: TEventType,
listener: Listener<TEventMap[TEventType]>,
options?: AddEventListenerOptions | boolean,
): void;
removeEventListener<TEventType extends keyof TEventMap>(
type: TEventType,
listener: Listener<TEventMap[TEventType]>,
options?: EventListenerOptions | boolean,
): void;
dispatchEvent<TEventType extends keyof TEventMap>(ev: TEventMap[TEventType]): void;
}
type<CopyPastedEventEmitter<{ foo: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
// @ts-expect-error
type<CopyPastedEventEmitter<{ bar: Event }>>() satisfies TypedEventEmitter<{ foo: Event }>; // ✅
```
### 🙁 Actual behavior
As soon as the original interface is extended – whether the extender adds properties, does not add properties, uses the type parameter, or doesn't use the type parameter – I seem to lose the ability to constrain types in the same way as the original interface can.
### 🙂 Expected behavior
I would expect an interface that extends another to constrain types in exactly the same way as the original.
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,537,129,955 | tensorflow | `tf.slice` triggers XLA recompilation on each call despite static shape, while `xla.dynamic_slice` does not | ### Issue type
Performance
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.18.0-dev20240919
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When using `tf.function` with `jit_compile=True`, I've observed that `tf.slice` leads to XLA recompilation at each function call, even when the tensor shapes are static. Explicitly using `tensorflow.compiler.tf2xla.python.xla.dynamic_slice` instead resolves this issue and prevents recompilation.
Expected behavior: `tf.slice` should not trigger XLA recompilation when shapes are static, similar to `xla.dynamic_slice`.
Actual behavior: `tf.slice` causes XLA recompilation on each call leading to a 100x slower performance.
Question: Shouldn't `tf.slice` be automatically converted to `xla.dynamic_slice` or an equivalent XLA operation to avoid unnecessary recompilation?
### Standalone code to reproduce the issue
```shell
import time
import tensorflow as tf
import tensorflow.compiler.tf2xla.python.xla as xla
@tf.function(jit_compile=True)
def func_xla_slice(n):
range = tf.range(100)
slice = xla.dynamic_slice(range, [n], [50])
return slice
@tf.function(jit_compile=True)
def func_tf_slice(n):
range = tf.range(100)
slice = tf.slice(range, [n], [50])
return slice
if __name__ == '__main__':
print(tf.version.VERSION)
# tracing calls
func_xla_slice(tf.constant(0))
func_tf_slice(tf.constant(0))
start = time.time()
for i in range(1, 50):
func_xla_slice(tf.constant(i))
print(f"XLA slice took {(time.time() - start) * 1000:.0f} ms")
start = time.time()
for i in range(1, 50):
func_tf_slice(tf.constant(i))
print(f"TF slice took {(time.time() - start) * 1000:.0f} ms")
```
### Relevant log output
```shell
XLA slice took 19 ms
TF slice took 2016 ms
```
| stat:awaiting tensorflower,comp:ops,comp:xla,type:performance,2.17 | low | Critical |
2,537,154,628 | storybook | Stop importing all of `@storybook/icons` in `@storybook/core` | For backwards compatibility, `@storybook/icon` is imported as `*` in two places:
1. The `Icons` component in `@storybook/core/components`: https://github.com/storybookjs/storybook/blob/next/code/core/src/components/components/icon/icon.tsx/#L35-L62. This component is deprecated, and the new icons are supposed to be used and imported individually by users/addon authors.
2. The globalisation in `@storybook/core/manager/globals`: https://github.com/storybookjs/storybook/blob/next/code/core/src/manager/globals/runtime.ts/#L25. This makes all of `@storybook/icons` globally available in the manager bundle. It ensures that `@storybook/icons` is only bundled once in the manager, but ideally this shouldn't be necessary at all, if icons were imported individually instead.
Currently the `components` and `manager` entrypoints of `@storybook/core` both have all 190kb of `@storybook/icons` bundled in, which is significant given that icons could instead be imported individually, some being <1kb in size.
- `components`: https://635781f3500dd2c49e189caf-rjflqvpgls.chromatic.com/?path=/story/bench--es-build-analyzer&args=metafile:core_SLASH_components_DOT_esm_DOT_json
- `manager`: https://635781f3500dd2c49e189caf-rjflqvpgls.chromatic.com/?path=/story/bench--es-build-analyzer&args=metafile:core_SLASH_manager_DOT_esm_DOT_json
In Storybook 9.0 we should remove these wildcard imports so only the used icons are bundled in. | BREAKING CHANGE,components,core | low | Minor |
2,537,158,401 | angular | Add recommended tools/extensions as part of the getting started tutorial | ### Describe the problem that you experienced
After watching [this Twitch stream](https://www.twitch.tv/mouredev) I realized that lots of folks who get started with Angular are probably using a suboptimal configuration of their text editor/IDE when building their first app.
We should recommend extensions/tools before people get started:
- Angular language service with VSCode
- WebStorm if folks are JetBrains users
### Enter the URL of the topic with the problem
https://angular.dev/tutorials/learn-angular
https://angular.dev/tutorials/first-app | P2,area: docs | low | Minor |
2,537,179,138 | PowerToys | Please add a gif/ png/ sprites viewer for those of us who like to work with pixel art. | ### Description of the new feature / enhancement
I was hoping to find a File Explorer add-on that shows a clear preview image of my pixel art, yet just like the Windows photo viewer, the preview is still blurry. I have searched for ways to turn off the anti-aliasing/smoothing effect with no success. It would be nice if PowerToys added this feature.
### Scenario when this would be used?
This would be used when I am searching for a specific sprite I created. I do voxel art and cross-stitch pattern creations, and I am pretty sure this would benefit game developers using pixel sprites as well. Please and thank you.
### Supporting information
### Photo example of File Explorer Preview Pane

### Photo Example on a Zoomed in default Windows 11 Photo Viewer
 | Idea-New PowerToy,Needs-Triage,Product-Peek | low | Minor |
2,537,248,490 | godot | `ERROR: Parameter "data.tree" is null.` while `is_inside_tree()` is true. | ### Tested versions
4.4.dev
### System information
Windows 10
### Issue description
Produced on a custom build.
Dumping the backtrace. Please include this when reporting the bug to the project developer.
couldn't map PC to fn name
.....

This shouldn't happen! The only explanation is that it was caused by removing a child `OptionButton` while its popup is visible and `WINDOW_EVENT_MOUSE_EXIT` was called for the popup.

### Steps to reproduce
It will be hard to provide an MRP. I'm using a custom `Inspector`: I clear all the children, then re-add them when `property_list_changed` is called, just to hide/show a property based on its "usage".
https://github.com/user-attachments/assets/7163ee04-1159-4dc2-a9f8-7a4df1f9fcbc
### Minimal reproduction project (MRP)
N/A | bug,topic:core,needs testing,crash | low | Critical |
2,537,249,667 | rust | regression: cannot move a value of type `[u8]` | - https://crater-reports.s3.amazonaws.com/beta-1.82-rustdoc-1/beta-2024-09-05/reg/icu_locale_canonicalizer-0.6.0/log.txt
```
[INFO] [stdout] error[E0161]: cannot move a value of type `[u8]`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/zerovec-0.8.1/src/flexzerovec/slice.rs:23:5
[INFO] [stdout] |
[INFO] [stdout] 17 | #[derive(Eq, PartialEq)]
[INFO] [stdout] | --------- in this derive macro expansion
[INFO] [stdout] ...
[INFO] [stdout] 23 | data: [u8],
[INFO] [stdout] | ^^^^^^^^^^ the size of `[u8]` cannot be statically determined
[INFO] [stdout] |
[INFO] [stdout] = note: this error originates in the derive macro `PartialEq` (in Nightly builds, run with -Z macro-backtrace for more info)
```
note: if the relevant team already accepted this breakage then this issue can be closed | regression-from-stable-to-stable | low | Critical |
2,537,293,471 | godot | StreamPeerGzip::finish() fails when compressing with Condition "err != (p_close ? 1 : 0)" is true. Returning: FAILED | ### Tested versions
Reproduced in the following versions
- v4.3.stable.official [77dcf97d8]
- v4.3.stable.mono.official [77dcf97d8]
### System information
Windows 11 Pro. i5-12600k
### Issue description
When attempting to compress some "large" data (a ~6 kB JSON string), calling finish() fails with the following error, and only the 10-byte gzip header is returned.
```
E 0:00:28:0506 global.gd:51 @ Gzip(): Condition "err != (p_close ? 1 : 0)" is true. Returning: FAILED
<C++ Source> core/io/stream_peer_gzip.cpp:115 @ _process()
```
[Link to Source](https://github.com/godotengine/godot/blob/77dcf97d82cbfe4e4615475fa52ca03da645dbd8/core/io/stream_peer_gzip.cpp#L115)
## My Code
```gdscript
static func Gzip(data: PackedByteArray, unzip := false) -> PackedByteArray:
var gzip = StreamPeerGZIP.new()
var err
if unzip:
err = gzip.start_decompression()
else:
err = gzip.start_compression()
print("GZIP Start: " + str(err))
err = gzip.put_data(data)
print("GZIP Put Data: " + str(err))
if !unzip:
err = gzip.finish() #TODO: This throws an error when output is too large
print("GZIP Finish: " + str(err))
# Get all zipped content into one array
var bytes: PackedByteArray = []
while gzip.get_available_bytes() > 0:
var res = gzip.get_partial_data(65535)
if res[0] != 0:
push_error("Error processing gzip data")
return []
bytes.append_array(res[1])
return bytes
```
## Output When Error Occurs
```
GZIP Start: 0
GZIP Put Data: 0
GZIP Finish: 1
```
### Steps to reproduce
In my tests, this happens when the compressed **output** data is larger than 1034 bytes. In my case, this happened when trying to compress anything larger than ~6 kB of JSON string data (with no whitespace).
Simply call the Gzip() func above, passing in data that will compress to anything larger than 1034 bytes
[lorem_bad.txt](https://github.com/user-attachments/files/17065890/lorem_bad.txt)
[lorem_bad.txt.gz](https://github.com/user-attachments/files/17065892/lorem_bad.txt.gz)
[lorem_ok.txt](https://github.com/user-attachments/files/17065893/lorem_ok.txt)
[lorem_ok.txt.gz](https://github.com/user-attachments/files/17065895/lorem_ok.txt.gz)
### Minimal reproduction project (MRP)
[gzip_mrp.zip](https://github.com/user-attachments/files/17065967/gzip_mrp.zip)
| bug,topic:core,confirmed | low | Critical |
2,537,329,851 | godot | Parser error on lambda corner case | ### Tested versions
- Reproducible in `master` latest (0a4aedb36065f66fc7e99cb2e6de3e55242f9dfb)
### System information
Ubuntu 24
### Issue description
The following script:
```gdscript
extends Node
func foo():
pass
func _ready() -> void:
get_tree().create_timer(1.0).timeout.connect(foo) # works
get_tree().create_timer(1.0).timeout.connect(func(): foo()) # works
get_tree().create_timer(1.0).timeout.connect(
func():
foo()
) # works
(
get_tree()
. create_timer(1.0)
. timeout
. connect(foo)
) # works
(
get_tree()
. create_timer(1.0)
. timeout
. connect(func(): foo())
) # works
(
get_tree()
. create_timer(1.0)
. timeout
. connect(
func(): foo()
)
) # works
(
get_tree()
. create_timer(1.0)
. timeout
. connect(
func():
foo())
) # works !
(
get_tree()
. create_timer(1.0)
. timeout
. connect(
func():
foo()
)
) # doesn't work !
```
yields
```
SCRIPT ERROR: Parse Error: Unindent doesn't match the previous indentation level.
at: GDScript::reload (res://tests/potential-godot-bugs/lambda_corner_case.gd:47)
ERROR: Failed to load script "res://tests/potential-godot-bugs/lambda_corner_case.gd" with error "Parse error".
at: load (modules/gdscript/gdscript.cpp:3005)
```
while IMO it should parse correctly.
### Steps to reproduce
Run the above using `godot --headless --check-only -s <script_name>`
### Minimal reproduction project (MRP)
See above. | bug,topic:gdscript | low | Critical |
2,537,331,893 | rust | -plugin-opt linker flags require a leading dash when using GCC as a linker | Hi.
I was testing cross-language LTO with rustc_codegen_gcc (using GCC as the linker) and I needed to add a leading dash to the flags in [this code](https://github.com/rust-lang/rust/blob/d5a081981d166f6542a050d17035424062ef4aae/compiler/rustc_codegen_ssa/src/back/linker.rs#L395-L396) in order to make cross-language LTO work, so that it looks like this:
```rust
self.link_args(&[
&format!("-plugin-opt=-{opt_level}"),
&format!("-plugin-opt=-mcpu={}", self.target_cpu),
]);
```
Otherwise, GCC will error out with:
```
lto1: fatal error: open O3 failed: No such file or directory
compilation terminated.
lto-wrapper: fatal error: gcc returned 1 exit status
```
It would be nice to have the code working for both GCC and clang.
How would you support both?
Interestingly, `clang` seems to support a leading dash in the case of `-plugin-opt=-mcpu=x86-64`, but not `-plugin-opt=-O3`.
Thanks. | A-codegen,T-compiler,A-gcc | low | Critical |
2,537,380,658 | TypeScript | Proposal: Allow isolated declarations to infer results of constructor calls | ### 🔍 Search Terms
isolated declarations constructor generic infer
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Add a new flag or change the behavior of `--isolatedDeclarations` so that in the following code which is currently disallowed by isolated declarations:
```ts
export const foo = new Foo(a);
```
if the type of `foo` is not `Foo`, there is a typecheck error. In return, the type of `foo` will be emitted as `Foo`.
Similarly, if somehow in
```
export const foo: new Foo<A>(a);
```
`foo` is not a `Foo<A>`, there is a typecheck error.
### 📃 Motivating Example
Currently under isolated declarations, code must redundantly declare the type of a variable which is the result of a constructor call.
```ts
export const foo: Foo = new Foo(a);
```
This setting would also eliminate https://github.com/microsoft/TypeScript/issues/59768
### 💻 Use Cases
^ | Suggestion,Experimentation Needed | low | Critical |
2,537,402,903 | rust | "Reference to data owned by current function" for a function owning no data. | ### Code
```rs
fn f(a: &mut Vec<u64>) -> &[u64] {
let v = a;
v.push(1);
&v
}
```
### Current output
```
error[E0515]: cannot return reference to local variable `v`
--> src/main.rs:4:5
|
4 | &v
| ^^ returns a reference to data owned by the current function
```
### Desired output
```
Type error: expected `&[u64]`, found `& &mut Vec<u64>`.
Suggestion: change `&v` to `v`.
```
### Rationale and extra context
It seems like the autoderef is making the borrowck confused here, or something? I had trouble interpreting the message because the function owns no data (only copy-able refs).
> @Kyuuhachi I don't know the exact details, but derefing a `&'local &'self mut [u64]` gives a `&'local [u64]`, not a `&'self [u64]` [[1]](https://discord.com/channels/442252698964721669/443150878111694848/1286433267351818302)
> @Kyuuhachi Kind of a weird edge case here I think, because if instead chose to first relax the mutref into a `&'local &'self [u64]` it would be able to get a `&'self [u64]` from that [[2]](https://discord.com/channels/442252698964721669/443150878111694848/1286433653894680657)
Not sure how to help more?
### Other cases
Changing `&v` to `v` makes it compile without error.
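For reference, the compiling variant looks like this; returning `v` directly (no extra `&`) reborrows the `&mut Vec<u64>` as a `&[u64]` tied to the input lifetime, so borrowck accepts it:

```rust
fn f(a: &mut Vec<u64>) -> &[u64] {
    let v = a;
    v.push(1);
    // `v`, not `&v`: deref coercion in return position yields a &[u64]
    // borrowed from the caller's Vec, not from a local.
    v
}

fn main() {
    let mut a = vec![7u64];
    assert_eq!(f(&mut a), &[7, 1]);
}
```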
### Rust Version
```
$ rustc --version --verbose
rustc 1.83.0-nightly (f79a912d9 2024-09-18)
binary: rustc
commit-hash: f79a912d9edc3ad4db910c0e93672ed5c65133fa
commit-date: 2024-09-18
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Anything else?
_No response_ | A-diagnostics,A-borrow-checker,T-compiler,D-confusing | low | Critical |
2,537,434,693 | deno | test --doc: allow import statements to be in a separate code block | I'm trying out the new `deno test --doc` feature in the Deno 2 release candidate. It works, but might be nicer if I didn't have to write out imports in each code block.
Version: 2.0.0-rc.4
## Example
````md
Here is an import that will be needed in the following examples:
```ts
import { assertEquals } from "@std/assert";
```
Here's an example of how to use assertEquals:
```ts
assertEquals("hello".length, 5);
```
Here's a second example:
```ts
assertEquals([1, 2, 3], [1, 2, 3]);
```
````
## What it does now:
```
Check file:///Users/skybrian/Projects/deno/repeat-test/docs/example.md$3-6.ts
Check file:///Users/skybrian/Projects/deno/repeat-test/docs/example.md$9-12.ts
Check file:///Users/skybrian/Projects/deno/repeat-test/docs/example.md$15-18.ts
error: TS2304 [ERROR]: Cannot find name 'assertEquals'.
assertEquals([
~~~~~~~~~~~~
at file:///Users/skybrian/Projects/deno/repeat-test/docs/example.md$15-18.ts:2:5
TS2304 [ERROR]: Cannot find name 'assertEquals'.
assertEquals("hello".length, 5);
~~~~~~~~~~~~
at file:///Users/skybrian/Projects/deno/repeat-test/docs/example.md$9-12.ts:2:5
Found 2 errors.
```
## What I'd like it to do:
Don't report any errors, because the imports were declared in a previous code block.
## How would it work?
I don't know what's best, but here are some alternatives:
* When there is a code block that *only* contains imports, automatically prepend it to each of the following code blocks.
* When there is a code block that has *no* imports, reuse the imports from the previous code block.
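The first of these alternatives could be sketched roughly as follows (a hypothetical illustration; `expand_blocks` and its regex are assumptions, not Deno internals):

```python
import re

# "```" is written as "`" * 3 so this sketch can itself live in a fenced block.
FENCE = "`" * 3

def expand_blocks(markdown: str) -> list[str]:
    """Sketch of the first alternative: a ts code block containing only
    import statements is prepended to every later ts block."""
    blocks = re.findall(FENCE + r"ts\n(.*?)" + FENCE, markdown, re.S)
    prefix = ""
    expanded = []
    for block in blocks:
        lines = [l for l in block.splitlines() if l.strip()]
        if lines and all(l.lstrip().startswith("import ") for l in lines):
            prefix += block  # import-only block: carry it forward
        else:
            expanded.append(prefix + block)
    return expanded
```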
Or, perhaps both rules? | suggestion,testing | low | Critical |
2,537,461,960 | angular | Wrong content children order when using ngTemplateOutlet to render | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
Content children should follow DOM order, but conditionally rendering with `ngTemplateOutlet` in a different order than the `ng-template` declaration order seems to confuse Angular. I would expect the order to be based on the resulting DOM and not on the `ng-template` declaration order.
There should be no difference between:
```html
<app-parent>
<app-child [order]="1"></app-child>
<app-child [order]="2"></app-child>
</app-parent>
```
and
```html
<app-parent> <!-- WRONG ORDER -->
<ng-container *ngTemplateOutlet="one"></ng-container>
<ng-container *ngTemplateOutlet="two"></ng-container>
<ng-template #two>
<app-child [order]="2"></app-child>
</ng-template>
<ng-template #one>
<app-child [order]="1"></app-child>
</ng-template>
</app-parent>
```
and
```html
<app-parent> <!-- NOT WORKING AT ALL -->
<ng-container *ngTemplateOutlet="one2"></ng-container>
<ng-container *ngTemplateOutlet="two2"></ng-container>
</app-parent>
<ng-template #two2>
<app-child [order]="2"></app-child>
</ng-template>
<ng-template #one2>
<app-child [order]="1"></app-child>
</ng-template>
```
I might be doing something wrong - but is there a solution for it? I know switching the templates order "fixes" it but I can't do that and that's not a real fix.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-5qy9kg?file=src%2Fmain.ts
### Please provide the exception or error you saw
```true
The third example is out of order while the fourth example isn't even working
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular 18
```
### Anything else?
_No response_ | area: core,core: queries | low | Critical |
2,537,475,667 | rust | Incorrect diagnostic "pattern requires `..` due to inaccessible fields" when matching in a macro | ### Code
```rust
struct A {
field1: String,
field2: String,
}
fn test(x: A) {
macro_rules! weird {
() => { let A { field1 } = x; }
}
weird!();
}
```
### Current output
```
error: pattern requires `..` due to inaccessible fields
--> a.rs:8:21
|
8 | () => { let A { field1 } = x; }
| ^^^^^^^^^^^^
9 | }
10 | weird!();
| -------- in this macro invocation
|
= note: this error originates in the macro `weird` (in Nightly builds, run with -Z macro-backtrace for more info)
help: ignore the inaccessible and unused fields
|
8 | () => { let A { field1, .. } = x; }
| ++++
```
### Desired output
```
error[E0027]: pattern does not mention field `field2`
--> a.rs:8:21
|
8 | () => { let A { field1 } = x; }
| ^^^^^^^^^^^^ missing field `field2`
|
```
### Rationale and extra context
It seems this error message gets generated whenever an incomplete pattern shows up within a macro invocation, whether or not the fields are actually inaccessible.
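For what it's worth, the suggested `..` form does compile here, since the fields are in fact accessible, which underlines that the "inaccessible fields" wording is wrong. A minimal sketch:

```rust
struct A {
    field1: String,
    field2: String,
}

fn test(x: A) -> String {
    // Either mention every field or ignore the rest with `..`;
    // both are accepted, inside or outside the macro.
    let A { field1, .. } = x;
    field1
}

fn main() {
    let a = A { field1: String::from("a"), field2: String::from("b") };
    assert_eq!(test(a), "a");
}
```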
### Other cases
_No response_
### Rust Version
rustc 1.83.0-nightly (f79a912d9 2024-09-18)
binary: rustc
commit-hash: f79a912d9edc3ad4db910c0e93672ed5c65133fa
commit-date: 2024-09-18
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
### Anything else?
_No response_ | A-diagnostics,A-macros,T-compiler,D-confusing,A-hygiene | low | Critical |
2,537,483,221 | vscode | Font Height is different than it should be | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1
- OS Version: Windows 10 19045.4894
Steps to Reproduce:
I wanted to apply the JetBrains Fleet editor appearance to VS Code, so I started by:
1. Noting the default font settings from Jetbrains Fleet:
**Font:** Jetbrains Mono
**Font Size:** 13.0
**Line height:** 1.7
2. Converting it to settings.json for VSCode:
```
"editor.fontFamily": "Jetbrains Mono",
"editor.lineHeight": 1.7,
"editor.fontSize": 13,
```
But the font height (not the line height) was lower compared to the Jetbrains Fleet editor, as if it were compressed. Here's a comparison:
Left is VSCode, right is Fleet.

| font-rendering,under-discussion,confirmation-pending | low | Critical |
2,537,511,064 | yt-dlp | [ie/neteasemusic] Bypassing Geo with `X-Real-IP` header | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China
### Example URLs
http://music.163.com/#/song?id=478393
### Provide a description that is worded well enough to be understood
Currently, the extractor requires a proxy to bypass geo-restriction. Although `X-Forwarded-For` does not work (hence the extractor has `_GEO_BYPASS = False`), the restriction can be bypassed by sending an `X-Real-IP: 118.88.88.88` header with the API request, without the need for an actual proxy.
https://github.com/yt-dlp/yt-dlp/blob/4a9bc8c3630378bc29f0266126b503f6190c0430/yt_dlp/extractor/neteasemusic.py#L79-L83
```diff
def _call_player_api(self, song_id, level):
return self._download_eapi_json(
'/song/enhance/player/url/v1', song_id,
{'ids': f'[{song_id}]', 'level': level, 'encodeType': 'flac'},
+ headers={'X-Real-IP': '118.88.88.88'},
note=f'Downloading song URL info: level {level}')
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[netease:song] Extracting URL: http://music.163.com/#/song?id=478393
[netease:song] 478393: Downloading song info
[netease:song] 478393: Downloading song URL info: level standard
ERROR: [netease:song] 478393: No media links found; possibly due to geo restriction
This video is available in China.
You might want to use a VPN or a proxy server (with --proxy) to workaround.
Traceback (most recent call last):
File "F:\cbasalt-github\yt-dlp\yt_dlp\YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
File "F:\cbasalt-github\yt-dlp\yt_dlp\YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
File "F:\cbasalt-github\yt-dlp\yt_dlp\extractor\common.py", line 740, in extract
ie_result = self._real_extract(url)
File "F:\cbasalt-github\yt-dlp\yt_dlp\extractor\neteasemusic.py", line 270, in _real_extract
formats = self._extract_formats(info)
File "F:\cbasalt-github\yt-dlp\yt_dlp\extractor\neteasemusic.py", line 114, in _extract_formats
self.raise_geo_restricted(
File "F:\cbasalt-github\yt-dlp\yt_dlp\extractor\common.py", line 1254, in raise_geo_restricted
raise GeoRestrictedError(msg, countries=countries)
yt_dlp.utils.GeoRestrictedError: [netease:song] 478393: No media links found; possibly due to geo restriction
```
| site-enhancement,triage | low | Critical |
2,537,513,789 | deno | Full management of versions à la `nvm` with offline/cache capabilities and wildcards | # Observation
Deno added convenient ways to upgrade, like installing without `--version` or running `deno upgrade [rc|lts]`.
But it could still be better. Versions must be given exactly; you can't just ask for `1.46`, `1.46.*`, or `~1.46`.
It would also be good to have a full listing of the available versions, including RCs and LTS releases, in an `nvm list-remote` fashion.
A cherry on top would be offline resolution of the requested version when the cached tag list is fresh, say less than a day old. Similar to `deno run -r, --reload`, a reload option would refresh the cache.
# Subcommands suggestion
```bash
deno ls-remote, list-remote [lts|*wildcard*] # What is available online
deno list [lts|*wildcard*] # offline list. A `-r, --remote` could make the `ls-remote` useless
deno upgrade [lts] # Already exists, but needs *wildcards*
deno prune [one-version|*wildcard*] [-a, --all-unused, -d --dangling] [-o, --outdated] # outdated will remove all past versions, so if 1.46.3 is the current, 1.46.{0,1,2} will be deleted
```
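The wildcard resolution suggested above could work roughly like this minimal sketch (assumed semantics, not Deno's actual resolver): match each dotted component of the pattern against the version, treating `*` as a wildcard and a shorter pattern like `1.46` as a prefix, then pick the highest matching version from the cached list.

```rust
// Hypothetical wildcard matcher: `*` matches any component, and a pattern
// with fewer components than the version acts as a prefix (so "1.46"
// matches "1.46.3").
fn matches(pattern: &str, version: &str) -> bool {
    let pat: Vec<&str> = pattern.split('.').collect();
    let ver: Vec<&str> = version.split('.').collect();
    pat.len() <= ver.len()
        && pat.iter().zip(ver.iter()).all(|(p, v)| *p == "*" || p == v)
}

// Resolve a pattern against a cached list of released versions by taking
// the numerically highest match (lexicographic compare of parsed parts).
fn resolve<'a>(pattern: &str, available: &[&'a str]) -> Option<&'a str> {
    available
        .iter()
        .copied()
        .filter(|v| matches(pattern, v))
        .max_by_key(|v| {
            v.split('.')
                .map(|c| c.parse::<u64>().unwrap_or(0))
                .collect::<Vec<u64>>()
        })
}
```

With this, `resolve("1.46.*", ...)` and `resolve("1.46", ...)` would both pick `1.46.3` from a list containing `1.46.0` through `1.46.3`, which is the behavior the subcommands above assume.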
See past issues:
- https://github.com/denoland/deno/issues/25035
- https://github.com/denoland/deno/issues/18440
- https://github.com/denoland/deno/issues/18406 | cli,suggestion | low | Minor |
2,537,540,854 | PowerToys | Unable to open Settings or Quick Access | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
General, Settings
### Steps to reproduce
Single or double click system tray icon OR right click system tray icon and select Quick Access or Settings
[PowerToysReport_2024-09-19-16-47-59.zip](https://github.com/user-attachments/files/17067989/PowerToysReport_2024-09-19-16-47-59.zip)
I have:
- Uninstalled PowerToys
- Tried different versions
- Tried different sources
- Tried local vs. machine-wide installs
I have been using PowerToys on this machine (work issued) since approximately v0.59. At some point this issue began and has persisted. At some point, I noticed Dotnet 8 was a dependency but was not being installed. I have manually installed Dotnet 8.
I am able to open certain modules by utilizing their hotkeys.
### ✔️ Expected Behavior
Quick Access to open or Settings menu to open
### ❌ Actual Behavior
Nothing
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,537,594,998 | rust | `Range<usize>::next` should fully MIR-inline | If you compile this to optimized MIR:
```rust
// @compile-flags: -O -C debuginfo=0 --emit=mir -Z inline-mir-threshold=9999 -Z inline-mir-hint-threshold=9999
use std::ops::Range;
#[no_mangle]
pub fn demo(num: &mut Range<usize>) -> Option<usize> {
num.next()
}
```
<https://rust.godbolt.org/z/zsh6b6Y8n>
You'll see that it still contains a call to `forward_unchecked`:
```rust
bb1: {
_3 = copy ((*_1).0: usize);
StorageLive(_4);
_4 = <usize as Step>::forward_unchecked(copy _3, const 1_usize) -> [return: bb2, unwind continue];
}
```
That's pretty unfortunate, because `forward_unchecked(a, 1)` is just `AddUnchecked(a, 1)`, a single MIR operator.
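For comparison, here is a hand-inlined sketch of what `Range::<usize>::next` reduces to once `forward_unchecked` is folded away (this is illustrative source code, not the actual MIR the inliner would produce): the unchecked add cannot overflow because `start < end <= usize::MAX`.

```rust
use std::ops::Range;

// Hand-inlined equivalent of `Range::<usize>::next`: the call to
// `<usize as Step>::forward_unchecked(n, 1)` is just an unchecked add,
// which is safe here since `n < num.end` implies `n + 1` cannot overflow.
pub fn next_inlined(num: &mut Range<usize>) -> Option<usize> {
    if num.start < num.end {
        let n = num.start;
        num.start = n + 1; // forward_unchecked(n, 1) == AddUnchecked(n, 1)
        Some(n)
    } else {
        None
    }
}
```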
| T-compiler,A-MIR,A-mir-opt-inlining,C-optimization | low | Critical |
2,537,606,949 | ollama | Fetch model by hash | An endpoint or a CLI command to fetch an ollama "model" by hash (e.g. as found in `manifests/registry.ollama.ai/library/llama3.1/latest`) instead of by a human-assigned tag would help e.g. with offline usage of ollama (downloading models upfront in a sandbox and exposed to ollama in a read-only form)
(not a duplicate of https://github.com/ollama/ollama/issues/2003) | feature request | low | Minor |