| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,523,117,193 | material-ui | [docs] Outdated version entry should be cleaned up and removed from version table | ### Related page
https://mui.com/versions/
### Kind of issue
Other
### Issue description
We should not have a stray minor version, 5.0.6 (which is not up to date with the latest v5), showing up in the version table.
This gives readers the false impression that the latest version of v5 is v5.0.6.
<img width="809" alt="Screenshot 2024-09-12 at 11 49 08 AM" src="https://github.com/user-attachments/assets/edde43a7-0cba-49b9-a855-e5d0c567d37d">
### Context
The branch named v5.0.6 in https://github.com/mui/material-ui-docs/branches can probably be removed.
**Search keywords**: released versions | docs,support: docs-feedback | low | Minor |
2,523,118,082 | rust | ICE `Option::unwrap()` in `rmeta/def_path_hash_map.rs` | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
I'm unable to share the code that triggered this ICE, since the codebase is proprietary. Hopefully the report is still helpful.
I'm not able to reproduce this ICE after running `cargo clean`.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: aarch64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
```
### Error output
```
Checking application v0.1.0 (/Users/ethan.brierley/Code/ledger/main/src/entrypoints/application)
thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/def_path_hash_map.rs:23:54:
called `Option::unwrap()` on a `None` value
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on aarch64-apple-darwin
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@<db::event_store::DbEventStore as domain::providers::event_store::LockableEventStore>::locked_aggregate_handle_by_wallet_id::{closure#0}}: core::marker::Send`
#1 [typeck] type-checking `startup::core_banking_consumers::core_banking_commands_consumers`
#2 [analysis] running analysis passes on this crate
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 TraitSelect(756c14c210d65bf9-90b591045b61c8d6)
#1 TraitSelect(12a95b99ae593ae2-2b6ef6be077df5e1)
#2 TraitSelect(e41661c9028f2833-242c893ffb92d34e)
#3 TraitSelect(dbbfee8674ecfee9-2d6f9dcc9bd34015)
#4 evaluate_obligation(59acda829add6985-25009eb1f1feac53)
end of try_mark_green dep node stack
error: could not compile `application` (lib)
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
stack backtrace:
0: _rust_begin_unwind
1: core::panicking::panic_fmt
2: core::panicking::panic
3: core::option::unwrap_failed
4: <rustc_metadata::rmeta::decoder::cstore_impl::provide_cstore_hooks::{closure#0} as core::ops::function::FnOnce<(rustc_middle::query::plumbing::TyCtxtAt
5: <rustc_middle::ty::context::TyCtxt>::def_path_hash_to_def_id
6: <rustc_query_impl::plumbing::query_callback<rustc_query_impl::query_impl::type_of::QueryType>::{closure#0} as core::ops::function::FnOnce<(rustc_middle
7: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtx
8: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtx
9: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtx
10: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtx
11: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtx
12: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl::plumbing::QueryCtxt>
13: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefaultCache<rustc_type_ir:
14: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation
15: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation_no_overflow
16: <rustc_trait_selection::traits::fulfill::FulfillProcessor>::process_trait_obligation
17: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
18: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::
19: <rustc_trait_selection::traits::fulfill::FulfillmentContext<rustc_trait_selection::traits::FulfillmentError> as rustc_infer::traits::engine::TraitEngin
20: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_argument_types
21: <rustc_hir_typeck::fn_ctxt::FnCtxt>::confirm_builtin_call
22: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_kind
23: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
24: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_kind
25: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_kind
27: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
28: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_argument_types
29: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_method_argument_types
30: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_kind
31: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
32: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_block_with_expected
33: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
34: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_return_expr
35: rustc_hir_typeck::check::check_fn
36: rustc_hir_typeck::typeck
[... omitted 2 frames ...]
37: <rustc_data_structures::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures::sync::parallel::disabled::par_for_each_in<&[rustc_span::def_id
38: rustc_hir_analysis::check_crate
39: rustc_interface::passes::analysis
[... omitted 2 frames ...]
40: <rustc_interface::queries::QueryResult<&rustc_middle::ty::context::GlobalCtxt>>::enter::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}::{closure#1}::{closure#5}>
41: <rustc_interface::interface::Compiler>::enter::<rustc_driver_impl::run_compiler::{closure#0}::{closure#1}, core::result::Result<core::option::Option<rustc_interface::queries::Linker>, rustc_span::ErrorGuaranteed>>
42: <scoped_tls::ScopedKey<rustc_span::SessionGlobals>>::set::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
43: rustc_span::create_session_globals_then::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
</p>
</details> | I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,523,123,720 | pytorch | ☂️ 150+ MacOS tests were marked flaky recently | ### 🐛 Describe the bug
Query to see all the disabled tests https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+DISABLED+test_comprehensive+created%3A%3E2024-08-31
Test-infra PR that prevents bot from disabling more of those https://github.com/pytorch/test-infra/pull/5664
All test failures exhibit the same pattern. One of the tests will fail with
```
OMP: Error #179: Function pthread_key_create failed:
2024-09-12T17:29:55.9079710Z OMP: System error #35: Resource temporarily unavailable
```
This looks like a semaphore leak. All subsequent tests then fail, but succeed on re-run, which, as far as I understand, happens in the context of a new process.
The flaky-detection bot was not designed to mass-disable tests; we should try to root-cause what introduced the flakiness ASAP:
- https://github.com/pytorch/pytorch/issues/135856
- https://github.com/pytorch/pytorch/issues/135855
- https://github.com/pytorch/pytorch/issues/135854
- https://github.com/pytorch/pytorch/issues/135853
- https://github.com/pytorch/pytorch/issues/135852
- https://github.com/pytorch/pytorch/issues/135851
- https://github.com/pytorch/pytorch/issues/135850
- https://github.com/pytorch/pytorch/issues/135849
- https://github.com/pytorch/pytorch/issues/135848
- https://github.com/pytorch/pytorch/issues/135847
- https://github.com/pytorch/pytorch/issues/135846
- https://github.com/pytorch/pytorch/issues/135845
- https://github.com/pytorch/pytorch/issues/135844
- https://github.com/pytorch/pytorch/issues/135843
- https://github.com/pytorch/pytorch/issues/135842
- https://github.com/pytorch/pytorch/issues/135841
- https://github.com/pytorch/pytorch/issues/135840
- https://github.com/pytorch/pytorch/issues/135839
- https://github.com/pytorch/pytorch/issues/135838
- https://github.com/pytorch/pytorch/issues/135815
- https://github.com/pytorch/pytorch/issues/135814
- https://github.com/pytorch/pytorch/issues/135813
- https://github.com/pytorch/pytorch/issues/135812
- https://github.com/pytorch/pytorch/issues/135811
- https://github.com/pytorch/pytorch/issues/135810
- https://github.com/pytorch/pytorch/issues/135808
- https://github.com/pytorch/pytorch/issues/135807
- https://github.com/pytorch/pytorch/issues/135806
- https://github.com/pytorch/pytorch/issues/135805
- https://github.com/pytorch/pytorch/issues/135804
- https://github.com/pytorch/pytorch/issues/135803
- https://github.com/pytorch/pytorch/issues/135802
- https://github.com/pytorch/pytorch/issues/135801
- https://github.com/pytorch/pytorch/issues/135799
- https://github.com/pytorch/pytorch/issues/135798
- https://github.com/pytorch/pytorch/issues/135797
- https://github.com/pytorch/pytorch/issues/135784
- https://github.com/pytorch/pytorch/issues/135782
- https://github.com/pytorch/pytorch/issues/135754
- https://github.com/pytorch/pytorch/issues/135753
- https://github.com/pytorch/pytorch/issues/135752
- https://github.com/pytorch/pytorch/issues/135751
- https://github.com/pytorch/pytorch/issues/135750
- https://github.com/pytorch/pytorch/issues/135749
- https://github.com/pytorch/pytorch/issues/135748
- https://github.com/pytorch/pytorch/issues/135747
- https://github.com/pytorch/pytorch/issues/135746
- https://github.com/pytorch/pytorch/issues/135745
- https://github.com/pytorch/pytorch/issues/135744
16 more issues fresh off the press:
- https://github.com/pytorch/pytorch/issues/135895
- https://github.com/pytorch/pytorch/issues/135896
- https://github.com/pytorch/pytorch/issues/135897
- https://github.com/pytorch/pytorch/issues/135898
- https://github.com/pytorch/pytorch/issues/135899
- https://github.com/pytorch/pytorch/issues/135900
- https://github.com/pytorch/pytorch/issues/135901
- https://github.com/pytorch/pytorch/issues/135902
- https://github.com/pytorch/pytorch/issues/135903
- https://github.com/pytorch/pytorch/issues/135904
- https://github.com/pytorch/pytorch/issues/135905
- https://github.com/pytorch/pytorch/issues/135906
- https://github.com/pytorch/pytorch/issues/135907
- https://github.com/pytorch/pytorch/issues/135908
- https://github.com/pytorch/pytorch/issues/135909
- https://github.com/pytorch/pytorch/issues/135910
And 16 more:
- https://github.com/pytorch/pytorch/issues/135937
- https://github.com/pytorch/pytorch/issues/135938
- https://github.com/pytorch/pytorch/issues/135939
- https://github.com/pytorch/pytorch/issues/135940
- https://github.com/pytorch/pytorch/issues/135941
- https://github.com/pytorch/pytorch/issues/135942
- https://github.com/pytorch/pytorch/issues/135943
- https://github.com/pytorch/pytorch/issues/135944
- https://github.com/pytorch/pytorch/issues/135945
- https://github.com/pytorch/pytorch/issues/135946
- https://github.com/pytorch/pytorch/issues/135947
- https://github.com/pytorch/pytorch/issues/135948
- https://github.com/pytorch/pytorch/issues/135949
- https://github.com/pytorch/pytorch/issues/135950
- https://github.com/pytorch/pytorch/issues/135951
- https://github.com/pytorch/pytorch/issues/135952
Another 16:
- https://github.com/pytorch/pytorch/issues/136014
- https://github.com/pytorch/pytorch/issues/136015
- https://github.com/pytorch/pytorch/issues/136016
- https://github.com/pytorch/pytorch/issues/136017
- https://github.com/pytorch/pytorch/issues/136018
- https://github.com/pytorch/pytorch/issues/136019
- https://github.com/pytorch/pytorch/issues/136020
- https://github.com/pytorch/pytorch/issues/136021
- https://github.com/pytorch/pytorch/issues/136022
- https://github.com/pytorch/pytorch/issues/136023
- https://github.com/pytorch/pytorch/issues/136024
- https://github.com/pytorch/pytorch/issues/136025
- https://github.com/pytorch/pytorch/issues/136026
- https://github.com/pytorch/pytorch/issues/136027
- https://github.com/pytorch/pytorch/issues/136028
- https://github.com/pytorch/pytorch/issues/136029
- https://github.com/pytorch/pytorch/issues/136030
### Versions
nightly
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD | high priority,triaged,module: macos,module: infra | low | Critical |
2,523,130,029 | PowerToys | PowerToys Run stops working in browsers after certain period of time | ### Microsoft PowerToys version
0.84.1
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
After using any browser for a long period of time, PowerToys Run simply stops working for it. Restoring apps from the taskbar works for every other app, except the browser that I've been using for a long time. After I restart that specific browser, it starts working again.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
[Uploading PowerToysReport_2024-09-12-20-53-37.zip…]()
| Issue-Bug,Needs-Triage | low | Minor |
2,523,147,725 | flutter | [Impeller] Devise DisplayList ops that express more fine grained rendering intent. | Today, display-list ops are fairly high level and there is a 1-1 relationship between the operations that can be performed on a display list builder and the opcodes supported by display lists.
Having more fine-grained "mid-level" ops might make optimizations easier to express. | engine,P3,e: impeller,team-engine,triaged-engine | low | Major |
2,523,152,207 | godot | Renaming a scene file to convert between binary and text format throws errors and makes scene unloadable | ### Tested versions
Reproducible in `4.2`, `4.3`, `4.4`
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6109) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads)
### Issue description
When you rename a scene file and change its extension in a way that changes the implied format (from binary to text or vice versa), the contents of the file are not actually converted, and various errors are thrown (somewhat unpredictably, depending on whether the file is open and which direction you're converting). The resulting scene file cannot be opened.




The expected behaviour is that the scene should automatically re-save itself in the new intended format based on the extension, and should be openable after rename. It is possible to work around this issue with the following steps:
- Make sure the scene is open, and is dirty (`*` in the title bar)
- Rename the file to the new extension
- Choose `Save As` from the open scene, and _overwrite_ the newly renamed scene
The above should just happen automatically with the rename.
### Steps to reproduce
- Rename any `.tscn` scene to `.scn`, OR:
- Rename any `.scn` scene to `.tscn`
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Critical |
2,523,196,812 | terminal | Command line "-F" (fullscreen) should work in an already running instance | WindowsTerminal 1.21.2408.23001
In an already-running WT, this command splits the current pane and focuses the old one.
`wt sp -p TCC33 -V ; move-focus left`
This command does the same thing, not making it fullscreen.
`wt -F sp -p TCC33 -V ; move-focus left`
Is that as expected? | Issue-Feature,Product-Terminal,Area-Remoting | low | Minor |
2,523,206,616 | deno | `process.stdout` and `stderr` backed by a tty should be instances of `node:tty` `WriteStream` | There's code out there that checks whether stdout is a tty by doing an instanceof check, e.g. in angular: https://github.com/angular/angular-cli/blob/d66aaa3ca458e05b535bec7c1dcb98b0e9c5202e/packages/angular/cli/src/utilities/color.ts#L14-L16
| bug,node compat | low | Minor |
2,523,231,410 | ant-design | DatePicker with multiple cannot be closed when using onOpenChange and open props | ### Reproduction link
[](https://stackblitz.com/edit/react-ezkqim?file=demo.tsx)
### Steps to reproduce
1. Click DatePicker input to open dropdown
2. Select a date
3. Click outside the DatePicker
### What is expected?
I would expect the dropdown to close and remain closed.
### What is actually happening?
The dropdown closes for a split second and then re-opens.
| Environment | Info |
| --- | --- |
| antd | 5.20.6 |
| React | 18.3.1 |
| System | macOS Sonoma Version 14.4.1 (23E224) |
| Browser | Google Chrome 128.0.6613.120 |
---
We rely on setting DatePicker's `open` prop for programmatically opening / closing the dropdown, so we've put `open` on state and use `onOpenChange` to set state when AntD internal APIs dictate `open` has changed.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,Inactive | low | Minor |
2,523,242,523 | kubernetes | API Server Keeps Using Open TCP Connections to Terminating Admission Controller Pods | ### What happened?
I am encountering an issue where the Kubernetes API server continues to use open TCP connections to an admission controller pod after it has been marked as unready and removed from the service endpoints.
### My Setup
- I have a `ValidatingWebhookConfiguration` pointing to my service `admission-controller`, which uses an `ignore` failure policy.
- The `admission-controller` deployment consists of multiple pods, each running an HTTP server. The service selector targets all pods in the deployment, leading to multiple endpoints.
- Upon pod termination, the goal is to perform a graceful shutdown by:
- Marking the pod as unready.
- Waiting until the pod is removed from the service endpoints.
- Handling inflight messages before shutting down the pod.
### The Issue
Once a pod is marked as unready, it is correctly removed from the service endpoints and does not receive new connections—this is expected behavior.
However, the pod continues to receive HTTP requests over old open TCP connections.
This leads to two major problems:
1. I cannot close the TCP connections on the server side because it may result in a potential loss of requests from the API server.
2. There is no defined deadline for when the connection gets closed on the client side, leaving it open indefinitely unless manually terminated.
Additionally, I discovered that the API server does not retry requests (by opening a new TCP connection) if it encounters a closed connection.
### Stress Test Results
I performed a test to further diagnose the issue:
1. Deployed one `admission-controller` pod in the deployment (intended to block all requests).
2. Ran the following command:
```
kubectl rollout restart deployment admission-controller
```
- A new pod was created, and the old pod began terminating.
- The old pod closed the server as soon as it became unready.
3. Sent 1000 requests to create a pod to simulate stress.
#### Expected Result
No pods should be created, as the `admission-controller` should block all the requests.
#### Actual Result
Some pods were created during each run. There appears to be a small window of time during which the cluster is unprotected: the API server does not receive a response from the webhook, and because of the `Ignore` failure policy it proceeds with pod creation.
### Impact
This is particularly concerning because during that window where old TCP connections are still in use, the API server can bypass the admission controller, leading to potential security risks.
### What I’ve Tried
I attempted to gracefully handle the shutdown by waiting for connections to close naturally, but I am unable to define a clear deadline or force the API server to retry on connection closure.
Any assistance or guidance on how to address this issue would be greatly appreciated.
### What did you expect to happen?
1. The TCP connections used by the API server should have a reasonable time-to-live, allowing the pod to wait long enough to ensure all existing TCP connections are properly closed upon termination.
2. I expect the API server to have a built-in retry mechanism to handle such failures.
### How can we reproduce it (as minimally and precisely as possible)?
As described above, using the setup and stress test detailed in this report.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.3-gke.1639000
```
Tested on both Minikube and GKE.</details>
### Cloud provider
<details>
Tested on both Minikube and GKE.
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/network,sig/api-machinery,triage/accepted | low | Critical |
2,523,267,467 | go | internal/trace: TestTraceStress failing on openbsd-386-72 builders | The openbsd-386-72 builders are consistently failing on https://build.golang.org/. There seems to be a theme of memory outs and time outs TestTraceStress. Here are a few samples:
Memory outs on internal/trace.TestTraceStress :
https://build.golang.org/log/b4d1346be8bd45d05571d33dea70ed26e46fe405
https://build.golang.org/log/f3aca232d11e5cedc57bb3e76f698d271c5d8d8d
https://build.golang.org/log/0de00ca53f8cd5b89dbfd1f8123c5718738cd7db
```
fatal error: runtime: out of memory
...
goroutine 112 gp=0x64c07328 m=nil [chan receive, 2 minutes]:
runtime.gopark(0x8280d08, 0x678831f4, 0xe, 0x7, 0x2)
/tmp/workdir/go/src/runtime/proc.go:435 +0xfa fp=0x64c35e60 sp=0x64c35e4c pc=0x80b797a
runtime.chanrecv(0x678831c0, 0x64c35ed3, 0x1)
/tmp/workdir/go/src/runtime/chan.go:639 +0x3bd fp=0x64c35e9c sp=0x64c35e60 pc=0x804db4d
runtime.chanrecv1(0x678831c0, 0x64c35ed3)
/tmp/workdir/go/src/runtime/chan.go:489 +0x1c fp=0x64c35eb0 sp=0x64c35e9c pc=0x804d75c
testing.(*T).Run(0x66f0e128, {0x826b913, 0x7}, 0x684de550)
/tmp/workdir/go/src/testing/testing.go:1831 +0x466 fp=0x64c35f34 sp=0x64c35eb0 pc=0x813f6b6
internal/trace_test.testTraceProg(0x66f0e128, {0x826c1c3, 0x9}, 0x8280ca4)
/tmp/workdir/go/src/internal/trace/trace_test.go:651 +0x22b fp=0x64c35f70 sp=0x64c35f34 pc=0x82190ab
internal/trace_test.TestTraceStress(0x66f0e128)
/tmp/workdir/go/src/internal/trace/trace_test.go:510 +0x37 fp=0x64c35f84 sp=0x64c35f70 pc=0x8218757
testing.tRunner(0x66f0e128, 0x8280b64)
/tmp/workdir/go/src/testing/testing.go:1764 +0x113 fp=0x64c35fe4 sp=0x64c35f84 pc=0x813e773
testing.(*T).Run.gowrap1()
/tmp/workdir/go/src/testing/testing.go:1823 +0x28 fp=0x64c35ff0 sp=0x64c35fe4 pc=0x813f7f8
runtime.goexit({})
/tmp/workdir/go/src/runtime/asm_386.s:1393 +0x1 fp=0x64c35ff4 sp=0x64c35ff0 pc=0x80bcfe1
created by testing.(*T).Run in goroutine 1
/tmp/workdir/go/src/testing/testing.go:1823 +0x447
FAIL internal/trace 154.958s
```
Time out (?) on internal/trace.TestTraceStress :
https://build.golang.org/log/3e7cb75f09a4ec9d1d32e047b7b951e2e75d7e0b
```
FAIL: TestTraceStress (128.27s)
--- FAIL: TestTraceStress/Default (128.27s)
exec.go:213: test timed out while running command: /tmp/workdir/go/bin/go run testdata/testprog/stress.go
trace_test.go:616: exit status 1
--- FAIL: TestTraceStressStartStop (9.68s)
--- FAIL: TestTraceStressStartStop/Default (9.68s)
exec.go:213: test timed out while running command: /tmp/workdir/go/bin/go run testdata/testprog/stress-start-stop.go
trace_test.go:616: signal: killed
--- FAIL: TestTraceManyStartStop (0.38s)
--- FAIL: TestTraceManyStartStop/Default (0.38s)
exec.go:213: test timed out while running command: /tmp/workdir/go/bin/go run testdata/testprog/many-start-stop.go
trace_test.go:614: stderr: SIGQUIT: quit
PC=0x26428f5b m=1 sigcode=0
goroutine 0 gp=0x56808368 m=1 mp=0x56842008 [idle]:
runtime.usleep(0x14)
runtime/sys_openbsd2.go:140 +0x19 fp=0x45a5aa84 sp=0x45a5aa74 pc=0x809ff99
runtime.sysmon()
runtime/proc.go:6075 +0xa6 fp=0x45a5aae0 sp=0x45a5aa84 pc=0x808e856
runtime.mstart1()
runtime/proc.go:1845 +0x72 fp=0x45a5aaf0 sp=0x45a5aae0 pc=0x8085ba2
runtime.mstart0()
runtime/proc.go:1802 +0x4b fp=0x45a5aafc sp=0x45a5aaf0 pc=0x8085b1b
runtime.mstart()
runtime/asm_386.s:275 +0x5 fp=0x45a5ab00 sp=0x45a5aafc pc=0x80bdad5
goroutine 1 gp=0x56808128 m=nil [semacquire]:
runtime.gopark(0x8754c50, 0x8b76b20, 0x12, 0x5, 0x4)
runtime/proc.go:435 +0xfa fp=0x568cdae0 sp=0x568cdacc pc=0x80b94fa
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.semacquire1(0x56a823e8, 0x0, 0x1, 0x0, 0x12)
runtime/sema.go:178 +0x27b fp=0x568cdb10 sp=0x568cdae0 pc=0x8095adb
sync.runtime_Semacquire(0x56a823e8)
runtime/sema.go:71 +0x35 fp=0x568cdb28 sp=0x568cdb10 pc=0x80ba9d5
sync.(*WaitGroup).Wait(0x56a823e0)
sync/waitgroup.go:118 +0x5f fp=0x568cdb44 sp=0x568cdb28 pc=0x80cc01f
cmd/go/internal/work.(*Builder).Do(0x56874230, {0x880c2a0, 0x8b7c060}, 0x56af3c88)
cmd/go/internal/work/exec.go:231 +0x419 fp=0x568cdbdc sp=0x568cdb44 pc=0x85845c9
cmd/go/internal/run.runRun({0x880c2a0, 0x8b7c060}, 0x8b66960, {0x5681c028, 0x1, 0x1})
cmd/go/internal/run/run.go:174 +0x7ff fp=0x568cdc90 sp=0x568cdbdc pc=0x85d531f
main.invoke(0x8b66960, {0x5681c020, 0x2, 0x2})
cmd/go/main.go:339 +0x8b7 fp=0x568cde64 sp=0x568cdc90 pc=0x861f2d7
main.main()
cmd/go/main.go:218 +0x1031 fp=0x568cdfac sp=0x568cde64 pc=0x861e371
runtime.main()
runtime/proc.go:283 +0x288 fp=0x568cdff0 sp=0x568cdfac pc=0x8082a88
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x568cdff4 sp=0x568cdff0 pc=0x80beec1
goroutine 2 gp=0x56808488 m=nil [force gc (idle)]:
runtime.gopark(0x8754c50, 0x8b6d9a8, 0x11, 0xa, 0x1)
runtime/proc.go:435 +0xfa fp=0x5683efdc sp=0x5683efc8 pc=0x80b94fa
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.forcegchelper()
runtime/proc.go:348 +0xc7 fp=0x5683eff0 sp=0x5683efdc pc=0x8082de7
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x5683eff4 sp=0x5683eff0 pc=0x80beec1
created by runtime.init.6 in goroutine 1
runtime/proc.go:336 +0x1d
goroutine 3 gp=0x568085a8 m=nil [runnable]:
runtime.goschedIfBusy()
runtime/proc.go:387 +0x36 fp=0x5683f7cc sp=0x5683f7c0 pc=0x8082e96
runtime.bgsweep(0x56824040)
runtime/mgcsweep.go:301 +0x141 fp=0x5683f7e8 sp=0x5683f7cc pc=0x806ef31
runtime.gcenable.gowrap1()
runtime/mgc.go:203 +0x21 fp=0x5683f7f0 sp=0x5683f7e8 pc=0x8060951
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x5683f7f4 sp=0x5683f7f0 pc=0x80beec1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:203 +0x71
goroutine 4 gp=0x568086c8 m=nil [runnable]:
runtime.gopark(0x8754c50, 0x8b6e940, 0x13, 0xe, 0x2)
runtime/proc.go:435 +0xfa fp=0x5683ff68 sp=0x5683ff54 pc=0x80b94fa
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.(*scavengerState).sleep(0x8b6e940, 0x410388c800000000)
runtime/mgcscavenge.go:504 +0x149 fp=0x5683ffcc sp=0x5683ff68 pc=0x806c729
runtime.bgscavenge(0x56824040)
runtime/mgcscavenge.go:662 +0xa0 fp=0x5683ffe8 sp=0x5683ffcc pc=0x806cbb0
runtime.gcenable.gowrap2()
runtime/mgc.go:204 +0x21 fp=0x5683fff0 sp=0x5683ffe8 pc=0x8060911
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x5683fff4 sp=0x5683fff0 pc=0x80beec1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0xb1
goroutine 5 gp=0x56808c68 m=nil [finalizer wait]:
runtime.gopark(0x8754b04, 0x8b7c130, 0x10, 0xa, 0x1)
runtime/proc.go:435 +0xfa fp=0x5683e798 sp=0x5683e784 pc=0x80b94fa
runtime.runfinq()
runtime/mfinal.go:193 +0xf0 fp=0x5683e7f0 sp=0x5683e798 pc=0x805fab0
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x5683e7f4 sp=0x5683e7f0 pc=0x80beec1
created by runtime.createfing in goroutine 1
runtime/mfinal.go:163 +0x5a
goroutine 6 gp=0x56808fc8 m=nil [runnable]:
runtime.gopark(0x8754ae4, 0x56824274, 0xe, 0x7, 0x2)
runtime/proc.go:435 +0xfa fp=0x56840790 sp=0x5684077c pc=0x80b94fa
runtime.chanrecv(0x56824240, 0x0, 0x1)
runtime/chan.go:639 +0x3bd fp=0x568407cc sp=0x56840790 pc=0x8050c0d
runtime.chanrecv1(0x56824240, 0x0)
runtime/chan.go:489 +0x1c fp=0x568407e0 sp=0x568407cc pc=0x805081c
runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
runtime/mgc.go:1731
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
runtime/mgc.go:1734 +0x34 fp=0x568407f0 sp=0x568407e0 pc=0x80640e4
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x568407f4 sp=0x568407f0 pc=0x80beec1
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
runtime/mgc.go:1729 +0x96
goroutine 8 gp=0x56809208 m=nil [runnable]:
syscall.syscall(0x80d9450, 0x3, 0x56d68000, 0x8000)
runtime/sys_openbsd3.go:28 +0x20 fp=0x568cb3bc sp=0x568cb3ac pc=0x80bc800
syscall.read(0x3, {0x56d68000, 0x8000, 0x8000})
syscall/zsyscall_openbsd_386.go:1192 +0x49 fp=0x568cb3e4 sp=0x568cb3bc pc=0x80d7a59
syscall.Read(...)
syscall/syscall_unix.go:183
internal/poll.ignoringEINTRIO(...)
internal/poll/fd_unix.go:745
internal/poll.(*FD).Read(0x56afdb80, {0x56d68000, 0x8000, 0x8000})
internal/poll/fd_unix.go:161 +0x229 fp=0x568cb42c sp=0x568cb3e4 pc=0x81344e9
os.(*File).read(...)
os/file_posix.go:29
os.(*File).Read(0x56afefc0, {0x56d68000, 0x8000, 0x8000})
os/file.go:124 +0x6a fp=0x568cb450 sp=0x568cb42c pc=0x813cf6a
io.copyBuffer({0x3a095550, 0x56826a10}, {0x8808f40, 0x56afefc8}, {0x0, 0x0, 0x0})
io/io.go:429 +0x1e0 fp=0x568cb49c sp=0x568cb450 pc=0x812c090
io.Copy(...)
io/io.go:388
os.genericWriteTo(0x56afefc0, {0x3a095550, 0x56826a10})
os/file.go:275 +0x6f fp=0x568cb4d0 sp=0x568cb49c pc=0x813d80f
os.(*File).WriteTo(0x56afefc0, {0x3a095550, 0x56826a10})
os/file.go:253 +0x61 fp=0x568cb4f0 sp=0x568cb4d0 pc=0x813d741
io.copyBuffer({0x3a095550, 0x56826a10}, {0x8808ec0, 0x56afefc0}, {0x0, 0x0, 0x0})
io/io.go:411 +0x186 fp=0x568cb53c sp=0x568cb4f0 pc=0x812c036
io.Copy(...)
io/io.go:388
cmd/go/internal/cache.FileHash({0x56ae6a00, 0x20})
cmd/go/internal/cache/hash.go:165 +0x26a fp=0x568cb5d8 sp=0x568cb53c pc=0x8228afa
cmd/go/internal/work.(*Builder).fileHash(0x56874230, {0x56ae6a00, 0x20})
cmd/go/internal/work/buildid.go:402 +0x37 fp=0x568cb62c sp=0x568cb5d8 pc=0x8580c17
cmd/go/internal/work.(*Builder).buildActionID(0x56874230, 0x56af2248)
cmd/go/internal/work/exec.go:397 +0x207d fp=0x568cb830 sp=0x568cb62c pc=0x858749d
cmd/go/internal/work.(*Builder).build(0x56874230, {0x880c2a0, 0x8b7c060}, 0x56af2248)
cmd/go/internal/work/exec.go:475 +0x2a2 fp=0x568cbe84 sp=0x568cb830 pc=0x8587d32
cmd/go/internal/work.(*buildActor).Act(0x56ac1998, 0x56874230, {0x880c2a0, 0x8b7c060}, 0x56af2248)
cmd/go/internal/work/action.go:461 +0x33 fp=0x568cbea0 sp=0x568cbe84 pc=0x8577133
cmd/go/internal/work.(*Builder).Do.func3({0x880c2a0, 0x8b7c060}, 0x56af2248)
cmd/go/internal/work/exec.go:153 +0x7fd fp=0x568cbf88 sp=0x568cbea0 pc=0x858510d
cmd/go/internal/work.(*Builder).Do.func4()
cmd/go/internal/work/exec.go:222 +0xae fp=0x568cbff0 sp=0x568cbf88 pc=0x85847ae
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x568cbff4 sp=0x568cbff0 pc=0x80beec1
created by cmd/go/internal/work.(*Builder).Do in goroutine 1
cmd/go/internal/work/exec.go:208 +0x37a
goroutine 13 gp=0x56ac77a8 m=nil [GC worker (idle)]:
runtime.gopark(0x8754b10, 0x56a9fd10, 0x1a, 0xa, 0x0)
runtime/proc.go:435 +0xfa fp=0x56840f8c sp=0x56840f78 pc=0x80b94fa
runtime.gcBgMarkWorker(0x56a15c80)
runtime/mgc.go:1362 +0xeb fp=0x56840fe8 sp=0x56840f8c pc=0x806327b
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1278 +0x21 fp=0x56840ff0 sp=0x56840fe8 pc=0x8063171
runtime.goexit({})
runtime/asm_386.s:1393 +0x1 fp=0x56840ff4 sp=0x56840ff0 pc=0x80beec1
created by runtime.gcBgMarkStartWorkers in goroutine 8
runtime/mgc.go:1278 +0x114
eax 0x4
ebx 0x463c12b8
ecx 0x0
edx 0x0
edi 0x98
esi 0x43de7904
ebp 0x45a5a9f0
esp 0x45a5a9dc
eip 0x26428f5b
eflags 0x247
cs 0x2b
fs 0x5b
gs 0x63
trace_test.go:616: exit status 2
```
CC @mknyszek, @prattmic, @golang/runtime, and @golang/openbsd for visibility | help wanted,OS-OpenBSD,Builders,NeedsFix,compiler/runtime | low | Critical |
2,523,268,063 | kubernetes | Currently a number of files have OWNERS mapped to sig-api-machinery but they should be mapped to new sig-etcd | ### What would you like to be added?
Currently a number of files have OWNERS mapped to sig-api-machinery when they should be mapped to the new sig-etcd. This is currently impacting PR review and triage, as etcd-related PRs get triaged to api-machinery incorrectly. Ex:
- https://github.com/kubernetes/kubernetes/pull/127283
- https://github.com/kubernetes/kubernetes/pull/127285
This issue tracks updating the currently incorrect OWNERS files and removing sig-api-machinery and adding sig-etcd.
The solution here would likely be:
- replacing sig/api-machinery w/ sig/etcd for directories that should be owned by sig/etcd (already done IIUC in https://github.com/kubernetes/kubernetes/pull/125679)
- removing sig/api-machinery from anything that is currently jointly owned which should be fully owned by sig/etcd
- use `filters:` (see https://www.kubernetes.dev/docs/guide/owners/#owners-spec; in-repo OWNERS example here: https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/OWNERS#L6) to map specific etcd files (eg: /hack/libe/etcd.sh) to sig/etcd
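As an illustration of the `filters:` approach, a hypothetical OWNERS fragment is sketched below (the regexes and label split are assumptions for illustration, not a proposed final OWNERS file):

```yaml
# Hypothetical OWNERS fragment: route only etcd-specific files to sig/etcd
# while the rest of the directory stays with sig/api-machinery.
filters:
  ".*":
    labels:
      - sig/api-machinery
  "etcd.*\\.sh$":
    labels:
      - sig/etcd
```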
### Why is this needed?
This is needed because PR review and triage for etcd-related issues are currently mapped to api-machinery incorrectly.
2,523,289,269 | flutter | [video_player] Add video/stream concatenation support | ### Use case
Hey Flutter team,
We would like to play multiple HLS streams in order (as a playlist) with a smooth transition between them. At the moment, Video Player supports playing only one video, and preloading the next stream via another controller still misses a few frames during the switch.
### Proposal
ExoPlayer on Android already supports this via [ConcatenatingMediaSource2](https://developer.android.com/reference/androidx/media3/exoplayer/source/ConcatenatingMediaSource2); it would be nice to have support in Flutter too.
2,523,315,938 | tauri | [bug] availableMonitors returns wrong position on macos~ | ### Describe the bug
Here are my three monitors:
<img width="602" alt="image" src="https://github.com/user-attachments/assets/d8825595-b711-4214-8da3-9d2a055d2dbf">
Here is my code:
```javascript
//vue file
import {
availableMonitors
} from "@tauri-apps/api/window";
onMounted(async () => {
const pm = await availableMonitors();
console.log(pm);
});
```
Here is the results:

### Reproduction
simple code.
### Expected behavior
Is it a bug?
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.6.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.5.1
- pnpm: 9.6.0
- yarn: 1.22.22
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.0-rc.10
- tauri-build 🦀: 2.0.0-rc.9
- wry 🦀: 0.43.1
- tao 🦀: 0.30.0
- @tauri-apps/api : 2.0.0-rc.4
- @tauri-apps/cli : 2.0.0-rc.13
[-] Plugins
- tauri-plugin-dialog 🦀: 2.0.0-rc.5
- @tauri-apps/plugin-dialog : 2.0.0-rc.1
- tauri-plugin-store 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-store : 2.0.0-rc.1
- tauri-plugin-fs 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-fs : 2.0.0-rc.2
- tauri-plugin-shell 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-shell : 2.0.0-rc.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React (Next.js)
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,523,327,452 | pytorch | torch.compile with mode = "max-autotune" breaks when starting from inference_mode | ### 🐛 Describe the bug
Hi, it looks like compiling a model in `inference_mode` can break subsequent compilations of the same model in training mode.
Here is an example:
```python
import torch
layer = torch.nn.Linear(32, 64).bfloat16().cuda()
ex = torch.randn(8, 32).bfloat16().cuda()
layer = torch.compile(layer, mode="max-autotune")
with torch.inference_mode():
for _ in range(3):
res = layer(ex)
print("value:", res.mean().item())
for _ in range(3):
res = layer(ex)
res.mean().backward()
print("value:", res.mean().item())
```
when I run this code, I get the following error:
```
value: 0.072265625
value: 0.072265625
value: 0.072265625
value: 0.072265625
Traceback (most recent call last):
File "test_compile.py", line 14, in <module>
res = layer(ex)
File "/home/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
File "/home/lib/python3.8/site-packages/torch/_dynamo/external_utils.py", line 36, in inner
@functools.wraps(fn)
File "/home/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
return fn(*args, **kwargs)
File "/home/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 987, in forward
return compiled_fn(full_args)
File "/home/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 204, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/home/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/utils.py", line 120, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/utils.py", line 94, in g
return f(*args)
File "/home/lib/python3.8/site-packages/torch/autograd/function.py", line 574, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1451, in forward
fw_outs = call_func_at_runtime_with_args(
File "/home/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/utils.py", line 120, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 451, in wrapper
return compiled_fn(runtime_args)
File "/home/lib/python3.8/site-packages/torch/_inductor/codecache.py", line 1131, in __call__
return self.current_callable(inputs)
File "/home/lib/python3.8/site-packages/torch/_inductor/compile_fx.py", line 993, in run
return compiled_fn(new_inputs)
File "/home/lib/python3.8/site-packages/torch/_inductor/cudagraph_trees.py", line 360, in deferred_cudagraphify
return fn(inputs)
File "/home/lib/python3.8/site-packages/torch/_inductor/compile_fx.py", line 944, in run
return model(new_inputs)
File "/home/lib/python3.8/site-packages/torch/_inductor/cudagraph_trees.py", line 1841, in run
out = self._run(new_inputs, function_id)
File "/home/lib/python3.8/site-packages/torch/_inductor/cudagraph_trees.py", line 1972, in _run
return self.record_function(new_inputs, function_id)
File "/home/lib/python3.8/site-packages/torch/_inductor/cudagraph_trees.py", line 2003, in record_function
node = CUDAGraphNode(
File "/home/lib/python3.8/site-packages/torch/_inductor/cudagraph_trees.py", line 927, in __init__
] = self._record(wrapped_function.model, recording_inputs)
File "/home/lib/python3.8/site-packages/torch/_inductor/cudagraph_trees.py", line 1155, in _record
with preserve_rng_state(), torch.cuda.device(
File "/home/lib/python3.8/site-packages/torch/cuda/graphs.py", line 180, in __enter__
self.cuda_graph.capture_begin(
File "/home/lib/python3.8/site-packages/torch/cuda/graphs.py", line 72, in capture_begin
super().capture_begin(pool=pool, capture_error_mode=capture_error_mode)
RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed.You can make a clone to get a normal tensor before doing inplace update.See https://github.com/pytorch/rfcs/pull/17 for more details.
```
Switching from `torch.inference_mode()` to `torch.no_grad()` seems to fix this issue.
It also looks like this matters only during the first compilation - i.e. the model runs successfully when it starts in `no_grad` and then alternates between training and inference modes.
I tried digging in deeper, but the error occurs after the first training step, once the CUDA graph starts being executed, which makes it challenging to debug...
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.8.20 (default, Sep 9 2024, 22:12:42) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Not affected
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.4.1
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] Could not collect
cc @mcarilli @ezyang @eellison @penguinwu @chauhang | triaged,module: cuda graphs,inference mode,oncall: pt2 | low | Critical |
2,523,338,499 | godot | Imported assets are reimported when opening the project in Godot 4.3-stable, Godot 4.4-dev2, and master | ### Tested versions
Reproduces in
v4.3.stable.official [77dcf97d8]
v4.4.dev2.official [97ef3c837]
v4.4.dev.custom_build [83d54ab2a]
Does not reproduce in
v4.2.2.stable.official [15073afe3]
### System information
Ubuntu 20.04 LTS
### Issue description
See title.
### Steps to reproduce
Godot 4.3-stable:
1. Create a new project with Godot 4.3-stable, notice `icon.svg` is imported
2. Close the editor
3. Edit `icon.svg.import` and change `importer="texture"` to `importer="keep"`
4. bug: open the project, notice that `icon.svg` is imported again
Godot 4.4-dev2 and master:
1. Create a new project with Godot 4.4-dev2, notice `icon.svg` is imported
2. Close the editor
3. bug: open the project, notice that `icon.svg` is imported again
### Minimal reproduction project (MRP)
N/A | bug,topic:import | low | Critical |
2,523,395,163 | vscode | Accurate color values | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Please use color formats in a modern way. Currently the standard color picker shows this:

This was accurate once, but it no longer is. In a modern editor I expect the color value (in this case) to be presented as `rgb(0 189 126 / 0.2)`, or even better, `rgb(0 189 126 / 20%)`.
Rationale:
- Nobody needs the extra commas
- In all browsers, the up and down arrow keys change values by +1 and -1. It is of course possible to use the Alt or Shift keys to change the magnitude, but that is not needed here.
I believe this topic has been raised many times here.
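For reference, the requested serialization can be sketched in a few lines (a hedged illustration of the desired format, not VS Code's implementation):

```python
# Quick sketch of the requested modern color serialization:
# space-separated channels, slash-separated alpha as a percentage.
def fmt_rgb(r: int, g: int, b: int, a: float) -> str:
    return f"rgb({r} {g} {b} / {round(a * 100)}%)"

print(fmt_rgb(0, 189, 126, 0.2))  # rgb(0 189 126 / 20%)
```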
| feature-request,editor-color-picker | low | Major |
2,523,425,284 | rust | Trait associated constants can have defaults while types can't | The following is valid Rust code:
```rust
trait Trait {
const ASSOC: usize = 10;
}
```
When we start allowing associated constants to enter the type system, we should ensure that this does not cause any unwanted interactions. Associated type defaults are not stable, so it is not necessarily true that associated constants "obviously" work as well as associated types, since constants have this additional flexibility.
| T-compiler,A-const-generics,F-min_generic_const_args | low | Minor |
2,523,437,921 | godot | GraphNode connection drawing glitch if control of connected slot is hidden and the GraphNode is resized | ### Tested versions
- Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 Ti (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz (12 Threads)
### Issue description
https://www.youtube.com/watch?v=z2Km4B4-ul4
[](https://youtu.be/z2Km4B4-ul4)
### Steps to reproduce
1. Create 2 GraphNodes
2. Add 1 control to each and enable the slots
3. connect the slots
4. set a control node to "hidden"
5. resize the GraphEdit
6. move the GraphEdit
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,523,502,688 | vscode | Minimap MARK comments are incorrectly positioned on the minimap when Minimap Size is "full" and file is long | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1
- OS Version: Windows 10.0.22631
Steps to Reproduce:
1. In preferences, set Minimap Size to "full"
2. Open a very long file (1500+ lines should be plenty)
3. Add a mark comment to the file (eg. `// MARK: some label here`)
4. Observe the offset in the minimap between the line the mark comment is on, and the highlighted minimap location.
Example, where the actual comment location is circled, but the "Example comment" text is shown on a distant line:

| bug,editor-minimap | low | Critical |
2,523,527,118 | deno | Detect server restart with --watch ? | ### Discussed in https://github.com/denoland/deno/discussions/25601
<div type='discussions-op-text'>
<sup>Originally posted by **alexgleason** September 12, 2024</sup>
I am trying to call `pglite.close()` before the process exits to avoid corrupting my database, but every time I edit my code with `--watch` it gets corrupted and I have to delete/recreate my database.
I used `Deno.addSignalListener('SIGINT', ...)` to clean up (and also added `SIGTERM`, `SIGHUP`, etc), but I can't figure out how to detect **when the server will restart due to a code change when using --watch**. I need to also call my cleanup function in that case. I think? I'm not completely sure if this is the issue or I'm running into a different bug.
Any ideas?</div> | bug,needs investigation,needs discussion,--watch | low | Critical |
2,523,565,532 | PowerToys | PowerToys Workspace can't detect Blender and Toon boom harmony premium | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Opening Blender (installed from the website; runs at user level) or Toon Boom Harmony (this app runs as administrator) in minimized, maximized, and floating window modes.
### ✔️ Expected Behavior
When clicking on "Capture" in the Snapshot Creator, it should capture the mentioned apps' windows.
### ❌ Actual Behavior
It won't capture the mentioned apps' windows in any mode (minimized, maximized, or floating).

### Other Software
Blender 4.0.1 (installed from website)
Toon boom harmony premium 21.1.0 | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,523,613,449 | PowerToys | run s problem | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
PowerToys Run
### Steps to reproduce

### ✔️ Expected Behavior
The cursor should match the first file that appears.
### ❌ Actual Behavior
When using file search, the cursor does not match the first file that appears
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,523,649,518 | pytorch | [Discuss] Enable Windows inductor UTs and fix their timeout issue. | ### 🐛 Describe the bug
I have now enabled Windows inductor on both the `main` branch and the `release/2.5` branch. We also get good quality in model pass rate: https://github.com/pytorch/pytorch/issues/124245#issuecomment-2333511349
But we still have a problem: we can't enable Windows inductor UTs, because they always time out at 210 minutes.
Actually, I tried to enable Windows UTs in PR https://github.com/pytorch/pytorch/pull/134553, and in this PR:
1. enable all dynamo UTs: https://github.com/pytorch/pytorch/pull/134553/files#diff-492e5c4bdfc58c4cb5ecf00770e9c63a0ff31b02bde41d68e4cdc70eaf2f4ac7R23
2. Add Windows test shard to 5 machines.
but it always times out. @chuanqi129 did some research, and his comment is that we need to split the current inductor UTs: https://github.com/pytorch/pytorch/pull/133226#issuecomment-2296861647
I'm not sure how to split the UTs - or could we disable (or reduce) some UTs on Windows?
So I opened this issue to ask other developers for help.
We may need to cherry-pick the solution PR to `release/2.5`.
### Versions
On `main` branch and `release/2.5` branch.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | module: windows,triaged,oncall: pt2,module: inductor | low | Critical |
2,523,677,958 | vscode | Unable to connect to tunnel after code update | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
```
Version: 1.94.0-insider
Commit: 102ff8db3f8dd54027407279ed5cb78e81b4bf19
Date: 2024-09-12T08:48:43.150Z
Browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36 Edg/128.0.0.0
```
- OS Version: code tunnel / arm64
Steps to Reproduce:
1. update cli


2. Unable to connect to tunnel after refresh page

3. Using `code tunnel service install` to re-install still does not work.
4. Running `code tunnel service uninstall` and then `code tunnel service install` works.
| bug,remote-tunnel | low | Critical |
2,523,689,476 | pytorch | Integer overflow while creating nested tensors | ### 🐛 Describe the bug
Hi,
Not sure if I'm using nested tensors incorrectly, but I would like to pad some variable-length sequences and feed the resulting padded tensor into a DataLoader.
```py
torch.nested.to_padded_tensor(
torch.nested.nested_tensor(tensor_list), 0
)
```
This approach was working great while developing a model, but I have been scaling up the input data and was hit with the following issue:
```
----> 1 torch.nested.nested_tensor(tensor_list)
File .venv\lib\site-packages\torch\nested\__init__.py:220, in nested_tensor(tensor_list, dtype, layout, device, requires_grad, pin_memory)
219 if layout == torch.strided:
--> 220 return _nested.nested_tensor(
221 tensor_list,
222 dtype=dtype,
223 device=device,
224 requires_grad=requires_grad,
225 pin_memory=pin_memory)
226 elif layout == torch.jagged:
227 # Need to wrap lists of scalars as tensors
228 list_of_tensors = [t if isinstance(t, Tensor) else torch.as_tensor(t) for t in tensor_list]
RuntimeError: Trying to create tensor with negative dimension -1382983936: [-1382983936]
```
All of the input data is well-formed and properly typed (tensors of size (N, 1280)). The negative dimension given corresponds to the summed number of elements in the `tensor_list` (N items * dim 0 * dim 1) overflowing from a u32 -> i32 cast.
Code to reproduce
```py
import torch
jagged_lengths = torch.randint(512, 1024, (3000,))
if (jagged_lengths.sum().item() * 1280 * 3000) > 2_147_483_647:
tensor_list = []
for length in jagged_lengths:
tensor_list.append(torch.rand((length, 1280)))
torch.nested.nested_tensor(tensor_list)
```
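For what it's worth, the reported negative dimension is consistent with a 32-bit wraparound of the total element count. A minimal, torch-free sketch of that reinterpretation (the exact count below is an assumption chosen for illustration, not taken from the real data):

```python
# Torch-free sketch of the u32 -> i32 wraparound described above.
import struct

def as_int32(n: int) -> int:
    """Reinterpret an unsigned element count as a signed 32-bit integer."""
    return struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))[0]

# Assumed element count: 2_274_987 summed rows * 1_280 columns.
total_elements = 2_911_983_360
print(as_int32(total_elements))  # -1382983936, the dimension in the error
```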
### Versions
torch==2.4.1+cu124
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: nestedtensor | low | Critical |
2,523,698,213 | godot | Texture shader parameter is missing reset button. | ### Tested versions
- latest 4.4
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (RADV NAVI23) - 12th Gen Intel(R) Core(TM) i5-12400F (12 Threads)
### Issue description
When you set a `Texture` in a `StandardMaterial3D`, you get a reset button.

When you set a `Texture` in a `ShaderMaterial` parameter, you don't.

Other (some/most/all?) types of shader params have the correct reset button.
Similar to https://github.com/godotengine/godot/issues/95953
Probably related to this code:
```cpp
bool ShaderMaterial::_property_can_revert(const StringName &p_name) const {
    if (shader.is_valid()) {
        const StringName *pr = remap_cache.getptr(p_name);
        if (pr) {
            Variant default_value = RenderingServer::get_singleton()->shader_get_parameter_default(shader->get_rid(), *pr);
            Variant current_value = get_shader_parameter(*pr);
            return default_value.get_type() != Variant::NIL && default_value != current_value;
        } else if (p_name == "render_priority" || p_name == "next_pass") {
            return true;
        }
    }
    return false;
}
```
Thanks.
### Steps to reproduce
- set texture to shader param in editor
### Minimal reproduction project (MRP)
https://github.com/rakkarage/testtextureshaderparameter | bug,topic:editor,confirmed,topic:shaders | low | Major |
2,523,742,481 | ollama | Windows Portable Mode | I would like to see a Full Portable version of Ollama for Windows, not just having the binary files without running the setup.
My proposal is simply to have the same files as the installer version, and also include a portable.txt file to indicate that it is a portable install and to directly save Ollama settings, history, models etc into a data folder inside the portable build instead of AppData and the User Home folder.
For updates clicking the notification can just open the link to the zip file for manual installation/updates or automatically download and on click open in the default zip program.
| feature request | medium | Major |
2,523,745,406 | svelte | A11y: add attribute hints for `role="tab"` and `role="tabpanel"` | ### Describe the problem
It would be nice to have a11y hints when constructing a tabbed interface via `role="tab"` and `role="tabpanel"`.
### Describe the proposed solution
Complete example: https://svelte.dev/repl/cb3364c7b97842be901b42dd2d597fe5?version=4.2.19
Requirements to satisfy `role="tab"`:
- Tab must include `aria-controls` that match the `id` of its respective tab panel.
- Tab must include `aria-selected` to indicate selected tab.
- Tab must include `onkeydown` event listener such that left and right arrow keys can switch between tabs (up to the developer whether to immediately select tab upon switch); and up and down arrow keys should blur focus and scroll up or down the page respectively.
- If element containing `role="tab"` is inherently non-interactive, it must include necessary attributes for keyboard accessibility or switch to an interactive element such as `<button>` or `<input type="radio" />`
Requirements to satisfy `role="tabpanel"`:
- Tab panel must include `aria-labelledby` to match the `id` of its respective tab.
- Panel must include `tabindex="0"` to ensure it is the next focusable element after tabs.
Reference: https://www.w3.org/WAI/ARIA/apg/patterns/tabs/
### Importance
nice to have | a11y | low | Minor |
2,523,753,412 | tauri | [bug] Can't run tauri-cli in MacOS | ### Describe the bug
```
$ cargo tauri
dyld[66862]: Library not loaded: @rpath/libbz2.1.dylib
Referenced from: <3E6145E4-18DE-3872-A4F4-8F2ED3EF6481> /Users/notsee/.cargo/bin/cargo-tauri
Reason: tried: '/Users/notsee/.rustup/toolchains/stable-aarch64-apple-darwin/lib/libbz2.1.dylib' (no such file), '/Users/notsee/lib/libbz2.1.dylib' (no such file), '/usr/local/lib/libbz2.1.dylib' (no such file), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
[1] 66862 abort cargo tauri
```
and yes, I've followed the [prerequisites for MacOS](https://tauri.app/v1/guides/getting-started/prerequisites#setting-up-macos).
I'm running macOS Sonoma 14.6.1 on a MacBook Air M2.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
$ cargo tauri info
dyld[66886]: Library not loaded: @rpath/libbz2.1.dylib
Referenced from: <3E6145E4-18DE-3872-A4F4-8F2ED3EF6481> /Users/notsee/.cargo/bin/cargo-tauri
Reason: tried: '/Users/notsee/.rustup/toolchains/stable-aarch64-apple-darwin/lib/libbz2.1.dylib' (no such file), '/Users/notsee/lib/libbz2.1.dylib' (no such file), '/usr/local/lib/libbz2.1.dylib' (no such file), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
[1] 66886 abort cargo tauri info
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,523,774,544 | kubernetes | Terminated pod is stuck in preStop hook | ### What happened?
A pod cannot be fully removed even when its deletion grace period has reached 0; it is stuck in its preStop hook.
### What did you expect to happen?
Remove terminated pod
### How can we reproduce it (as minimally and precisely as possible)?
1. deploy a nginx
```
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2024-09-13T01:42:11Z"
  generation: 2
  labels:
    app: test
  name: test
  namespace: default
  resourceVersion: "14464340"
  uid: 4eac36d2-dda1-40b1-9a1a-557e73a8b6f4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: test
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - sleep 10000000
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 300
```
2. delete the pod
```
kubectl delete pod test-54f748667b-26cnm
```
3. delete the pod with a short grace period
```
kubectl delete pod test-54f748667b-26cnm --grace-period=1
```
4. we have to wait 5 minutes (the pod's `terminationGracePeriodSeconds`) before the pod is fully removed, even though step 3 requested a 1-second grace period.
### Anything else we need to know?
If we restart the kubelet, the pod is removed immediately.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
```
</details>
### Cloud provider
<details>
kind
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,priority/important-longterm,triage/accepted | medium | Major |
2,523,780,721 | PowerToys | In the Chinese environment, Command Not Found does not work, and the installation log is garbled | ### Microsoft PowerToys version
0.84.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Command not found
### Steps to reproduce
Open PowerToys and trigger the Command Not Found module; it prompts me to install PowerShell. I click Install, but the installation gets stuck, and the installation log then shows garbled Chinese text. Please fix this as soon as possible.

### ✔️ Expected Behavior
I hope that the Chinese in the installation log will be displayed correctly
### ❌ Actual Behavior
Chinese garbled characters
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,523,800,337 | ollama | Ollama run says "A model with that name already exists" but really it's a casing issue? | ### What is the issue?
I don't know how to explain this exactly, but when I try to run `ollama run Llama3.1` I get the confusing error:
```
Error: a model with that name already exists
```
And it *does* exist — the issue is just that the casing is different (it's `llama3.1`, not `Llama3.1`), which is evidently confusing the engine somehow. If I give the lowercase name it loads normally, and if I give a name that doesn't exist at all I get `Error: pull model manifest: file does not exist`.
This is running on Ubuntu 24.04, x64.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10 | bug | low | Critical |
2,523,824,202 | pytorch | Torch Inductor Windows Path Escape Characters | ### 🐛 Describe the bug
---
### Bug Description
When using `torch.compile` with the Inductor backend on a Windows system, Torch Inductor generates temporary Python files that include Windows-style file paths with unescaped backslashes (`\`). This results in malformed Unicode escape sequences (e.g., `\U`) within string literals, causing Python to raise a `SyntaxError` during the import of these generated files. Consequently, the compilation process fails, preventing the successful execution of the compiled model or function.
**Key Points:**
- **Path Handling Issue:** Temporary files generated by Torch Inductor contain Windows paths with single backslashes, which are incorrectly interpreted as escape characters in Python strings.
- **Resulting Error:** Malformed Unicode escape sequences lead to `SyntaxError`, disrupting the compilation process.
- **Impact:** Prevents the use of `torch.compile` with the Inductor backend on Windows, hindering performance optimizations and deployment workflows.
**Expected Behavior:**
Torch Inductor should correctly handle Windows-style file paths by properly escaping backslashes, using raw strings, or adopting forward slashes to ensure that generated Python code is syntactically valid and executable without errors.
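A minimal sketch of the two escape-safe options, assuming the path is interpolated into a Python string literal in the generated file (`safe_path_literal` is a hypothetical helper name, not an Inductor API):

```python
from pathlib import PureWindowsPath

def safe_path_literal(path: str) -> str:
    # repr() doubles every backslash, so a segment like \Users can never be
    # misread as a truncated \U unicode escape when the generated file is
    # parsed again.
    return repr(path)

win = "C:\\Users\\example\\AppData\\Local\\Temp\\kernel.h"
assert eval(safe_path_literal(win)) == win

# Alternatively, forward slashes are escape-safe in Python strings and are
# also accepted by C/C++ #include directives on Windows:
print(PureWindowsPath(win).as_posix())  # C:/Users/example/AppData/Local/Temp/kernel.h
```

Either approach would keep the `#include` line inside the triple-quoted C++ source from breaking the surrounding Python module.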
---
### Malformed generated code
```python
# AOT ID: ['0_inference']
from ctypes import c_void_p, c_long
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch._inductor.codegen.memory_planning import _align as align

from torch import device, empty_strided
from torch._inductor.async_compile import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
from torch._inductor.codegen.multi_kernel import MultiKernelCall

aten = torch.ops.aten
inductor_ops = torch.ops.inductor
_quantized = torch.ops._quantized
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
empty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu
empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda
reinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor
alloc_from_pool = torch.ops.inductor._alloc_from_pool
async_compile = AsyncCompile()

cpp_fused__to_copy_0 = async_compile.cpp_pybinding(['float*'], '''
#include "C:\Users\sigur\AppData\Local\Temp\torchinductor_sigur\sk\cskh5dx62fglpphcrl6723dnmowdabouerrzy3dmqcngbxwfa7bv.h"
extern "C" void kernel(float* out_ptr0)
{
    #pragma omp parallel num_threads(16)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(long x0=static_cast<long>(0L); x0<static_cast<long>(160L); x0+=static_cast<long>(1L))
            {
                #pragma GCC ivdep
                for(long x1=static_cast<long>(0L); x1<static_cast<long>(475L); x1+=static_cast<long>(1L))
                {
                    auto tmp0 = (-1L)*x0;
                    auto tmp1 = c10::convert<float>(tmp0);
                    auto tmp2 = static_cast<float>(0.00625);
                    auto tmp3 = decltype(tmp1)(tmp1 * tmp2);
                    auto tmp4 = c10::convert<double>(tmp3);
                    auto tmp5 = (-17L) + x1;
                    auto tmp6 = c10::convert<double>(tmp5);
                    auto tmp7 = static_cast<double>(0.0022675736961451248);
                    auto tmp8 = decltype(tmp6)(tmp6 * tmp7);
                    auto tmp9 = decltype(tmp4)(tmp4 + tmp8);
                    auto tmp10 = static_cast<double>(158.4);
                    auto tmp11 = decltype(tmp9)(tmp9 * tmp10);
                    auto tmp12 = static_cast<double>(-6.0);
                    auto tmp13 = max_propagate_nan(tmp11, tmp12);
                    auto tmp14 = static_cast<double>(6.0);
                    auto tmp15 = min_propagate_nan(tmp13, tmp14);
                    auto tmp16 = static_cast<double>(3.141592653589793);
                    auto tmp17 = decltype(tmp15)(tmp15 * tmp16);
                    auto tmp18 = static_cast<double>(0.0);
                    auto tmp19 = tmp17 == tmp18;
                    auto tmp20 = std::sin(tmp17);
                    auto tmp21 = tmp20 / tmp17;
                    auto tmp22 = static_cast<double>(1.0);
                    auto tmp23 = tmp19 ? tmp22 : tmp21;
                    auto tmp24 = static_cast<double>(0.16666666666666666);
                    auto tmp25 = decltype(tmp17)(tmp17 * tmp24);
                    auto tmp26 = static_cast<double>(0.5);
                    auto tmp27 = decltype(tmp25)(tmp25 * tmp26);
                    auto tmp28 = std::cos(tmp27);
                    auto tmp29 = decltype(tmp28)(tmp28 * tmp28);
                    auto tmp30 = static_cast<double>(0.3591836734693878);
                    auto tmp31 = decltype(tmp29)(tmp29 * tmp30);
                    auto tmp32 = decltype(tmp23)(tmp23 * tmp31);
                    auto tmp33 = c10::convert<float>(tmp32);
                    out_ptr0[static_cast<long>(x1 + (475L*x0))] = tmp33;
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

def call(args):
    buf1 = empty_strided_cpu((160, 1, 475), (475, 475, 1), torch.float32)
    cpp_fused__to_copy_0(buf1)
    return (buf1, )

def benchmark_compiled_module(times=10, repeat=10):
    from torch._dynamo.testing import rand_strided
    from torch._inductor.utils import print_performance
    fn = lambda: call([])
    return print_performance(fn, times=times, repeat=repeat)

if __name__ == "__main__":
    from torch._inductor.wrapper_benchmark import compiled_module_main
    compiled_module_main('None', benchmark_compiled_module)
```
### Code that leads to malformed generated code
```python
import os
import argparse
import torch
# import librosa
import time
from scipy.io.wavfile import write
import torch.amp
import torchaudio
from tqdm import tqdm

import utils
from models import SynthesizerTrn
from mel_processing import mel_spectrogram_torch
from speaker_encoder.voice_encoder import SpeakerEncoder

import logging
logging.getLogger('numba').setLevel(logging.WARNING)

# torch.backends.cudnn.benchmark = True
# torch.backends.cudnn.allow_tf32 = True

def load_models(args):
    hps = utils.get_hparams_from_file(args.hpfile)

    print("Loading model...")
    net_g = SynthesizerTrn(
        hps.data.filter_length // 2 + 1,
        hps.train.segment_size // hps.data.hop_length,
        **hps.model).cuda()
    _ = net_g.eval()
    print("Loading checkpoint...")
    _ = utils.load_checkpoint(args.ptfile, net_g, None)
    net_g = torch.compile(net_g)

    print("Loading WavLM for content...")
    cmodel = utils.get_cmodel(0).cuda()
    cmodel = torch.compile(cmodel)

    if hps.model.use_spk:
        print("Loading speaker encoder...")
        smodel = SpeakerEncoder('speaker_encoder/ckpt/pretrained_bak_5805000.pt').cuda()
        smodel = torch.compile(smodel)

    return hps, net_g, cmodel, smodel

@torch.compile
def process_wav(hps, net_g, cmodel, smodel, src, tgt):
    with torch.no_grad():
        wav_tgt, o_sr = torchaudio.load(tgt)
        if o_sr != hps.data.sampling_rate:
            wav_tgt = torchaudio.transforms.Resample(o_sr, hps.data.sampling_rate)(wav_tgt)
        wav_tgt = wav_tgt.squeeze(0).numpy()

        if hps.model.use_spk:
            g_tgt = smodel.embed_utterance(wav_tgt)
            g_tgt = torch.from_numpy(g_tgt).unsqueeze(0).cuda()
        else:
            wav_tgt = torch.from_numpy(wav_tgt).unsqueeze(0).cuda()
            mel_tgt = mel_spectrogram_torch(
                wav_tgt,
                hps.data.filter_length,
                hps.data.n_mel_channels,
                hps.data.sampling_rate,
                hps.data.hop_length,
                hps.data.win_length,
                hps.data.mel_fmin,
                hps.data.mel_fmax
            )

        wav_src, o_sr = torchaudio.load(src)
        if o_sr != hps.data.sampling_rate:
            wav_src = torchaudio.transforms.Resample(o_sr, hps.data.sampling_rate)(wav_src)
        wav_src = wav_src.cuda()
        c = utils.get_content(cmodel, wav_src)

        if hps.model.use_spk:
            audio = net_g.infer(c, g=g_tgt)
        else:
            audio = net_g.infer(c, mel=mel_tgt)
        audio = audio[0][0].data.cpu().float().numpy()
        return audio

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--hpfile", type=str, default="configs/freevc-24.json", help="path to json config file")
    parser.add_argument("--ptfile", type=str, default="checkpoints/freevc-24.pth", help="path to pth file")
    parser.add_argument("--tgt_path", type=str, default="voices/audio7.wav", help="path to target audio file")
    args = parser.parse_args()

    src = "voices/audio.wav"
    tgt = args.tgt_path

    os.makedirs("outputs", exist_ok=True)
    hps, net_g, cmodel, smodel = load_models(args)
    process_wav(hps, net_g, cmodel, smodel, src, tgt)
    start = time.time()
    for i in range(100):
        audio = process_wav(hps, net_g, cmodel, smodel, src, tgt)
    print("Mean time:", (time.time() - start) / 1_000)
    write(os.path.join("outputs", f"out.wav"), 24000, audio)
```
### Error logs
DEBUG:filelock:Attempting to acquire lock 2022744139744 on C:\Users\sigur\AppData\Local\Temp\torchinductor_sigur\locks\co2konkofj5rz6476x7nz6loyxdmxxkh3tc44gkjkzynhrr7w5o5.lock
DEBUG:filelock:Lock 2022744139744 acquired on C:\Users\sigur\AppData\Local\Temp\torchinductor_sigur\locks\co2konkofj5rz6476x7nz6loyxdmxxkh3tc44gkjkzynhrr7w5o5.lock
DEBUG:filelock:Attempting to release lock 2022744139744 on C:\Users\sigur\AppData\Local\Temp\torchinductor_sigur\locks\co2konkofj5rz6476x7nz6loyxdmxxkh3tc44gkjkzynhrr7w5o5.lock
DEBUG:filelock:Lock 2022744139744 released on C:\Users\sigur\AppData\Local\Temp\torchinductor_sigur\locks\co2konkofj5rz6476x7nz6loyxdmxxkh3tc44gkjkzynhrr7w5o5.lock
W0913 03:40:09.245000 68120 torch\_dynamo\repro\after_dynamo.py:110] [6/0] Compiled Fx GraphModule failed. Creating script to minify the error.
W0913 03:40:09.248000 68120 torch\_dynamo\debug_utils.py:277] [6/0] Writing minified repro to:
W0913 03:40:09.248000 68120 torch\_dynamo\debug_utils.py:277] [6/0] C:\Users\sigur\Desktop\shit\FreeVC\torch_compile_debug\run_2024_09_13_03_40_09_247190-pid_68116\minifier\minifier_launcher.py
Traceback (most recent call last):
File "C:\Users\sigur\Desktop\shit\FreeVC\live.py", line 100, in <module>
process_wav(hps, net_g, cmodel, smodel, src, tgt)
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\live.py", line 49, in process_wav
@torch.compile
File "C:\Users\sigur\Desktop\shit\FreeVC\live.py", line 52, in torch_dynamo_resume_in_process_wav_at_52
wav_tgt, o_sr = torchaudio.load(tgt)
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torchaudio\transforms\_transforms.py", line 957, in __init__
kernel, self.width = _get_sinc_resample_kernel(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 948, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 472, in __call__
return _compile(
^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_strobelight\compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1185, in transform_code_object
transformations(instructions, code_options)
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 582, in transform
tracer.run()
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2451, in run
super().run()
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 893, in run
while self.step():
^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 805, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2642, in RETURN_VALUE
self._return(inst)
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2627, in _return
self.output.compile_subgraph(
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\output_graph.py", line 1123, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "C:\Users\sigur\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\output_graph.py", line 1318, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\output_graph.py", line 1409, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\output_graph.py", line 1390, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 107, in __call__
compiled_gm = compiler_fn(copy.deepcopy(gm), example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\__init__.py", line 1951, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\compile_fx.py", line 1505, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\backends\common.py", line 69, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_functorch\aot_autograd.py", line 954, in aot_module_simplified
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_functorch\aot_autograd.py", line 687, in create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 168, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\compile_fx.py", line 1410, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\repro\after_aot.py", line 84, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\debug.py", line 304, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\compile_fx.py", line 527, in compile_fx_inner
compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\compile_fx.py", line 831, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\graph.py", line 1751, in compile_to_fn
return self.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\graph.py", line 1701, in compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\codecache.py", line 3073, in load_by_key_path
mod = _reload_python_module(key, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sigur\Desktop\shit\FreeVC\.venv\Lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 39, in _reload_python_module
raise RuntimeError(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Failed to import C:\Users\sigur\AppData\Local\Temp\torchinductor_sigur\2s\c2swhvevsvcfe3t7py66ebm444p2oq4p5iw2ywffmxngsqmqnok6.py
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 13-14: truncated \UXXXXXXXX escape (c2swhvevsvcfe3t7py66ebm444p2oq4p5iw2ywffmxngsqmqnok6.py, line 30)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
Minifier script written to C:\Users\sigur\Desktop\shit\FreeVC\torch_compile_debug\run_2024_09_13_03_40_09_247190-pid_68116\minifier\minifier_launcher.py. Run this script to find the smallest traced graph which reproduces this error.
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
### Minified repro
```python
from math import inf
import torch
from torch import tensor, device
import torch.fx as fx
import torch._dynamo
from torch._dynamo.testing import rand_strided
from torch._dynamo.debug_utils import run_fwd_maybe_bwd
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config

from torch.nn import *

class Repro(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self):
        arange = torch.arange(-17, 458, dtype = torch.float64, device = device(type='cpu'))
        getitem = arange[(None, None)]; arange = None
        idx = getitem / 441; getitem = None
        arange_1 = torch.arange(0, -160, -1, dtype = None, device = device(type='cpu'))
        getitem_1 = arange_1[(slice(None, None, None), None, None)]; arange_1 = None
        truediv_1 = getitem_1 / 160; getitem_1 = None
        t = truediv_1 + idx; truediv_1 = idx = None
        t *= 158.4; t_1 = t; t = None
        t_2 = t_1.clamp_(-6, 6); t_1 = None
        mul = t_2 * 3.141592653589793
        truediv_2 = mul / 6; mul = None
        truediv_3 = truediv_2 / 2; truediv_2 = None
        cos = torch.cos(truediv_3); truediv_3 = None
        window = cos ** 2; cos = None
        t_2 *= 3.141592653589793; t_3 = t_2; t_2 = None
        eq = t_3 == 0
        tensor = torch.tensor(1.0)
        to = tensor.to(t_3); tensor = None
        sin = t_3.sin()
        truediv_4 = sin / t_3; sin = t_3 = None
        kernels = torch.where(eq, to, truediv_4); eq = to = truediv_4 = None
        mul_1 = window * 0.3591836734693878; window = None
        kernels *= mul_1; kernels_1 = kernels; kernels = mul_1 = None
        kernels_2 = kernels_1.to(dtype = torch.float32); kernels_1 = None
        return (kernels_2,)

mod = Repro()

def load_args(reader):
    load_args._version = 0

if __name__ == '__main__':
    from torch._dynamo.repro.after_dynamo import run_repro
    run_repro(mod, load_args, accuracy=False, command='minify',
        save_dir='C:\\Users\\sigur\\Desktop\\shit\\FreeVC\\torch_compile_debug\\run_2024_09_13_03_40_09_247190-pid_68116\\minifier\\checkpoints', autocast=False, backend='inductor')
```
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: (x86_64-win32-seh-rev2, Built by MinGW-W64 project) 12.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3401
DeviceID=CPU0
Family=107
L2CacheSize=8192
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3401
Name=AMD Ryzen 9 5950X 16-Core Processor
ProcessorType=3
Revision=8450
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.4.1+cu124
[pip3] torchaudio==2.4.1+cu124
[pip3] torchvision==0.19.1+cu124
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | module: windows,triaged,oncall: pt2,module: inductor | low | Critical |
2,523,859,270 | godot | Adding/Moving navigation links needs to be optimized | ### Tested versions
4.3-stable
### System information
Windows 11
### Issue description
Related to: https://github.com/godotengine/godot/issues/96483
Navigation links also suffer from having no form of BVH available during `sync`. All faces of all polygons have to be scanned not once but twice, because for some reason it first scans all polygons for the start position, and then for the end position. Merging these two loops into one already saves half the time spent here without any additional effort.
This also causes issues in the editor when moving navigation links around, making it far more painful than it needs to be — especially because the `NavigationLink3D` recomputes its connections each physics tick while being moved. Again, there is an easy optimization: defer updating the position in the `NavigationServer3D` until the drag operation has completed.
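The two full scans described above can be collapsed into a single pass over the faces. An illustrative sketch in plain Python over point tuples (not the engine's actual C++ — face centroids stand in for polygon faces):

```python
import math

def closest_to_both(points, start, end):
    """Single sweep: find the nearest point to `start` AND to `end` in one
    loop, instead of scanning the whole list once per endpoint."""
    best_s = best_e = None
    ds = de = math.inf
    for p in points:
        d1 = math.dist(p, start)
        if d1 < ds:
            ds, best_s = d1, p
        d2 = math.dist(p, end)
        if d2 < de:
            de, best_e = d2, p
    return best_s, best_e

pts = [(0, 0), (5, 5), (10, 0)]
print(closest_to_both(pts, (1, 1), (9, 1)))  # ((0, 0), (10, 0))
```

Same asymptotic cost, but half the iterations and memory traffic of the current two-pass version; a BVH would reduce it further still.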
### Steps to reproduce
Create a big navmesh with a lot of navigation links.
### Minimal reproduction project (MRP)
- | enhancement,discussion,topic:navigation,performance | low | Minor |
2,523,859,976 | neovim | :terminal close delayed on Windows | ## Problem
This problem results in a poor experience with terminal plugins for file navigation (such as fzf-lua, yazi.nvim). The problem manifests as a delay before jumping to the selected file, as shown in the video:
https://github.com/user-attachments/assets/9912ebee-8c76-4702-8160-dd73fe02d811
Here is the effect of the same configuration in my WSL:
https://github.com/user-attachments/assets/ebb2d432-0324-4f28-909f-4efa5e746e14
In WSL, there is no delay when switching files, whereas in Windows, there is a noticeable delay, sometimes close to one second. The same issue occurs with all terminals within nvim, such as fzf-lua, lazygit, and the terminal emulator inside nvim, where delays appear when closing them.
## Steps to reproduce
1. nvim --clean
2. Open terminal: :terminal
3. Enter insert mode in terminal: i
4. Type "exit" command in terminal: exit
5. Press enter key.
6. Wait about 0.3 seconds (in WSL this step is almost instantaneous).
7. See [Process exited 0].
## Expected behavior
The terminal should close immediately, the same as it does on Linux.
## Versions
```
NVIM v0.10.1
Build type: Release
LuaJIT 2.1.1713484068
Run "nvim -V1 -v" for more info
```
## Vim (not Nvim) behaves the same?
no (substituting :term for :terminal), 9.1.0718
## Operating system/version
Windows 11 pro x86_64 23H2 22631.4112
## Terminal name/version
Windows Terminal 1.21.2361.0
## $TERM environment variable
TERM is absent from environment. CMD, Powershell core, and nushell all have this problem.
## Installation
winget | bug,platform:windows,job-control,terminal | low | Major |
2,523,884,986 | ui | [bug]: npx shadcn@latest add toast creating hooks folder outside components | ### Describe the bug
PS C:\Users\HP\Desktop\office\package-maker> npx shadcn@latest add toast
Need to install the following packages:
shadcn@2.0.6
Ok to proceed? (y) y
✔ Checking registry.
✔ Installing dependencies.
✔ Created 3 files:
- components\ui\toast.tsx
- hooks\use-toast.ts
- components\ui\toaster.tsx
- ----------------------------------------
Failed to compile
Next.js (14.2.5) out of date [(learn more)](https://nextjs.org/docs/messages/version-staleness)
./components/ui/toaster.tsx:3:1
Module not found: Can't resolve '@/components/hooks/use-toast'
1 | "use client"
2 |
> 3 | import { useToast } from "@/components/hooks/use-toast"
| ^
4 | import {
5 | Toast,
6 | ToastClose,
https://nextjs.org/docs/messages/module-not-found
### Affected component/components
Toast
### How to reproduce
1. npx shadcn@latest add toast
### Codesandbox/StackBlitz link
yes
### Logs
```bash
no
```
### System Info
```bash
windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,523,888,131 | yt-dlp | [chzzk:video] HTTP Error 400: Bad Request | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
South Korea
### Provide a description that is worded well enough to be understood
```
./yt-dlp -vU --print-traffic https://chzzk.naver.com/video/3420720
```
Trying to download any VOD fails with `yt_dlp.networking.exceptions.HTTPError: HTTP Error 400: Bad Request`
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--print-traffic', 'https://chzzk.naver.com/video/3420720']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.09.08.232909 from yt-dlp/yt-dlp-nightly-builds [d1c4d88b2] (zip)
[debug] Python 3.12.5 (CPython x86_64 64bit) - Linux-6.9.11-amd64-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2-3 (setts), ffprobe 7.0.2-3
[debug] Optional libraries: Cryptodome-3.20.0, certifi-2024.06.02, requests-2.32.3, sqlite3-3.46.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1832 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
director: Handler preferences for this request: urllib=0, requests=100
director: Checking if "requests" supports this request.
director: Sending request via "requests"
requests: Starting new HTTPS connection (1): api.github.com:443
send: b'GET /repos/yt-dlp/yt-dlp-nightly-builds/releases/latest HTTP/1.1\r\nHost: api.github.com\r\nConnection: keep-alive\r\nUser-Agent: yt-dlp\r\nAccept: application/vnd.github+json\r\nAccept-Language: en-us,en;q=0.5\r\nSec-Fetch-Mode: navigate\r\nX-Github-Api-Version: 2022-11-28\r\nAccept-Encoding: gzip, deflate\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Date: Fri, 13 Sep 2024 05:05:35 GMT
header: Content-Type: application/json; charset=utf-8
header: Cache-Control: public, max-age=60, s-maxage=60
header: Vary: Accept,Accept-Encoding, Accept, X-Requested-With
header: ETag: W/"4a777ebe329bea01e82fc28d674403b5599f2f2bc6f2c898dac538ae2d4f9ede"
header: Last-Modified: Sun, 08 Sep 2024 23:36:14 GMT
header: X-GitHub-Media-Type: github.v3; format=json
header: x-github-api-version-selected: 2022-11-28
header: Access-Control-Expose-Headers: ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset
header: Access-Control-Allow-Origin: *
header: Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
header: X-Frame-Options: deny
header: X-Content-Type-Options: nosniff
header: X-XSS-Protection: 0
header: Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
header: Content-Security-Policy: default-src 'none'
header: Content-Encoding: gzip
header: Server: github.com
header: X-RateLimit-Limit: 60
header: X-RateLimit-Remaining: 57
header: X-RateLimit-Reset: 1726207513
header: X-RateLimit-Resource: core
header: X-RateLimit-Used: 3
header: Accept-Ranges: bytes
header: Transfer-Encoding: chunked
header: X-GitHub-Request-Id: 906A:1B659A:10F23E1:125157B:66E3C81F
Latest version: nightly@2024.09.08.232909 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.09.08.232909 from yt-dlp/yt-dlp-nightly-builds)
[chzzk:video] Extracting URL: https://chzzk.naver.com/video/3420720
[chzzk:video] 3420720: Downloading video info
director: Handler preferences for this request: urllib=0, requests=100
director: Checking if "requests" supports this request.
director: Sending request via "requests"
requests: Starting new HTTPS connection (1): api.chzzk.naver.com:443
send: b'GET /service/v3/videos/3420720 HTTP/1.1\r\nHost: api.chzzk.naver.com\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.17 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nSec-Fetch-Mode: navigate\r\nAccept-Encoding: gzip, deflate\r\n\r\n'
reply: 'HTTP/1.1 200 \r\n'
header: Date: Fri, 13 Sep 2024 05:05:35 GMT
header: content-type: application/json
header: transfer-encoding: chunked
header: vary: Origin
header: vary: Access-Control-Request-Method
header: vary: Access-Control-Request-Headers
header: x-content-type-options: nosniff
header: x-xss-protection: 0
header: cache-control: no-cache, no-store, max-age=0, must-revalidate
header: pragma: no-cache
header: expires: 0
header: x-frame-options: DENY
header: referrer-policy: unsafe-url
header: server: nfront
[chzzk:video] 3420720: Downloading video playback
director: Handler preferences for this request: urllib=0, requests=100
director: Checking if "requests" supports this request.
director: Sending request via "requests"
requests: Starting new HTTPS connection (1): apis.naver.com:443
send: b'GET /neonplayer/vodplay/v1/playback/None?key=None&env=real&lc=en_US&cpl=en_US HTTP/1.1\r\nHost: apis.naver.com\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.17 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nSec-Fetch-Mode: navigate\r\nAccept-Encoding: gzip, deflate\r\n\r\n'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Server: nginx
header: Date: Fri, 13 Sep 2024 05:05:35 GMT
header: Content-Type: application/xhtml+xml
header: Transfer-Encoding: chunked
header: Connection: keep-alive
header: Keep-Alive: timeout=5
header: apigw-uuid: 3751f7dd-9b23-4900-bc4a-0c83c441a252
header: set-cookie: JSESSIONID=C4997C8E5D3CAF364921DD87671C1718; Path=/; HttpOnly
header: server-timing: total;desc="total duration of the request";dur=3.0
header: access-control-allow-credentials: true
header: referrer-policy: unsafe-url
header: apigw-error: 084
ERROR: [chzzk:video] 3420720: Unable to download video playback: HTTP Error 400: Bad Request (caused by <HTTPError 400: Bad Request>)
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/chzzk.py", line 149, in _real_extract
formats, subtitles = self._extract_mpd_formats_and_subtitles(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 2612, in _extract_mpd_formats_and_subtitles
periods = self._extract_mpd_periods(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 2622, in _extract_mpd_periods
res = self._download_xml_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 1099, in download_handle
res = self._download_webpage_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 960, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 909, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/extractor/common.py", line 896, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/YoutubeDL.py", line 4165, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/home/hibot/radiyudown/./yt-dlp/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 400: Bad Request
```
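Note the failing request above targets `/neonplayer/vodplay/v1/playback/None?key=None` — both the path segment and the query value are literally the string `None`, which is what naive string interpolation produces when the expected fields are missing from the API response. A minimal illustration (the field names `videoId` and `inKey` are assumptions for the sketch, not necessarily yt-dlp's actual code):

```python
def build_playback_url(video_info: dict) -> str:
    # If the API response no longer carries these keys (e.g. after a
    # schema change such as v2 -> v3), dict.get() returns None and the
    # formatted URL embeds the literal string "None".
    video_id = video_info.get("videoId")
    key = video_info.get("inKey")
    return f"https://apis.naver.com/neonplayer/vodplay/v1/playback/{video_id}?key={key}"

url = build_playback_url({})  # fields missing -> malformed URL
print(url)
```

This matches the malformed URL visible in the `--print-traffic` output, suggesting the extractor is reading fields that the v3 endpoint no longer returns.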
| NSFW,site-bug | low | Critical |
2,523,912,567 | ollama | Isn't it time to move onto Omni models? | There is a model that I found today called
*LLaMA 3.1 8B Omni*. It is a speech-to-speech model with very low latency, which makes for a great experience with local models.
Ollama doesn't currently support such models. VLMs are already supported, but having these Omni models on your local device would be 🤌🏻 too good to pass up.
HF reference for llama 3.1 8b Omni:
https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni
Looking forward to @ollama team's implementation
Thanks🙏🏻 | feature request | low | Minor |
2,523,921,455 | vscode | Menu Bar should become compact when not enough space | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I would like to be able to turn off the menu bar folding feature introduced in version 1.70.
Visual Studio Code July 2022
https://code.visualstudio.com/updates/v1_70#_easier-title-bar-customization
When using a custom title bar, this [menu bar folding] feature causes the menu to collapse depending on what is displayed in the title bar.
Because the menu collapses when the window is resized, reaching a given menu item can take two or three clicks depending on subtle differences in window size, which I would like to avoid.
In my environment, the menu collapses even though the title is displayed in the center of the custom title bar and a blank space still exists on the right side.
It would be nice to have the option to switch to the hamburger menu button or not when the custom title bar menu is displayed.
Also, the menu bar collapses into a three-dot ('...') button even though there is still a margin to the right of the title bar; I would like to see this collapsing behavior improved.
It would also be nice to have an option to right-align the centered title bar content (the Window: Title value and the Command Center).
* I am not good at English, so sorry if the meaning doesn't come across clearly; I am using machine translation.
| feature-request,titlebar,menus | low | Minor |
2,523,940,488 | ollama | Support Google's new "DataGemma" model | Google just released DataGemma, a new model that is grounded in real-world data:
https://blog.google/technology/ai/google-datagemma-ai-llm/
https://huggingface.co/collections/google/datagemma-release-66df7636084d2b150a4e6643 | model request | low | Minor |
2,523,953,527 | bitcoin | Intermittent failure in tool_wallet.py in self.assert_tool_output('', '-wallet=salvage', 'salvage') : assert_equal(p.poll(), 0) ; AssertionError: not(3221226505 == 0) | ERROR: type should be string, got "https://github.com/maflcko/bitcoin-core-with-ci/actions/runs/10833872949/job/30079686125#step:12:1068\r\n\r\n\r\n```\r\n test 2024-09-12T23:30:54.661000Z TestFramework (INFO): Check salvage \r\n\r\n...\r\n\r\n node0 2024-09-12T23:30:55.537324Z [shutoff] [D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\src\\wallet\\bdb.cpp:608] [Flush] [walletdb] BerkeleyEnvironment::Flush: [D:\\a\\_temp\\test_runner_₿_🏃_20240912_231657\\tool_wallet_222\\node0\\regtest\\wallets\\salvage] Flush(false) database not started \r\n node0 2024-09-12T23:30:55.537367Z [shutoff] [D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\src\\wallet/wallet.h:932] [WalletLogPrintf] [default wallet] Releasing wallet .. \r\n node0 2024-09-12T23:30:55.537380Z [shutoff] [D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\src\\wallet\\bdb.cpp:608] [Flush] [walletdb] BerkeleyEnvironment::Flush: [D:\\a\\_temp\\test_runner_₿_🏃_20240912_231657\\tool_wallet_222\\node0\\regtest\\wallets] Flush(false) database not started \r\n node0 2024-09-12T23:30:55.538987Z [shutoff] [D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\src\\init.cpp:389] [Shutdown] Shutdown: done \r\n test 2024-09-12T23:30:55.592000Z TestFramework.node0 (DEBUG): Node stopped \r\n test 2024-09-12T23:30:56.734000Z TestFramework (ERROR): Assertion failed \r\n Traceback (most recent call last):\r\n File \"D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\build\\test\\functional\\test_framework\\test_framework.py\", line 132, in main\r\n self.run_test()\r\n File \"D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\build\\test\\functional\\tool_wallet.py\", line 558, in run_test\r\n self.test_salvage()\r\n File \"D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\build\\test\\functional\\tool_wallet.py\", line 326, in 
test_salvage\r\n self.assert_tool_output('', '-wallet=salvage', 'salvage')\r\n File \"D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\build\\test\\functional\\tool_wallet.py\", line 65, in assert_tool_output\r\n assert_equal(p.poll(), 0)\r\n File \"D:\\a\\bitcoin-core-with-ci\\bitcoin-core-with-ci\\build\\test\\functional\\test_framework\\util.py\", line 77, in assert_equal\r\n raise AssertionError(\"not(%s)\" % \" == \".join(str(arg) for arg in (thing1, thing2) + args))\r\n AssertionError: not(3221226505 == 0)" | Wallet,Windows,CI failed | low | Critical |
2,523,972,381 | angular | i18n extracts messages of own componentlib node module but isn't translating them | ### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
No
### Description
I have a main app with i18n support, and this app uses components from a component library that I created myself. The component library is packaged with ng-packagr and installed as a node module in the main app. The problem: messages from the component library's components get extracted into messages.xlf (German) and messages.en.xlf just like the main app's messages, but the German version is always displayed for the library components, even though the main app's messages are shown in German or English depending on which build I start. Since extraction works fine, I expected the component library messages to be translated just like the main app messages.
## Example component in componentlib
`<p-button i18n-label="@@pl.fileUploadLabel" i18n class="mt-4 mb-0" label="Noch ein Label">Noch ein Text</p-button>`
## Translation in main-app
```xml
<trans-unit id="pl.fileUploadLabel" datatype="html">
  <source>Noch ein Label</source>
  <target state="new">Another Label</target>
</trans-unit>
<trans-unit id="116285064937012435" datatype="html">
  <source>Noch ein Text</source>
  <target state="new">Another Text</target>
</trans-unit>
```
## Usage in main-app
```ts
import { ExampleComponent } from '@pl/pattern-library';

@Component({
  selector: 'example-dialog',
  standalone: true,
  templateUrl: './example-dialog.component.html',
  styleUrl: './example-dialog.component.scss',
  changeDetection: ChangeDetectionStrategy.OnPush,
  imports: [
    ExampleComponent,
    // ... Other components are not relevant for this example
  ],
})
```
### in .html
`<pl-example-component></pl-example-component>`
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
The component library components are always displayed in the German version. There is no other error displayed.
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
## main-app project.json (both projects are setup with nx)
{
"name": "main-app",
"$schema": "node_modules/nx/schemas/project-schema.json",
"projectType": "application",
"sourceRoot": "src",
"prefix": "main-app",
"i18n": {
"sourceLocale": "de",
"locales": {
"en": "src/locale/messages.en.xlf"
}
},
"generators": {
"@schematics/angular:component": {
"style": "scss"
}
},
"targets": {
"build": {
"executor": "@angular-devkit/build-angular:application",
"options": {
"outputPath": "dist/main-app",
"index": "src/index.html",
"browser": "src/main.ts",
"polyfills": [
"zone.js"
],
"tsConfig": "tsconfig.app.json",
"inlineStyleLanguage": "scss",
"assets": [
"src/favicon.ico",
"src/assets"
],
"styles": [
"src/styles.scss"
],
"scripts": [],
"localize": true,
"i18nMissingTranslation": "warning"
},
"configurations": {
"production": {
"fileReplacements": [
{
"replace": "libs/shared/environment/src/lib/environments/environment.ts",
"with": "libs/shared/environment/src/lib/environments/environment.prod.ts"
}
],
"outputHashing": "all"
},
"development": {
"optimization": false,
"extractLicenses": false,
"sourceMap": true
},
"de": {
"localize": [
"de"
]
},
"en": {
"localize": [
"en"
]
}
},
"defaultConfiguration": "production"
},
"serve": {
"executor": "@angular-devkit/build-angular:dev-server",
"configurations": {
"production": {
"buildTarget": "main-app:build:production",
"fileReplacements": [
{
"replace": "libs/shared/environment/src/lib/environments/environment.ts",
"with": "libs/shared/environment/src/lib/environments/environment.prod.ts"
}
]
},
"development-de": {
"buildTarget": "main-app:build:development,de"
},
"development-en": {
"buildTarget": "main-app:build:development,en"
}
},
"defaultConfiguration": "development"
},
"extract-i18n": {
"executor": "ng-extract-i18n-merge:ng-extract-i18n-merge",
"options": {
"buildTarget": "main-app:build",
"format": "xlf",
"outputPath": "src/locale",
"targetFiles": [
"messages.en.xlf"
]
}
}
}
## componentlib project.json (both projects are setup with nx)
{
"name": "pattern-library",
"$schema": "../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "libs/pattern-library/src",
"prefix": "pl",
"projectType": "library",
"tags": [],
"targets": {
"build": {
"executor": "@nx/angular:package",
"outputs": [
"{workspaceRoot}/dist/{projectRoot}"
],
"options": {
"project": "libs/pattern-library/ng-package.json"
},
"configurations": {
"production": {
"tsConfig": "libs/pattern-library/tsconfig.lib.prod.json"
},
"development": {
"tsConfig": "libs/pattern-library/tsconfig.lib.json"
}
},
"defaultConfiguration": "production"
},
"test": {
"executor": "@nx/jest:jest",
"outputs": [
"{workspaceRoot}/coverage/{projectRoot}"
],
"options": {
"jestConfig": "libs/pattern-library/jest.config.ts"
}
},
"lint": {
"executor": "@nx/eslint:lint",
"outputs": [
"{options.outputFile}"
],
"options": {
"lintFilePatterns": [
"libs/pattern-library/**/*.ts",
"libs/pattern-library/**/*.html",
"libs/pattern-library/package.json"
]
}
}
}
}
```
### Anything else?
_No response_ | area: i18n | low | Critical |
2,523,987,049 | godot | `Input.is_mouse_button_pressed` stops updating when focus is lost to another window and does not recover when focus returns | ### Tested versions
Reproducible in 4.3-stable for Windows. "Godot Engine v4.3.stable.official.77dcf97d8"
### System information
Windows 10 - Godot v4.3.stable - Compatibility (OpenGL API 3.3.0 NVIDIA 560.81 ) - Using Device: NVIDIA - NVIDIA GeForce RTX 3070
### Issue description
When using `Input.is_mouse_button_pressed` to detect mouse clicks, the state does not update correctly if the user releases the mouse button after the game window loses focus. Specifically, if the mouse button is pressed and the window loses focus (e.g., via ALT+TAB), and the button is then released while another window has focus, `Input.is_mouse_button_pressed` continues to return true **even after focus is returned** to the game window.
The expected behavior is for `Input.is_mouse_button_pressed` to return false once the mouse button is released, regardless of the window's focus. If the intended behavior is to avoid updating `is_mouse_button_pressed` while the window is not focused, it should at least **update correctly when the window regains focus**. However, I believe that `Input.is_mouse_button_pressed` should accurately reflect the state of the mouse button press, regardless of the game window's focus state.
Recording of the bug reproduction in the video:
https://github.com/user-attachments/assets/2f37aa4e-fe6e-4363-8126-fcc9207b5b73
### Steps to reproduce
1. **Create a new Godot project** or open an existing one.
2. **Set up a scene** that relies on `Input.is_mouse_button_pressed` to detect mouse clicks.
- Eg.: Add a Label node and attach the following script to it:
```gdscript
extends Label
func _process(_delta: float) -> void:
text = str("is_mouse_button_pressed: ", Input.is_mouse_button_pressed(MOUSE_BUTTON_LEFT))
print("is_mouse_button_pressed: ", Input.is_mouse_button_pressed(MOUSE_BUTTON_LEFT))
```
3. **Prepare another window** (such as a maximized "Windows Explorer" window) capable of overlapping your game window.
4. **Run the project** in the Godot editor or export it as a standalone application.
5. **Click** with the left mouse button within the game window.
6. **Keep** the left mouse button pressed and observe the label printing `true`.
7. While still holding down the left mouse button, **press `ALT+TAB`** to switch focus to another window (e.g., a browser or file explorer).
8. Observe the console, if possible, still printing `true`. You can also use Windows window preview feature to check the game window label without giving it focus.
9. **Release the left mouse button** while the focus is on the other window.
10. Observe that the label continues printing `true`. Here, we would expect the label to print `false` since the mouse button was released.
11. **Press `ALT+TAB`** again to return focus to the game window **without clicking**.
12. Observe the label still printing `true`. Even after regaining focus, Godot does not update the `is_mouse_button_pressed` state.
13. **Click** again within the game window. Now, `is_mouse_button_pressed` will be properly updated while the game is in focus.
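The stale-state behaviour in the steps above can be modelled outside Godot: a pressed flag is only cleared by release events delivered while focused, so a release during unfocus is lost, and clearing the flag on focus regained would recover. This is an illustrative Python sketch of the reported behaviour, not engine code:

```python
class MouseState:
    """Toy model of a pressed-button tracker that drops events while unfocused."""

    def __init__(self, clear_on_focus_in: bool = False):
        self.pressed = False
        self.focused = True
        self.clear_on_focus_in = clear_on_focus_in  # candidate recovery fix

    def press(self):
        if self.focused:
            self.pressed = True

    def release(self):
        # Models the bug: release events are not processed while unfocused.
        if self.focused:
            self.pressed = False

    def focus_out(self):
        self.focused = False

    def focus_in(self):
        self.focused = True
        if self.clear_on_focus_in:
            # Recovery: reset stale pressed state when focus returns.
            self.pressed = False

buggy = MouseState()
buggy.press(); buggy.focus_out(); buggy.release(); buggy.focus_in()
print(buggy.pressed)   # stays True -> stale state, matching the report

fixed = MouseState(clear_on_focus_in=True)
fixed.press(); fixed.focus_out(); fixed.release(); fixed.focus_in()
print(fixed.pressed)   # False -> state recovered on focus-in
```

Even just resetting the state on focus-in (the `clear_on_focus_in` branch) would address the "does not recover when focus returns" half of the issue, though ideally the release should be observed directly.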
### Minimal reproduction project (MRP)
[is_mouse_button_pressed_bug.zip](https://github.com/user-attachments/files/16989538/is_mouse_button_pressed_bug.zip)
| bug,platform:windows,needs testing,topic:input | low | Critical |
2,523,992,236 | vscode | SCM Graph - Make 'Auto' mode select multiple remotes |
Type: <b>Feature Request</b>
As a contributor to VS Code I have this setup for my git remotes:
```
PS C:\Users\JohnM\Documents\GitHub\gjsjohnmurray\vscode> git remote -v
origin https://github.com/gjsjohnmurray/vscode.git (fetch)
origin https://github.com/gjsjohnmurray/vscode.git (push)
upstream https://github.com/microsoft/vscode.git (fetch)
upstream https://github.com/microsoft/vscode.git (push)
```
I would like 'Auto' mode of the SCM Graph to show both 'origin' and 'upstream'. Currently it only shows 'origin', so I have to make the selection manually. This will become more bearable once the graph persists its selection (see another issue), but I still think it'd be better for 'Auto' to do this automatically.
VS Code version: Code - Insiders 1.94.0-insider (102ff8db3f8dd54027407279ed5cb78e81b4bf19, 2024-09-12T08:48:43.150Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | feature-request,scm | low | Minor |
2,524,028,335 | go | runtime: panic not generating a correct backtrace stack while crashing in cgo on ARM64 platform | ### Go version
go version go1.22.6 linux/amd64 (cross compile to arm64 with CGO_ENABLED=0)
### Output of `go env` in your module/workspace:
```shell
// cross compile
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/opt/xxx/.cache/go-build'
GOENV='/opt/xxx/.config/go/env'
GOEXE=''
GOEXPERIMENT='arenas'
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/opt/xxx/gopath/pkg/mod'
GONOPROXY='xxx'
GONOSUMDB='xxx'
GOOS='linux'
GOPATH='/opt/xxx/gopath'
GOPRIVATE='xxx'
GOPROXY='xxx'
GOROOT='/opt/xxx/go'
GOSUMDB='sum.golang.google.cn'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/xxx/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/opt/xxx/inception/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build200162598=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I'm writing a Go+cgo program and testing deliberately broken code in cgo to check whether a crash produces the correct backtrace.
Here is my code:
`test.c`:
```
#include <stdio.h>
#include <stddef.h>
void test()
{
int* ptr = NULL;
*ptr = 1024;
}
void trigger_crash()
{
printf("hello world\n");
test();
}
```
`test.h`:
```
#ifndef FDDE6B57_4166_4D0B_9BED_C9BF03D209B8
#define FDDE6B57_4166_4D0B_9BED_C9BF03D209B8
void trigger_crash();
#endif /* FDDE6B57_4166_4D0B_9BED_C9BF03D209B8 */
```
`main.go`:
```
package main
/*
#include <test.h>
*/
import "C"
import (
"fmt"
"os"
"os/signal"
"runtime/debug"
"syscall"
)
func enableCore() {
debug.SetTraceback("crash")
var lim syscall.Rlimit
err := syscall.Getrlimit(syscall.RLIMIT_CORE, &lim)
if err != nil {
panic(fmt.Sprintf("error getting rlimit: %v", err))
}
lim.Cur = lim.Max
fmt.Fprintf(os.Stderr, "Setting RLIMIT_CORE = %+#v\n", lim)
err = syscall.Setrlimit(syscall.RLIMIT_CORE, &lim)
if err != nil {
panic(fmt.Sprintf("error setting rlimit: %v", err))
}
signal.Ignore(syscall.SIGABRT)
}
func main() {
enableCore()
C.trigger_crash()
}
```
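For reference, the `enableCore` helper above just raises the soft `RLIMIT_CORE` to the hard limit (so the kernel will write a core file on crash) and opts into `debug.SetTraceback("crash")`. The rlimit part is the same dance in any language — a rough Python equivalent (illustrative only; uses the stdlib `resource` module, Unix-only):

```python
import resource

def enable_core_dumps():
    # Raise the soft core-file size limit to the hard limit so that
    # a crashing process actually produces a core dump.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_CORE)

print(enable_core_dumps())
```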
### What did you see happen?
I cannot get the C stack via gdb on arm64.
```
$ gdb -nx -batch -ex bt cgo-crash /var/core/cgo-crash.1724899484.2105538.core
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
Core was generated by `./cgo-crash'.
Program terminated with signal SIGABRT, Aborted.
#0 runtime.raise () at /opt/xxx/go/src/runtime/sys_linux_arm64.s:158
158 /opt/xxx/go/src/runtime/sys_linux_arm64.s: No such file or directory.
[Current thread is 1 (Thread 0x7f5b7fe1d0 (LWP 2105542))]
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /home/admin/cgo-crash.
Use `info auto-load python-scripts [REGEXP]' to list them.
#0 runtime.raise () at /opt/xxx/go/src/runtime/sys_linux_arm64.s:158
#1 0x000000000044e884 in runtime.dieFromSignal (sig=6) at /opt/xxx/go/src/runtime/signal_unix.go:923
#2 0x000000000044ef30 in runtime.sigfwdgo (sig=6, info=<optimized out>, ctx=<optimized out>, ~r0=<optimized out>) at /opt/xxx/go/src/runtime/signal_unix.go:1128
#3 0x000000000044d53c in runtime.sigtrampgo (sig=0, info=0x2020c6, ctx=0x6) at /opt/xxx/go/src/runtime/signal_unix.go:432
#4 0x0000000000469a54 in runtime.sigtramp () at /opt/xxx/go/src/runtime/sys_linux_arm64.s:462
```
Up to and including Go 1.20, the C stack was printed correctly. Starting with 1.21, this problem appeared. Related issue: https://github.com/golang/go/issues/63277
https://github.com/golang/go/commit/de5b418bea70aaf27de1f47e9b5813940d1e15a4 fixes the problem on x64. **Unfortunately, the problem remains on arm64.**
@zzkcode gave more context in https://github.com/golang/go/issues/63277#issuecomment-2316767430
### What did you expect to see?
Generating the correct backtrace on arm64 (details will be in the Details section). | help wanted,NeedsInvestigation,arch-arm64,compiler/runtime | low | Critical |
2,524,049,609 | godot | `Window` with flag popup issue. | ### Tested versions
4.x
### System information
windows 10
### Issue description
A `Window` with the popup flag has a `time_since_popup` check: if the mouse is pressed outside the window (to hide it) less than 250 ms after it pops up, the window becomes unfocused instead of hidden, and it remains visible.
https://github.com/user-attachments/assets/2794fb4b-b9c9-49e0-9932-529ce1638766

### Steps to reproduce
Press a button to show any window that has the popup flag, then left-click outside the window within 250 ms.
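The timing-dependent branch described above can be sketched as follows (Python, purely illustrative of the reported behaviour; the 250 ms threshold is taken from the report, not from engine source):

```python
POPUP_GRACE_MS = 250  # assumed threshold from the report

def on_click_outside(time_since_popup_ms: int) -> str:
    # Reported behaviour: clicks arriving within the grace window only
    # unfocus the popup (leaving it visible); later clicks hide it.
    if time_since_popup_ms < POPUP_GRACE_MS:
        return "unfocused (still visible)"
    return "hidden"

print(on_click_outside(100))
print(on_click_outside(300))
```

The bug is that the early-click branch leaves the popup on screen with no focus, so the user has to click a second time to dismiss it.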
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,discussion,needs testing,topic:gui | low | Major |
2,524,081,645 | vscode | Make URI Handlers available for vscode.dev | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
We currently want to enable more non-technical people to use our extension via [vscode.dev](http://vscode.dev/).
For VS Code desktop we have [URI Handlers](https://code.visualstudio.com/api/references/vscode-api#window.registerUriHandler) that enable us to give a link like `vscode://mypub.myext/openFullscreen` to someone and by clicking the person gets a prompt to install the extension and we can also do something like opening a fullscreen VSC webview (suited for non-tech people).
It would be great if URI Handlers were also available in vscode.dev. It would make our use case a lot easier than explaining to people how to install VS Code and then install and open extensions.
2,524,094,560 | node | Deadlock at process shutdown | ### Version
v23.0.0-pre
### Platform
```text
Linux USR-PC2HD4AD 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
There is a deadlock that prevents the Node.js process from exiting. The issue is causing a lot (all?) of timeout failures in our CI. It can be reproduced by running a test in parallel with our `test.py` tool, for example
```
$ ./tools/test.py --repeat=10000 test/parallel/test-http2-large-write-multiple-requests.js
```
See also
- https://github.com/nodejs/node/issues/54133#issuecomment-2343320802
- https://github.com/nodejs/node/issues/52550#issuecomment-2343204697
- https://github.com/nodejs/node/pull/52959#issuecomment-2108711217
### How often does it reproduce? Is there a required condition?
Rarely, but often enough to be a serious issue for CI.
### What is the expected behavior? Why is that the expected behavior?
The process exits.
### What do you see instead?
The process does not exit.
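The gdb backtraces in the next section show the main thread parked in `NodePlatform::DrainTasks` inside `uv_cond_wait`. The general shape of that hang — a drain loop waiting on a condition variable for an outstanding-task count that never reaches zero because one completion is lost — can be reproduced as a toy model (Python, illustrative only, not Node's actual synchronization code):

```python
import threading

class Platform:
    """Toy analogue of a task platform whose drain loop blocks on a condvar."""

    def __init__(self):
        self.cond = threading.Condition()
        self.outstanding = 0

    def post(self):
        with self.cond:
            self.outstanding += 1

    def complete(self):
        with self.cond:
            self.outstanding -= 1
            self.cond.notify_all()

    def drain(self, timeout=None):
        # Analogue of DrainTasks: block until no tasks remain.
        with self.cond:
            return self.cond.wait_for(lambda: self.outstanding == 0,
                                      timeout=timeout)

p = Platform()
p.post()
p.post()
threading.Thread(target=p.complete).start()
# The second "worker" never calls complete() -> the count never hits zero,
# so drain() would block forever; here we use a timeout to demonstrate.
drained = p.drain(timeout=0.2)
print("drained:", drained)
```

In the real deadlock the wait has no timeout, so the process simply never exits — consistent with the CI timeouts described above.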
### Additional information
Attaching `gdb` to two of the hanging processes obtained from the command above, produces the following outputs:
```
$ gdb -p 15884
GNU gdb (Ubuntu 15.0.50.20240403-0ubuntu1) 15.0.50.20240403-git
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 15884
[New LWP 15953]
[New LWP 15952]
[New LWP 15951]
[New LWP 15950]
[New LWP 15900]
[New LWP 15889]
[New LWP 15888]
[New LWP 15887]
[New LWP 15886]
[New LWP 15885]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Download failed: Invalid argument. Continuing without source file ./nptl/./nptl/futex-internal.c.
0x00007f432025fd61 in __futex_abstimed_wait_common64 (private=21855, cancel=true, abstime=0x0, op=393, expected=0,
futex_word=0x555fa9fce8f0) at ./nptl/futex-internal.c:57
warning: 57 ./nptl/futex-internal.c: No such file or directory
(gdb) bt
#0 0x00007f432025fd61 in __futex_abstimed_wait_common64 (private=21855, cancel=true, abstime=0x0, op=393, expected=0,
futex_word=0x555fa9fce8f0) at ./nptl/futex-internal.c:57
#1 __futex_abstimed_wait_common (cancel=true, private=21855, abstime=0x0, clockid=0, expected=0,
futex_word=0x555fa9fce8f0) at ./nptl/futex-internal.c:87
#2 __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555fa9fce8f0, expected=expected@entry=0,
clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3 0x00007f43202627dd in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555fa9fce870,
cond=0x555fa9fce8c8) at ./nptl/pthread_cond_wait.c:503
#4 ___pthread_cond_wait (cond=0x555fa9fce8c8, mutex=0x555fa9fce870) at ./nptl/pthread_cond_wait.c:627
#5 0x0000555fa3d30d7d in uv_cond_wait (cond=<optimized out>, mutex=<optimized out>)
at ../deps/uv/src/unix/thread.c:814
#6 0x0000555fa2c8c063 in node::NodePlatform::DrainTasks(v8::Isolate*) ()
#7 0x0000555fa2ac528d in node::SpinEventLoopInternal(node::Environment*) ()
#8 0x0000555fa2c4abc4 in node::NodeMainInstance::Run() ()
#9 0x0000555fa2b9233a in node::Start(int, char**) ()
#10 0x00007f43201f11ca in __libc_start_call_main (main=main@entry=0x555fa2ababa0 <main>, argc=argc@entry=2,
argv=argv@entry=0x7ffe74216608) at ../sysdeps/nptl/libc_start_call_main.h:58
#11 0x00007f43201f128b in __libc_start_main_impl (main=0x555fa2ababa0 <main>, argc=2, argv=0x7ffe74216608,
init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffe742165f8)
at ../csu/libc-start.c:360
#12 0x0000555fa2ac06a5 in _start ()
(gdb)
```
```
$ gdb -p 15885
GNU gdb (Ubuntu 15.0.50.20240403-0ubuntu1) 15.0.50.20240403-git
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 15885
Reading symbols from /home/luigi/node/out/Release/node...
Reading symbols from /lib/x86_64-linux-gnu/libstdc++.so.6...
Reading symbols from /home/luigi/.cache/debuginfod_client/40b9b0d17fdeebfb57331304da2b7f85e1396ef2/debuginfo...
Reading symbols from /lib/x86_64-linux-gnu/libm.so.6...
Reading symbols from /usr/lib/debug/.build-id/90/32976b3ecc78d1362fedfcd88528562bbfb7e4.debug...
Reading symbols from /lib/x86_64-linux-gnu/libgcc_s.so.1...
Reading symbols from /home/luigi/.cache/debuginfod_client/92123f0e6223c77754bac47062c0b9713ed363df/debuginfo...
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...
Reading symbols from /usr/lib/debug/.build-id/6d/64b17fbac799e68da7ebd9985ddf9b5cb375e6.debug...
Reading symbols from /lib64/ld-linux-x86-64.so.2...
Reading symbols from /usr/lib/debug/.build-id/35/3e1b6cb0eebc08cf3ff812eae8a51b4efd684e.debug...
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Download failed: Invalid argument. Continuing without source file ./misc/../sysdeps/unix/sysv/linux/epoll_pwait.c.
0x00007f43202f0ee0 in __GI_epoll_pwait (epfd=epfd@entry=9, events=events@entry=0x7f43201bdcb0,
maxevents=maxevents@entry=1024, timeout=timeout@entry=-1, set=set@entry=0x0)
at ../sysdeps/unix/sysv/linux/epoll_pwait.c:40
warning: 40 ../sysdeps/unix/sysv/linux/epoll_pwait.c: No such file or directory
(gdb) bt
#0 0x00007f43202f0ee0 in __GI_epoll_pwait (epfd=epfd@entry=9, events=events@entry=0x7f43201bdcb0,
maxevents=maxevents@entry=1024, timeout=timeout@entry=-1, set=set@entry=0x0)
at ../sysdeps/unix/sysv/linux/epoll_pwait.c:40
#1 0x0000555fa3d364e7 in uv__io_poll (loop=loop@entry=0x555fa9fcea88, timeout=<optimized out>)
at ../deps/uv/src/unix/linux.c:1432
#2 0x0000555fa3d20c17 in uv_run (loop=0x555fa9fcea88, mode=UV_RUN_DEFAULT) at ../deps/uv/src/unix/core.c:448
#3 0x0000555fa2c8d4b1 in node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start()::{lambda(void*)#1}::_FUN(void*)
()
#4 0x00007f4320263a94 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:447
#5 0x00007f43202f0c3c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
(gdb)
``` | confirmed-bug,help wanted,v8 engine | medium | Critical |
2,524,111,651 | angular | Shadow DOM Style Bleeding | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
I have a host component that contains a header component using ViewEncapsulation.None and a mainContainer component using ViewEncapsulation.ShadowDom. The mainContainer component also includes an admin component with ViewEncapsulation.None. The admin component is a composite component that includes several other components, and I don’t want to use ViewEncapsulation.ShadowDom for it.
I’ve observed that the styles from the admin component are being applied to both the Shadow DOM and the root DOM, which affects the styling of the header component
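For context on why this happens: with `ViewEncapsulation.None`, Angular appends the component's styles to the document head as plain global CSS, so they are not tied to the component at all. A hedged sketch of the usual mitigation, switching the composite component to the default `Emulated` encapsulation, which scopes styles with generated attributes and needs no shadow root (component/file names here are illustrative, not from the actual app):

```ts
import { Component, ViewEncapsulation } from '@angular/core';

@Component({
  selector: 'app-admin',
  templateUrl: './admin.component.html',
  styleUrls: ['./admin.component.css'],
  // Emulated is the default; stated explicitly here for contrast with None.
  // Styles are rewritten to target generated _ngcontent-* attributes, so they
  // no longer apply globally to the header or the rest of the page.
  encapsulation: ViewEncapsulation.Emulated,
})
export class AdminComponent {}
```

Caveat: Emulated scoping relies on attribute selectors, so any intentionally global selectors inside the admin styles would still need restructuring.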
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
I’ve observed that the styles from the admin component are being applied to both the Shadow DOM and the root DOM, which affects the styling of the header component
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 12.2.18
Node: 14.21.3
Package Manager: npm 6.14.18
OS: win32 x64
Angular: 12.2.17
... animations, common, compiler, compiler-cli, core, forms
... language-service, localize, platform-browser
... platform-browser-dynamic, router
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.1202.18
@angular-devkit/build-angular 12.2.18
@angular-devkit/core 12.2.18
@angular-devkit/schematics 12.2.18
@angular/cdk 12.2.13
@angular/cli 12.2.18
@angular/material 12.2.13
@angular/material-moment-adapter 12.2.13
@schematics/angular 12.2.18
rxjs 6.6.7
typescript 4.2.4
```
### Anything else?
How can I prevent the admin component's styles from leaking into the root DOM while keeping its current encapsulation settings? | area: core,core: CSS encapsulation | low | Critical |
2,524,127,898 | vscode | Last character of Korean moves to where cursor is | Type: <b>Bug</b>
When I left-click after typing Korean (in .c, .py, .txt files, etc.),
the last character moves to the line where my mouse cursor is.

Can I get a solution for this?
I hope for a quick reply. Thank you.
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-1235U (12 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.71GB (7.45GB free)|
|Process Argv|--crash-reporter-id 92bab400-597b-498c-8af7-63b11a16d210|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (23)</summary>
Extension|Author (truncated)|Version
---|---|---
TabOut|alb|0.2.2
code-runner|for|0.12.2
vscode-power-mode|hoo|3.0.2
bongocat-buddy|Joh|1.6.0
vsc-python-indent|Kev|1.18.0
vscode-language-pack-ko|MS-|1.93.2024091109
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.1
jupyter|ms-|2024.8.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
cmake-tools|ms-|1.19.51
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
indent-rainbow|ode|8.3.1
vscode-numpy-viewer|Per|0.1.8
vscode-pets|ton|1.27.0
cmake|twx|0.0.17
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
wkspc-ranged-c:31125598
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | bug,editor-input-IME | low | Critical |
2,524,157,038 | PowerToys | Existing Windows Short-cut keys affected | ### Microsoft PowerToys version
v0.0.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
When Powertoys is running, built-in Windows short-cut keys become unreliable. Examples:
Win key + Number keys launch the wrong application from the taskbar
Win key+e doesn't launch Windows Explorer but some seemingly random other app like Magnifier
Win+m doesn't minimize all windows.
When I close Powertoys, the issue goes away
### ✔️ Expected Behavior
I press Win+2, expecting to start Outlook, which is the second item on my taskbar
### ❌ Actual Behavior
Win+2 launches Explorer, the third item, or it launches Notes, the first item. Which it launches seems haphazard. Most often it launches the expected app, but if it fails it fails repeatedly.
[PowerToysReport_2024-09-13-09-51-15.zip](https://github.com/user-attachments/files/16990482/PowerToysReport_2024-09-13-09-51-15.zip)
### Other Software
_No response_ | Issue-Bug,Needs-Repro,Needs-Triage,Needs-Team-Response | low | Minor |
2,524,166,127 | tauri | [feat] support multiple webviews in mobile | ### Describe the problem
Support multiple webviews in mobile, like a browser.
### Describe the solution you'd like
```js
let webviewWindow = new WebviewWindow('xxx', {
  url: '/desktop/setting',
  title: 'Settings' // originally '设置' ("Settings")
});
```
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,524,189,396 | storybook | [Bug]: Unable to resolve @storybook/builder-vite 'get-intrinsic' package | ### Describe the bug
### Error Log
```
Sourcemap for "/virtual:/@storybook/builder-vite/setup-addons.js" points to missing source files
✘ [ERROR] Failed to resolve entry for package "get-intrinsic". The package may have incorrect main/module/exports specified in its package.json. [plugin vite:dep-pre-bundle]
../../../../node_modules/side-channel/index.js:3:27:
3 │ var GetIntrinsic = require('get-intrinsic');
╵ ~~~~~~~~~~~~~~~
Sourcemap for "/virtual:/@storybook/builder-vite/vite-app.js" points to missing source files
✘ [ERROR] Failed to resolve entry for package "get-intrinsic". The package may have incorrect main/module/exports specified in its package.json. [plugin vite:dep-pre-bundle]
../../../../node_modules/call-bind/callBound.js:3:27:
3 │ var GetIntrinsic = require('get-intrinsic');
╵ ~~~~~~~~~~~~~~~
✘ [ERROR] Failed to resolve entry for package "get-intrinsic". The package may have incorrect main/module/exports specified in its package.json. [plugin vite:dep-pre-bundle]
../../../../node_modules/call-bind/index.js:4:27:
4 │ var GetIntrinsic = require('get-intrinsic');
╵ ~~~~~~~~~~~~~~~
Unhandled promise rejection: Error: Build failed with 3 errors:
../../../../node_modules/call-bind/callBound.js:3:27: ERROR: [plugin: vite:dep-pre-bundle] Failed to resolve entry for package "get-intrinsic". The package may have incorrect main/module/exports specified in its package.json.
../../../../node_modules/call-bind/index.js:4:27: ERROR: [plugin: vite:dep-pre-bundle] Failed to resolve entry for package "get-intrinsic". The package may have incorrect main/module/exports specified in its package.json.
../../../../node_modules/side-channel/index.js:3:27: ERROR: [plugin: vite:dep-pre-bundle] Failed to resolve entry for package "get-intrinsic". The package may have incorrect main/module/exports specified in its package.json.
at failureErrorWithLog (C:\Users\SUMIT SAHNI\OneDrive\Desktop\Big_Error\ec-ui-library\node_modules\.pnpm\esbuild@0.21.5\node_modules\esbuild\lib\main.js:1472:15)
at C:\Users\SUMIT SAHNI\OneDrive\Desktop\Big_Error\ec-ui-library\node_modules\.pnpm\esbuild@0.21.5\node_modules\esbuild\lib\main.js:945:25
at C:\Users\SUMIT SAHNI\OneDrive\Desktop\Big_Error\ec-ui-library\node_modules\.pnpm\esbuild@0.21.5\node_modules\esbuild\lib\main.js:1353:9
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
errors: [Getter/Setter],
warnings: [Getter/Setter]
}
```
### Reproduction link
https://github.com/Engineers-Cradle/ec-ui-library
### Reproduction steps
1. clone the repository
2. install dependencies using pnpm i
3. run storybook in development mode using pnpm run storybook
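Before digging into the builder, it may help to check whether plain Node (no Storybook, no Vite) can resolve the entry points esbuild failed on; if these fail too, the pnpm install/store is likely broken rather than `@storybook/builder-vite`. A small diagnostic sketch, run from the project root:

```javascript
// Diagnostic sketch: ask Node's own resolver about the packages that
// esbuild's dep pre-bundle step could not resolve.
function checkResolvable(pkgs) {
  const result = {};
  for (const pkg of pkgs) {
    try {
      result[pkg] = require.resolve(pkg);
    } catch (err) {
      result[pkg] = `FAILED: ${err.code}`;
    }
  }
  return result;
}

console.log(checkResolvable(['get-intrinsic', 'call-bind', 'side-channel']));
```

If `get-intrinsic` fails here even though it exists under `node_modules/.pnpm`, removing `node_modules` and rerunning `pnpm install` is the usual next step.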
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (8) x64 Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
Binaries:
Node: 20.17.0 - C:\Program Files\nodejs\node.EXE
npm: 9.5.1 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.10.0 - ~\AppData\Local\pnpm\pnpm.EXE <----- active
Browsers:
Edge: Chromium (127.0.2651.74)
npmPackages:
@storybook/addon-essentials: ^8.2.9 => 8.2.9
@storybook/addon-interactions: ^8.2.9 => 8.2.9
@storybook/addon-links: ^8.2.9 => 8.2.9
@storybook/addon-onboarding: ^8.2.9 => 8.2.9
@storybook/blocks: ^8.2.9 => 8.2.9
@storybook/react: ^8.2.9 => 8.2.9
@storybook/react-vite: ^8.2.9 => 8.2.9
@storybook/test: ^8.2.9 => 8.2.9
eslint-plugin-storybook: ^0.8.0 => 0.8.0
storybook: ^8.2.9 => 8.2.9
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,524,211,611 | flutter | [macOS] Assertion failure and crash when using the MOS app for smooth scrolling | ### Steps to reproduce
1) Download the app here: https://mos.caldis.me (https://github.com/Caldis/Mos/blob/master/README.enUS.md)
2) Install it and enable the smooth scrolling feature
3) Scroll up or down in any ListView
Is there a way to disable NSAssert in release builds without having to rebuild the Flutter Engine myself?
### Expected results
ListViews should scroll normally
### Actual results
```
Assertion failure in -[FlutterViewController dispatchMouseEvent:phase:], FlutterViewController.mm:697
Received gesture event with unexpected phase
```
It can be found here: https://github.com/flutter/engine/blob/c9b9d5780da342eb3f0f5e439a7db06f7d112575/shell/platform/darwin/macos/framework/Source/FlutterViewController.mm#L697
### Code sample
No sample required
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel stable, 3.24.2, on macOS 14.6.1 23G93 darwin-arm64, locale ru-RU)
• Flutter version 3.24.2 on channel stable at /Users/kbukaev/Flutter
! Warning: `flutter` on your path resolves to /Users/kbukaev/flutter/bin/flutter, which is not inside your current Flutter SDK checkout at /Users/kbukaev/Flutter. Consider adding
/Users/kbukaev/Flutter/bin to the front of your path.
! Warning: `dart` on your path resolves to /Users/kbukaev/flutter/bin/dart, which is not inside your current Flutter SDK checkout at /Users/kbukaev/Flutter. Consider adding /Users/kbukaev/Flutter/bin to
the front of your path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (9 days ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
• If those were intentional, you can disregard the above warnings; however it is recommended to use “git” directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
• Android SDK at /Users/kbukaev/Library/Android/sdk
• Platform android-34, build-tools 33.0.2
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
:hammer: https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
:hammer: https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.93.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.120
! Error: Browsing on the local area network for iPad (Kirill). Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| engine,platform-mac,a: error message,a: mouse,P3,team-macos,triaged-macos | low | Critical |
2,524,228,348 | node | Error Compiling Node.js v20.2.0 for Android (x86_64) using android-ndk-r25c | ### Version
v20.2.0
### Platform
```text
Docker From Ubuntu:20.04
Linux fc67e7bf1d76 5.15.0-101-generic #111~20.04.1-Ubuntu SMP Mon Mar 11 15:44:43 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
./android-configure /workspace/android-ndk-r25c 24 x86_64
make
### How often does it reproduce? Is there a required condition?
always
### What is the expected behavior? Why is that the expected behavior?
All build steps work fine and generate the node binary at out/Release.
### What do you see instead?
```
/workspace/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android24-clang -o /workspace/node/out/Release/obj.host/genccode/deps/icu-small/source/tools/genccode/genccode.o ../deps/icu-small/source/tools/genccode/genccode.c '-DV8_DEPRECATION_WARNINGS' '-DV8_IMMINENT_DEPRECATION_WARNINGS' '-D_GLIBCXX_USE_CXX11_ABI=1' '-DNODE_OPENSSL_CONF_NAME=nodejs_conf' '-DNODE_OPENSSL_HAS_QUIC' '-DICU_NO_USER_DATA_OVERRIDE' '-D__STDC_FORMAT_MACROS' '-DOPENSSL_NO_PINSHARED' '-DOPENSSL_THREADS' '-DOPENSSL_NO_ASM' '-DUCONFIG_NO_SERVICE=1' '-DU_ENABLE_DYLOAD=0' '-DU_STATIC_IMPLEMENTATION=1' '-DU_HAVE_STD_STRING=1' '-DUCONFIG_NO_BREAK_ITERATION=0' -I../deps/icu-small/source/common -I../deps/icu-small/source/i18n -I../deps/icu-small/source/tools/toolutil -Wall -Wextra -Wno-unused-parameter -m64 -pthread -O3 -fno-omit-frame-pointer -fPIC -MMD -MF /workspace/node/out/Release/.deps//workspace/node/out/Release/obj.host/genccode/deps/icu-small/source/tools/genccode/genccode.o.d.raw -c
/workspace/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android24-clang++ -rdynamic -m64 -pthread -fPIC -o /workspace/node/out/Release/genccode -Wl,--start-group /workspace/node/out/Release/obj.host/genccode/deps/icu-small/source/tools/genccode/genccode.o /workspace/node/out/Release/obj.host/genccode/tools/icu/no-op.o /workspace/node/out/Release/obj.host/tools/icu/libicutools.a -Wl,--end-group
LD_LIBRARY_PATH=/workspace/node/out/Release/lib.host:/workspace/node/out/Release/lib.target:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; cd ../tools/icu; mkdir -p /workspace/node/out/Release/obj/gen; "/workspace/node/out/Release/icupkg" -tl ../../deps/icu-tmp/icudt73l.dat "/workspace/node/out/Release/obj/gen/icudt73l.dat"
/bin/sh: 1: /workspace/node/out/Release/icupkg: not found
make[1]: *** [tools/icu/icudata.target.mk:13: /workspace/node/out/Release/obj/gen/icudt73l.dat] Error 127
make: *** [Makefile:134: node] Error 2
```
### Additional information

| build,android | low | Critical |
2,524,234,306 | pytorch | Add AT_DISPATCH for layernorm gamma and beta | ### 🚀 The feature, motivation and pitch
I recently ran into this when trying to apply LayerNorm to a float16 tensor, which raises a RuntimeError. This is a simple reproducer:
```
import torch
from torch.nn import LayerNorm
inputs = torch.rand([1, 1024, 4096], dtype=torch.float16).cuda()
ln = LayerNorm(4096).cuda()
outputs = ln(inputs)
# RuntimeError: expected scalar type Half but found Float
```
I wondered where the float32 came from and found that LayerNorm's weight/bias are a pair of learnable parameters, initialized as float32 by default. This dtype mismatch propagates all the way down to the final implementation, which doesn't support it.
### Alternatives
I know that using `LayerNorm(4096, dtype=torch.float16).cuda()` or `LayerNorm(4096, dtype=torch.float16).cuda().half()` works around this. But after checking the CUDA code https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/layer_norm_kernel.cu#L813, I think this could be solved easily by just adding another AT_DISPATCH over the gamma/beta data type and adding their type to the kernel templates. With this, LayerNorm could be friendlier about dtypes.
Does this make sense to you? Or should we just follow the 'rule' of making everything the same type first? This may not only be a LayerNorm issue but related to a broader convention in torch. Just providing some thoughts :)
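To make the proposal concrete: today the kernel effectively dispatches only on the input dtype and asserts that gamma/beta match it; the suggestion is a second, nested dispatch over the parameter dtype. A toy stdlib-only sketch of that control flow (all names are illustrative, not ATen's actual macros or signatures):

```python
def layer_norm_kernel(x_dtype: str, param_dtype: str) -> str:
    """Pick a compute dtype from (input dtype, gamma/beta dtype)."""
    # outer dispatch: input dtype (this part exists today)
    if x_dtype not in ("float16", "float32"):
        raise TypeError(f"unsupported input dtype {x_dtype}")
    # proposed inner dispatch: gamma/beta dtype, instead of asserting it
    # equals x_dtype (the current behavior is what raises
    # "expected scalar type Half but found Float")
    if param_dtype not in ("float16", "float32"):
        raise TypeError(f"unsupported parameter dtype {param_dtype}")
    # accumulate in the wider of the two types
    return "float32" if "float32" in (x_dtype, param_dtype) else "float16"

# the failing case from the reproducer: half input, float32 weight/bias
assert layer_norm_kernel("float16", "float32") == "float32"
assert layer_norm_kernel("float16", "float16") == "float16"
```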
### Additional context
_No response_ | triaged,module: half | low | Critical |
2,524,259,136 | PowerToys | Enabling Advance Paste breaks pin clipboard | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
1. Enable Advance Paste
2. Press Win+V, this opens windows clipboard but **pin** will be missing.
### ✔️ Expected Behavior
pin clipboard option must be there
### ❌ Actual Behavior
pin clipboard option doesn't show up
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,524,270,240 | vscode | new version of vscode (linux) performance drop (non smooth) | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Type: **Bug**
|Vendor| Render | Performance |
|---------------|---------------|---------------|
|Intel HD 620|OpenGL|non smooth|
|Intel HD 620|Vulkan|very smooth|
|Intel HD 620|Zink|smooth|
|AMD 530|OpenGL|non smooth (but smooth than Intel HD 620)|
|AMD 530|Vulkan|smooth|
|AMD 530|Zink|super smooth (best for me)|
Does this issue occur when all extensions are disabled?: No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1 38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40 x64
- OS Version: Linux debian 6.1.0-25-amd64 #'1 SMP PREEMPT_DYNAMIC Debian 6.1.106-3 (2024-08-26) x86_64 GNU/Linux
### System Info
| Item| Value |
|---------------|---------------|
|CPUs| Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz (4 x 2299)|
|Memory (System)| 3.61GB (0.69GB free)|
|Load (avg)| 2, 2, 2|
|VM| 0%|
|Screen Reader| no|
|Process Argv| --enable-features=Vulkan --crash-reporter-id baab1281-4838-4ede-a1e9-50a722e9d252|
|GPU Status| 2d_canvas: enabled <br> canvas_oop_rasterization: enabled_on <br>direct_rendering_display_compositor: disabled_off_ok <br>gpu_compositing: enabled <br>multiple_raster_threads: enabled_on <br>opengl: enabled_on <br>rasterization: enabled <br>raw_draw: disabled_off_ok <br>skia_graphite: disabled_off <br>video_decode: enabled <br>video_encode: disabled_software <br>vulkan: enabled_on <br>webgl: enabled <br>webgl2: enabled <br>webgpu: disabled_off <br>webnn: disabled_off|
| info-needed | low | Critical |
2,524,280,978 | rust | Permit trait object types where all (non-generic) associated constants are specified (via assoc item bindings) | > **Important context**: Inherently, this would only become legal under the ongoing lang experiment `associated_const_equality` (#92827).
We permit trait object types where all 'active'[^1] non-generic associated *types* are specified (via assoc item bindings). We should extend this to cover 'active' non-generic[^2] associated *constants*, too.
Note that I haven't spent much time thinking about *soundness* yet. I still need to iron out the exact rules. Implementation-wise, I'm almost certain that any advances are blocked by #120905 (more precisely, its underlying issue) which I presume we would need to fix first for correctness.
Minimal & contrived example of something that would start compiling:
```rs
#![feature(associated_const_equality)]
trait Trait {
const K: ();
}
fn main() {
let _: dyn Trait<K = { () }>;
}
```
Presently, this gets rejected and we emit E0038 (*cannot be made into an object*).
---
Lastly, lest I forget, we should emit the lint `unused_associated_type_bounds`[^3] (#112319) for assoc const bindings where the corresp. assoc const is 'disabled' via `where Self: Sized` which is only possible to write under `generic_const_items` (#113521).
---
[^1]: I.e., not 'made inactive' / 'disabled' via `where Self: Sized` (#112319).
[^2]: This might or might not be a temporary restriction. For context, we don't (yet) permit GAT bindings in trait object types either on stable, due to soundness concerns. See `generic_associated_types_extended` (#95451). Also note that generic assoc consts (GACs) are only available under `generic_const_items` (#113521).
[^3]: Indeed, the name would no longer be accurate. Ideally, we would rename the lint when generalizing it. | T-compiler,C-feature-request,S-blocked,requires-nightly,T-types,F-associated_const_equality,A-trait-objects,A-dyn-compatibility | low | Minor |
2,524,289,361 | vscode | Editor scrolls horizontally with wrapping enabled | Type: <b>Bug</b>
When wrapping is enabled, the editor still scrolls horizontally, as it seems to create padding to compensate for the scrollbar. Personally, I enable wrapping precisely so there is no horizontal scrollbar.
With the minimap enabled, this issue becomes even more apparent, as the file contents can go underneath the minimap. The vertical scrollbar and minimap should not cause a horizontal scrollbar to appear.
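A possible stopgap until the padding/layout is fixed: the horizontal scrollbar can be hidden entirely via settings (these are real setting names; this only hides the symptom rather than fixing the underlying layout):

```json
{
  "editor.wordWrap": "on",
  "editor.scrollbar.horizontal": "hidden"
}
```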
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22635
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i5-13600K (20 x 3494)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.78GB (0.55GB free)|
|Process Argv|--crash-reporter-id 3bc3458f-79a6-4a53-8489-c57e503ec37c|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (28)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-intelephense-client|bme|1.12.6
vscode-tailwindcss|bra|0.12.10
laravel-blade|cjh|1.1.2
js-codeformer|cms|2.6.1
jsrefactor|cms|3.0.1
vscode-eslint|dba|3.0.10
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
vscode-github-actions|git|0.26.5
vscode-pull-request-github|Git|0.96.0
elixir-ls|Jak|0.23.1
mediawiki|jas|0.0.4
rainbow-csv|mec|3.12.0
template-string-converter|meg|0.6.1
kamailio-syntax|mic|1.0.8
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.1
vscode-react-native|msj|1.13.0
material-icon-theme|PKi|5.10.0
prisma|Pri|5.19.1
vscode-xml|red|0.27.1
vscode-blade-formatter|shu|0.24.2
vscode-scss-formatter|sib|3.0.0
vscode-stylelint|sty|1.4.0
vscode-mdx|uni|1.8.10
explorer|vit|1.2.8
pretty-ts-errors|Yoa|0.6.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | bug | low | Critical |
2,524,315,124 | vscode | Crashpad ignores exception from experimentalWatcherNext | - on Windows use Insiders (or run out of sources from main)
- configure the setting: `files.experimentalWatcherNext` to `true`
- make sure `typescript.tsserver.experimental.useVsCodeWatcher` is checked in settings
- this will trigger a restart
- open vscode repo
- open a TS file (such as strings.ts) to trigger TS language services (this is important because it installs file watchers)
- from the branch picker or terminal do some back and forth between main and release/1.80 branch
See the output log `main` that filewatcher process exited with exception but no `dmp` files are generated in crashpad folder. In some cases, a WER dialog shows up indicating crashpad has ignored this exception. | bug,upstream,freeze-slow-crash-leak,file-watcher,chromium | low | Critical |
2,524,410,174 | ollama | Occasionally getting a 500 response and 'ollama._types.ResponseError: health resp' seemingly out of nowhere | ### What is the issue?
Hello, I am running a Python server that receives and sends requests to an instance of Ollama (with the Llama 3.1 model).
When lots of requests are sent at once, I occasionally receive a 500 response from the Ollama server which causes the process to crash. The error I get from the Python Ollama module is as follows:
```Traceback (most recent call last):
File "ollama\_client.py", line 407, in generate
File "ollama\_client.py", line 378, in _request_stream
File "ollama\_client.py", line 348, in _request
ollama._types.ResponseError: health resp: Get "http://127.0.0.1:61519/health": dial tcp 127.0.0.1:61519: connectex: Only one usage of each socket address (protocol/network address/port) is normally permitted.
```
I am not trying to do anything else with Ollama whilst requests to generate are being sent.
Is there something in Ollama that is automatically attempting to bind this port? Can I somehow just disable this '/health' endpoint?
Thanks in advance.
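In case it is useful for triage (or as a stopgap for others hitting this), the failure mode can be absorbed client-side with a small retry wrapper. Sketched here with a stand-in exception class so it runs standalone; in the real client one would catch `ollama.ResponseError` around `client.generate(...)` instead:

```python
import time

class ResponseError(Exception):
    """Stand-in for ollama._types.ResponseError so this sketch is self-contained."""

def with_retries(fn, attempts=3, base_delay=0.01):
    # Retry transient 500-style failures with a short exponential backoff.
    for i in range(attempts):
        try:
            return fn()
        except ResponseError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

calls = {"n": 0}

def flaky_generate():
    # simulate the intermittent 'health resp' failure: fail twice, then succeed
    calls["n"] += 1
    if calls["n"] < 3:
        raise ResponseError("health resp: dial tcp 127.0.0.1:61519 ...")
    return {"response": "ok"}

print(with_retries(flaky_generate))  # -> {'response': 'ok'} after 3 calls
```

This does not address why the internal health check port collides in the first place, only the crash on the caller's side.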
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10 | bug | low | Critical |
2,524,442,396 | next.js | Set cookie doesn't work in Router handler nor server action | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.8.1
npm: 10.1.0
Yarn: 1.22.22
pnpm: 8.9.2
Relevant Packages:
next: 14.2.2 // There is a newer version (14.2.11) available, upgrade recommended!
eslint-config-next: 14.2.2
react: 18.2.0
react-dom: 18.2.0
typescript: 5.2.2
Next.js Config:
output: N/A
⚠ There is a newer version (14.2.11) available, upgrade recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which example does this report relate to?
none
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
I am trying to set up an `auth` page and hit some errors while trying to set cookies from the server side, despite setting them in a Server Action and a Route Handler as the documentation describes. The idea is that when the user visits the auth page, for example `http://localhost:3010/auth?redirectTo=/en&tokenCsrf=1234&tokenRefresh=1234&tokenAccess=1234`, the mock cookies are set in the browser during SSR and the user is redirected to the `redirectTo` page.
#### Here is my attempt to use a Route Handler
```
// app/auth.tsx
import { AuthPageComponentParams } from '@/components/AuthPage';
import { BasePageProps } from '@/types/page';
import { redirect } from 'next/navigation';
type AuthPageParams = {
searchParams: {
tokenAccess?: string;
tokenRefresh?: string;
tokenCsrf?: string;
redirectTo?: string;
};
};
export default async function AuthPage({ searchParams }: AuthPageParams) {
console.log('AuthPage', searchParams);
await fetch('http://localhost:3010/api/auth/mock', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
credentials: 'include'
},
body: JSON.stringify(searchParams)
});
redirect(searchParams.redirectTo || '/');
}
```
```
// app/api/auth/mock
import { CONFIG } from '@/config/config';
import { HTTP_STATUS } from '@/constants/http_status';
import { MOCK_AUTH_COOKIES } from '@helper/constants';
import { ResponseCookie } from 'next/dist/compiled/@edge-runtime/cookies';
import { cookies } from 'next/headers';
import { NextRequest, NextResponse } from 'next/server';
export type AuthCookies = {
tokenAccess?: string;
tokenRefresh?: string;
tokenCsrf?: string;
};
export const mockAuthCookieConfig: Cookies.CookieAttributes = {
domain: `.localhost`,
path: '/',
secure: true,
sameSite: 'lax' // NOTE: To avoid the URL breaking when clicked from an email.
};
type Body = AuthCookies;
export async function POST(request: NextRequest) {
const { tokenAccess, tokenRefresh, tokenCsrf }: Body = await request.json();
try {
const cookieStore = cookies();
if (!CONFIG.auth.mock && CONFIG.environment !== 'development_local' && tokenAccess && tokenRefresh && tokenCsrf) {
return;
}
if (tokenAccess && tokenRefresh && tokenCsrf) {
console.log('Setting mock auth cookies in the browser...');
const oneMonth = 24 * 60 * 60 * 30;
const cookiePairs = {
[MOCK_AUTH_COOKIES.MOCK_CSRF_TOKEN]: tokenCsrf,
[MOCK_AUTH_COOKIES.MOCK_REFRESH_TOKEN]: tokenRefresh,
[MOCK_AUTH_COOKIES.MOCK_ACCESS_TOKEN]: tokenAccess
};
Object.entries(cookiePairs).forEach(([k, v]) => {
cookieStore.set(k, v, {
...mockAuthCookieConfig,
expires: oneMonth
} as ResponseCookie);
});
console.log('Mock auth cookies set successfully', {
[MOCK_AUTH_COOKIES.MOCK_CSRF_TOKEN]: cookieStore.get(MOCK_AUTH_COOKIES.MOCK_CSRF_TOKEN),
[MOCK_AUTH_COOKIES.MOCK_REFRESH_TOKEN]: cookieStore.get(MOCK_AUTH_COOKIES.MOCK_REFRESH_TOKEN),
[MOCK_AUTH_COOKIES.MOCK_ACCESS_TOKEN]: cookieStore.get(MOCK_AUTH_COOKIES.MOCK_ACCESS_TOKEN)
});
}
} catch (e) {
console.error(e);
}
return NextResponse.json({ ok: true, status: HTTP_STATUS.SUCCESS });
}
```
The endpoint is called properly; in the server log, I see
```
Setting mock auth cookies in the browser...
[0] Mock auth cookies set successfully {
[0] MOCK_CSRF_TOKEN: {
[0] name: 'MOCK_CSRF_TOKEN',
[0] value: '1234',
[0] domain: '.localhost',
[0] path: '/',
[0] secure: true,
[0] sameSite: 'lax',
[0] expires: 1970-01-01T00:43:12.000Z
[0] },
[0] MOCK_REFRESH_TOKEN: {
[0] name: 'MOCK_REFRESH_TOKEN',
[0] value: '1234',
[0] domain: '.localhost',
[0] path: '/',
[0] secure: true,
[0] sameSite: 'lax',
[0] expires: 1970-01-01T00:43:12.000Z
[0] },
[0] MOCK_ACCESS_TOKEN: {
[0] name: 'MOCK_ACCESS_TOKEN',
[0] value: '1234',
[0] domain: '.localhost',
[0] path: '/',
[0] secure: true,
[0] sameSite: 'lax',
[0] expires: 1970-01-01T00:43:12.000Z
[0] }
[0] }
```
but in the browser, nothing is set.
#### Attempt with a Server Action
```
// actions/auth.ts
'use server';
import { cookies } from 'next/headers';
import { AUTH_COOKIES, MOCK_AUTH_COOKIES } from '@helper/constants';
import { CONFIG } from '@/config/config';
import { getCsrfTokenFromSignedCookie } from '@/utilities/auth/auth';
import { ResponseCookie } from 'next/dist/compiled/@edge-runtime/cookies';
export type AuthCookies = {
tokenAccess?: string;
tokenRefresh?: string;
tokenCsrf?: string;
};
const mockAuthCookieConfig: Cookies.CookieAttributes = {
domain: `.localhost`,
path: '/',
secure: true,
sameSite: 'lax' // NOTE: To avoid the URL breaking when clicked from an email.
};
/*
* This function sets the mock auth cookies in the browser.
* It should only be used for development_local environment.
* */
export async function setMockAuthCookiesIfApplicable({ tokenAccess, tokenRefresh, tokenCsrf }: AuthCookies) {
const cookieStore = cookies()
if (!CONFIG.auth.mock || CONFIG.environment !== 'development_local') {
return;
}
if (CONFIG.auth.mock && tokenAccess && tokenRefresh && tokenCsrf) {
console.log('Setting mock auth cookies in the browser');
const oneMonth = 24 * 60 * 60 * 30;
const cookiePairs = {
[MOCK_AUTH_COOKIES.MOCK_CSRF_TOKEN]: tokenCsrf,
[MOCK_AUTH_COOKIES.MOCK_REFRESH_TOKEN]: tokenRefresh,
[MOCK_AUTH_COOKIES.MOCK_ACCESS_TOKEN]: tokenAccess
};
Object.entries(cookiePairs).forEach(([k, v]) => {
cookieStore.set(k, v, {
...mockAuthCookieConfig,
expires: oneMonth,
} as ResponseCookie);
})
}
}
```
```
// app/auth/page.tsx
export default async function AuthPage({ searchParams }: AuthPageParams) {
await setMockAuthCookiesIfApplicable(searchParams);
redirect(searchParams.redirectTo || '/');
}
```
Now, when visiting the URL, it just throws an error right away

What am I missing here?
### Expected Behavior
`cookies().set` should work as the documentation describes.
### To Reproduce
The code to reproduce is included in the description. I have nothing special included in the `layout.tsx` file. | examples | low | Critical |
2,524,450,787 | flutter | Reversed ListView range is not maintained when using RangeMaintainingScrollPhysics | ### Steps to reproduce
- Run the provided sample.
- Scroll to the top and notice that the range in the list view changes by the size of the recently added list items, rather than staying the same. For example, scroll to the top, where you see 'Item 0', and wait for the next item to be added to the list view. When the new item is added, the list view automatically scrolls down by the size of the newly added item, resulting in a change in the range.
### Expected results
As we are using `RangeMaintainingScrollPhysics` in the list view, the list should stay stable while scrolling, even when items are added dynamically.
### Actual results
The list view automatically scrolls down by the size of the newly added item.

### Code sample
```dart
import 'dart:async';
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatefulWidget {
const MainApp({super.key});
@override
State<MainApp> createState() => _MainAppState();
}
class _MainAppState extends State<MainApp> {
late List<String> _baseItems;
final List<String> _loadingItems = <String>[];
@override
void initState() {
_baseItems = List<String>.generate(10000, (index) => 'Item $index');
Timer.periodic(const Duration(seconds: 1), (timer) {
if (_baseItems.isEmpty) {
timer.cancel();
return;
}
setState(() {
_loadingItems.add(_baseItems.removeAt(0));
});
});
super.initState();
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: ListView.builder(
reverse: true,
physics: const RangeMaintainingScrollPhysics(),
itemCount: _loadingItems.length,
itemBuilder: (BuildContext context, int index) {
return Padding(
padding: const EdgeInsets.all(5.0),
child: ListTile(
title: Text('Item ${_loadingItems.length - index - 1}'),
tileColor: Colors.blue[100],
),
);
},
),
),
);
}
@override
void dispose() {
_baseItems.clear();
_loadingItems.clear();
super.dispose();
}
}
```
### Flutter Doctor output
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.22621.4169], locale en-US)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
X Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/windows-android-setup for more details.
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Enterprise 2022 17.11.3)
[!] Android Studio (not installed)
[√] VS Code (version 1.93.0)
[√] Connected device (3 available)
[√] Network resources
! Doctor found issues in 2 categories.
``` | c: new feature,framework,f: scrolling,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Major |
2,524,452,337 | electron | [Bug]: Support URL as input for net.fetch | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
30.4.0
### What operating system(s) are you using?
macOS
### Operating System Version
Sonoma
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
Web API `fetch` is specified as also accepting a URL as the first parameter. Node.js' `fetch` supports URL. Electron's implementation appears to tolerate a URL. It would be very convenient to align with the others and make what already works explicit in the typings.
Web API `fetch`: https://developer.mozilla.org/en-US/docs/Web/API/Window/fetch
Where I think Electron passes the first parameter in a way that accepts a URL: https://github.com/electron/electron/blob/1c3a5ba5d17c18cbc1fc096d2a05fc24f2b2ddee/lib/browser/api/net-fetch.ts#L21
### Actual Behavior
URL is missing from the docs that are used to generate `electron.d.ts`: https://github.com/electron/electron/blob/1c3a5ba5d17c18cbc1fc096d2a05fc24f2b2ddee/docs/api/net.md#L68
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/macOS,documentation :notebook:,bug :beetle:,stale,30-x-y | low | Critical |
2,524,540,305 | pytorch | `torch.export.load` hangs indefinitely for exported `dlrm_v2` model | ### 🐛 Describe the bug
We tried to save and load the `torch.export`-ed dlrm_v2 model (97.5 GB); the model repository is: https://github.com/mlcommons/inference/tree/master/recommendation/dlrm_v2/pytorch
```python
import torch
from torch._export import capture_pre_autograd_graph
multi_hot = [
3,
2,
1,
2,
6,
1,
1,
1,
1,
7,
3,
8,
1,
6,
9,
5,
1,
1,
1,
12,
100,
27,
10,
3,
1,
1,
]
max_batchsize = 64
dsx = torch.randn((max_batchsize, 13), dtype=torch.float)
lsi = [torch.ones((max_batchsize * h), dtype=torch.long) for h in multi_hot]
lso = [
torch.arange(0, (max_batchsize + 1) * h, h, dtype=torch.long) for h in multi_hot
]
example_inputs = (dsx, lsi, lso)
graph_model = capture_pre_autograd_graph(model, example_inputs)
exported_model = torch.export.export(graph_model, example_inputs)
torch.export.save(exported_model, model_file_path)
loaded_model = torch.export.load(model_file_path)
```
The script hangs indefinitely on this line `loaded_model = torch.export.load(model_file_path)`. Please take a look and help investigate the root cause for this. Thanks.
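To help pinpoint where the load gets stuck, a stdlib-only watchdog can dump every thread's Python stack after a delay. This is just a diagnostic sketch; the commented-out `torch.export.load` call is the one from the snippet above.

```python
import faulthandler
import sys

def arm_hang_watchdog(seconds=60.0, out=sys.stderr):
    """If the process is still running after `seconds`, dump all thread
    stacks to `out` (repeating at that interval); returns a disarm callable."""
    faulthandler.dump_traceback_later(seconds, repeat=True, file=out, exit=False)
    return faulthandler.cancel_dump_traceback_later

# cancel = arm_hang_watchdog(60.0)
# loaded_model = torch.export.load(model_file_path)  # the call that hangs
# cancel()
```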
### Versions
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,524,552,608 | PowerToys | Can the connection of the unbounded mouse remember the password and host of the previous connection | ### Description of the new feature / enhancement
For example, I have two environments: an office environment and a home environment. Every time I change environments, I need to re-enter the host and the connection key. Could the keys and hosts that have already been connected be saved and remembered, so that I can simply click on them for the next connection?
### Scenario when this would be used?
For example, I have two environments: an office environment and a home environment. Every time I change environments, I need to re-enter the host and the connection key. Could the keys and hosts that have already been connected be saved and remembered, so that I can simply click on them for the next connection?
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,524,568,493 | rust | Possibly sub-optimal optimization of few bytes copy | This code contains three examples. foo0 and foo2 are optimized well (with a 4-byte `mov` instruction plus a single-byte store), while foo1 shows single-byte copies.
```rust
pub fn foo0(data: &mut [u8]) -> &[u8] {
if data.len() >= 5 {
data[0] = b'F';
data[1] = b'a';
data[2] = b'l';
data[3] = b's';
data[4] = b'e';
&data[.. 5]
} else {
&[]
}
}
pub fn foo1(data: &mut [u8]) -> &[u8] {
if data.len() >= 5 {
data[0] = b'F';
data[1] = b'a';
data[2] = b'l';
data[3] = b's';
data[4] = b'e';
&data[.. 5]
} else if data.len() >= 4 {
data[0] = b'T';
data[1] = b'r';
data[2] = b'u';
data[3] = b'e';
&data[.. 4]
} else {
&[]
}
}
pub fn foo2(data: &mut [u8]) -> &[u8] {
if data.len() >= 5 {
data[.. 5].copy_from_slice(&[b'F', b'a', b'l', b's', b'e']);
&data[.. 5]
} else if data.len() >= 4 {
data[.. 4].copy_from_slice(&[b'T', b'r', b'u', b'e']);
&data[.. 4]
} else {
&[]
}
}
fn main() {}
```
Using the godbolt site with:
```
rustc 1.83.0-nightly (8d6b88b16 2024-09-11)
binary: rustc
commit-hash: 8d6b88b168e45ee1624699c19443c49665322a91
commit-date: 2024-09-11
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
Compilation using -C opt-level=3 and other aggressive optimization flags.
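As a side note, the dword immediates in the optimized output below are just the string bytes packed little-endian, which makes the good and bad codegen easy to tell apart. A quick check in Python:

```python
import struct

def le32(s: bytes) -> int:
    """Interpret 4 bytes as a little-endian 32-bit immediate."""
    return struct.unpack("<I", s)[0]

assert le32(b"Fals") == 1936482630   # dword stored by foo0/foo2 for "False"
assert le32(b"True") == 1702195796   # dword stored by foo2 for "True"
assert struct.unpack("<H", b"Fa")[0] == 24902  # word stored by foo1
```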
The asm is:
```asm
foo0:
cmp rsi, 4
jbe .LBB0_1
mov rax, rdi
mov dword ptr [rdi], 1936482630
mov byte ptr [rdi + 4], 101
mov edx, 5
ret
.LBB0_1:
mov eax, 1
xor edx, edx
ret
foo1:
mov rax, rdi
cmp rsi, 4
jbe .LBB1_1
mov word ptr [rax], 24902
mov edx, 5
mov ecx, 4
mov sil, 115
mov edi, 3
mov r8b, 108
mov r9d, 2
mov byte ptr [rax + r9], r8b
mov byte ptr [rax + rdi], sil
mov byte ptr [rax + rcx], 101
ret
.LBB1_1:
jne .LBB1_2
mov byte ptr [rax], 84
mov edx, 4
mov ecx, 3
mov sil, 117
mov edi, 2
mov r8b, 114
mov r9d, 1
mov byte ptr [rax + r9], r8b
mov byte ptr [rax + rdi], sil
mov byte ptr [rax + rcx], 101
ret
.LBB1_2:
mov eax, 1
xor edx, edx
ret
foo2:
mov rax, rdi
cmp rsi, 4
jbe .LBB2_1
mov byte ptr [rax + 4], 101
mov dword ptr [rax], 1936482630
mov edx, 5
ret
.LBB2_1:
jne .LBB2_2
mov dword ptr [rax], 1702195796
mov edx, 4
ret
.LBB2_2:
mov eax, 1
xor edx, edx
ret
``` | T-compiler,C-bug,I-heavy,C-optimization | low | Minor |
2,524,572,089 | transformers | Support context parallel training with ring-flash-attention | ### Feature request
Hi, I'm the author of [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention).
I wonder if you are interested in integrating context parallelism with [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention), so that users can train LLMs on long data more efficiently.
### Motivation
Now that OpenAI o1 has been released, it will probably be common for people to train models with really long CoT data. And it would be nice if most models within the transformers library could support training with long context efficiently via some form of context parallelism, i.e. with the context length scaling linearly with the number of GPUs.
The 3 existing context parallel methods are DeepSpeed Ulysses, ring attention, and the one proposed in the [llama3 tech report](https://arxiv.org/abs/2407.21783). DeepSpeed Ulysses is limited by the number of kv heads (the maximum context length can be `num_head_kv * seq_length_per_gpu`), which makes it a little unfriendly to GQA models. So it would be great if the transformers library could support one or both of the other 2 context parallel methods.
And both ring attention and the llama3 strategy are supported with flash attention in [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention), whose correctness has been verified by [jzhang38/EasyContext](https://github.com/jzhang38/EasyContext). The library has basically the same API as flash attention, and it hides the required communication from its user, making it an easy substitution at any original flash attention call site.
Therefore, I believe it would be easy to support context parallelism with [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention). For example, we could add a different branch in `modeling_flash_attention_utils._flash_attention_forward`.
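The branching idea above could look roughly like this. This is only an illustrative sketch: the registry shape and flag name are assumptions, `ring_flash_attn_func` refers to the flash-attn-compatible entry point in zhuzilin/ring-flash-attention, and the placeholder bodies stand in for the real attention kernels.

```python
# Dispatch attention backends by name, so a ring-attention branch can be
# added next to the existing flash-attention path.
ATTN_BACKENDS = {}

def register_backend(name):
    def deco(fn):
        ATTN_BACKENDS[name] = fn
        return fn
    return deco

@register_backend("flash_attention_2")
def _flash2(q, k, v, **kw):
    # placeholder for flash_attn_func(q, k, v, ...)
    return ("flash2", q, k, v)

@register_backend("ring_flash_attention")
def _ring(q, k, v, **kw):
    # placeholder for ring_flash_attn_func(q, k, v, ...),
    # which mirrors the flash-attn API but communicates across ranks
    return ("ring", q, k, v)

def flash_attention_forward(q, k, v, impl="flash_attention_2", **kw):
    """Route to the selected attention backend."""
    return ATTN_BACKENDS[impl](q, k, v, **kw)
```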
### Your contribution
I'd love to help if you are interested :) | Feature request,Flash Attention | low | Major |
2,524,576,611 | flutter | DropdownButtonFormField in Wrap inside of SliverFillRemaining with hasScrollBody=false breaks rendering | ### Steps to reproduce
Having a DropdownButtonFormField in a Wrap inside of SliverFillRemaining with hasScrollBody=false breaks the rendering immediately.
The gist contains the smallest possible reproduction I could build.
My goal is to have a pinned SliverAppBar above the SliverFillRemaining, where the page does not scroll, when the body inside SliverFillRemaining is not bigger than the remaining space.
### Expected results
Everything should render normally.
### Actual results
Doesn't render, instead the following exception is thrown.
<details open><summary>Exception thrown when trying to render the code</summary>
```
══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞═════════════════════════════════════════════════════════
The following assertion was thrown during performLayout():
The RenderPositionedBox class does not implement "computeDryBaseline".
If you are not writing your own RenderBox subclass, then this is not
your fault. Contact support: https://github.com/flutter/flutter/issues/new?template=2_bug.yml
Widget creation tracking is currently disabled. Enabling it enables improved error messages. It can
be enabled by passing `--track-widget-creation` to `flutter run` or `flutter test`.
When the exception was thrown, this was the stack
The following RenderObject was being processed when the exception was fired: RenderSliverFillRemaining#9b775 relayoutBoundary=up1 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE:
creator: _SliverFillRemainingWithoutScrollable ← SliverFillRemaining ← Viewport ←
IgnorePointer-[GlobalKey#abbe1] ← Semantics ← Listener ← _GestureSemantics ←
RawGestureDetector-[LabeledGlobalKey<RawGestureDetectorState>#d9d01] ← Listener ← _ScrollableScope
← _ScrollSemantics-[GlobalKey#573b2] ← NotificationListener<ScrollMetricsNotification> ← ⋯
parentData: paintOffset=Offset(0.0, 0.0) (can use size)
constraints: SliverConstraints(AxisDirection.down, GrowthDirection.forward, ScrollDirection.idle,
scrollOffset: 0.0, precedingScrollExtent: 0.0, remainingPaintExtent: 859.0, crossAxisExtent:
852.0, crossAxisDirection: AxisDirection.right, viewportMainAxisExtent: 859.0,
remainingCacheExtent: 1109.0, cacheOrigin: 0.0)
geometry: null
This RenderObject had the following descendants (showing up to depth 5):
child: RenderWrap#b2322 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
child 1: RenderParagraph#433a5 NEEDS-LAYOUT NEEDS-PAINT
text: TextSpan
child 2: RenderSemanticsAnnotations#d210e NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
child: RenderSemanticsAnnotations#a34b8 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
child: RenderSemanticsAnnotations#32f0c NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
child: RenderMouseRegion#8c9f8 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
════════════════════════════════════════════════════════════════════════════════════════════════════
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Invalid argument: null
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
Another exception was thrown: Unexpected null value.
```
</details>
### Code sample
https://dartpad.dev/?id=b4251bc47c20c29443c240203a6ec8e0
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on EndeavourOS 6.10.9-zen1-2-zen, locale de_DE.UTF-8)
• Flutter version 3.24.3 on channel stable at /opt/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (vor 2 Tagen), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /home/dev/Android/Sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /home/dev/.local/share/JetBrains/Toolbox/apps/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Android Studio (version 2024.1)
• Android Studio at /home/dev/.local/share/JetBrains/Toolbox/apps/android-studio
• Flutter plugin version 81.0.2
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] Connected device (2 available)
• HD1911 (mobile) • 91e815a8 • android-arm64 • Android 14 (API 34)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,framework,f: material design,a: error message,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Critical |
2,524,615,250 | deno | LSP hangs and reaches heap limit | Version: Deno 1.46.3, 1.46.2, ...?
I've encountered this bug in a large repo during different code-editing scenarios, but it was hard to determine where the issue originates from. I've managed to create a [repro repo](https://github.com/albnnc/deno-repro-20240913), but the LSP hangs only after a particular code edit. Note that
- the file didn't get saved; the LSP got stuck right after typing the shown changes;
- the issue reproduces (almost?) every time you type the shown changes.
https://github.com/user-attachments/assets/81353b43-4cf1-4ebc-9404-d4ca73458709
<details>
<summary>Complete LS log after heap limit has been reached</summary>
```
Starting Deno language server...
version: 1.46.3 (release, aarch64-apple-darwin)
executable: /Users/albnnc/.deno/bin/deno
Connected to "Visual Studio Code" 1.93.1
Enabling import suggestions for: https://deno.land
Refreshing configuration tree...
Resolved Deno configuration file: "file:///Users/albnnc/Code/deno-repro-130924/deno.json"
Resolved .npmrc: "/Users/albnnc/.npmrc"
Resolved lockfile: "file:///Users/albnnc/Code/deno-repro-130924/deno.lock"
Server ready.
[Info - 14:13:38] Connection to server got closed. Server will restart.
Starting Deno language server...
version: 1.46.3 (release, aarch64-apple-darwin)
executable: /Users/albnnc/.deno/bin/deno
Connected to "Visual Studio Code" 1.93.1
Enabling import suggestions for: https://deno.land
Refreshing configuration tree...
Resolved Deno configuration file: "file:///Users/albnnc/Code/deno-repro-130924/deno.json"
Resolved .npmrc: "/Users/albnnc/.npmrc"
Resolved lockfile: "file:///Users/albnnc/Code/deno-repro-130924/deno.lock"
[Error - 14:13:38] Request textDocument/codeAction failed.
Message: The position is out of range.
Code: -32602
Server ready.
<--- Last few GCs --->
[46676:0x128008000] 154815 ms: Scavenge (interleaved) 9994.6 (10003.9) -> 9994.5 (10004.9) MB, pooled: 0 MB, 26.92 / 0.00 ms (average mu = 0.390, current mu = 0.239) allocation failure;
[46676:0x128008000] 156188 ms: Mark-Compact 9995.5 (10004.9) -> 9995.3 (10010.9) MB, pooled: 0 MB, 1371.21 / 0.00 ms (average mu = 0.234, current mu = 0.033) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
#
# Fatal JavaScript out of memory: Reached heap limit
#
==== C stack trace ===============================
0 deno 0x00000001018d686c v8::base::debug::StackTrace::StackTrace() + 24
1 deno 0x00000001018dc214 v8::platform::(anonymous namespace)::PrintStackTrace() + 24
2 deno 0x00000001018d36b4 v8::base::FatalOOM(v8::base::OOMType, char const*) + 68
3 deno 0x0000000101925e34 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) + 616
4 deno 0x0000000101aeea18 v8::internal::Heap::stack() + 0
5 deno 0x0000000101aecd8c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) + 916
6 deno 0x0000000101ab7540 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) + 2292
7 deno 0x0000000101ab7f2c v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) + 44
8 deno 0x0000000101a9f144 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) + 624
9 deno 0x0000000101ebf1e0 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) + 224
10 deno 0x0000000102c4f9f4 Builtins_CEntry_Return1_ArgvOnStack_NoBuiltinExit + 84
11 deno 0x0000000102bbb62c Builtins_GrowFastSmiOrObjectElements + 332
12 ??? 0x0000000106d1525c 0x0 + 4409348700
13 ??? 0x00000001072f0840 0x0 + 4415490112
14 ??? 0x000000010736ddfc 0x0 + 4416003580
15 ??? 0x0000000106d14d48 0x0 + 4409347400
16 ??? 0x00000001072f5f80 0x0 + 4415512448
17 ??? 0x000000010736ddfc 0x0 + 4416003580
18 ??? 0x0000000107331bc8 0x0 + 4415757256
19 ??? 0x0000000106d98814 0x0 + 4409886740
20 ??? 0x000000010736ddfc 0x0 + 4416003580
21 ??? 0x0000000106d98a14 0x0 + 4409887252
22 ??? 0x000000010736ddfc 0x0 + 4416003580
23 ??? 0x0000000106d98b4c 0x0 + 4409887564
24 ??? 0x000000010736ddfc 0x0 + 4416003580
25 ??? 0x0000000106d8cbe4 0x0 + 4409838564
26 ??? 0x000000010736ddfc 0x0 + 4416003580
27 ??? 0x0000000106d14d48 0x0 + 4409347400
28 ??? 0x00000001072f3160 0x0 + 4415500640
29 ??? 0x000000010736ddfc 0x0 + 4416003580
30 ??? 0x0000000106d8cbe4 0x0 + 4409838564
31 ??? 0x000000010736ddfc 0x0 + 4416003580
32 ??? 0x000000010728cc04 0x0 + 4415081476
33 ??? 0x000000010736ddfc 0x0 + 4416003580
34 ??? 0x0000000106d14d48 0x0 + 4409347400
35 ??? 0x0000000106d00340 0x0 + 4409262912
36 ??? 0x000000010736ddfc 0x0 + 4416003580
37 ??? 0x0000000106d98a14 0x0 + 4409887252
38 ??? 0x000000010736ddfc 0x0 + 4416003580
39 ??? 0x0000000106d98b4c 0x0 + 4409887564
40 ??? 0x000000010736ddfc 0x0 + 4416003580
41 ??? 0x0000000106d98b4c 0x0 + 4409887564
42 ??? 0x000000010736ddfc 0x0 + 4416003580
43 ??? 0x0000000107355ce4 0x0 + 4415904996
44 ??? 0x000000010736ddfc 0x0 + 4416003580
45 ??? 0x0000000106d14d48 0x0 + 4409347400
46 ??? 0x0000000106d00340 0x0 + 4409262912
47 ??? 0x000000010736ddfc 0x0 + 4416003580
48 ??? 0x0000000106d98b4c 0x0 + 4409887564
49 ??? 0x000000010736ddfc 0x0 + 4416003580
50 ??? 0x0000000106d98a14 0x0 + 4409887252
51 ??? 0x000000010736ddfc 0x0 + 4416003580
52 ??? 0x0000000107355ce4 0x0 + 4415904996
53 ??? 0x000000010736ddfc 0x0 + 4416003580
54 ??? 0x0000000106d14d48 0x0 + 4409347400
55 ??? 0x0000000106d00340 0x0 + 4409262912
56 ??? 0x000000010736ddfc 0x0 + 4416003580
57 ??? 0x0000000106d98a14 0x0 + 4409887252
58 ??? 0x000000010736ddfc 0x0 + 4416003580
59 ??? 0x0000000106d98a14 0x0 + 4409887252
60 ??? 0x000000010736ddfc 0x0 + 4416003580
61 ??? 0x0000000107355ce4 0x0 + 4415904996
[Info - 14:16:14] Connection to server got closed. Server will restart.
Starting Deno language server...
version: 1.46.3 (release, aarch64-apple-darwin)
executable: /Users/albnnc/.deno/bin/deno
Connected to "Visual Studio Code" 1.93.1
Enabling import suggestions for: https://deno.land
Refreshing configuration tree...
Resolved Deno configuration file: "file:///Users/albnnc/Code/deno-repro-130924/deno.json"
Resolved .npmrc: "/Users/albnnc/.npmrc"
Resolved lockfile: "file:///Users/albnnc/Code/deno-repro-130924/deno.lock"
Server ready.
```
</details> | lsp | low | Critical |
2,524,628,390 | yt-dlp | Add site support for Extreme Music | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Italy
### Example URLs
- Single album: https://www.extrememusic.com/albums/6778
- Playlist: https://www.extrememusic.com/playlists/8Appff68pKK820pAfAffKApApIrEYdS_AAffKAfUpUKU5KK8UUAUpKU4Kv7s74S
### Provide a description that is worded well enough to be understood
I would like to see site support for extrememusic.com added to `yt-dlp`. I have provided example URLs for both album and playlist URL types. Please let me know if I can provide any additional information!
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-P', '~/Music/New Album', '-o', '%(playlist_index)s - %(track)s.%(ext)s', 'https://www.extrememusic.com/albums/6778']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208]
[debug] Lazy loading extractors is disabled
[debug] Python 3.12.6 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2 (fdk,setts), ffprobe 7.0.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.2, urllib3-2.2.3, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1832 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.extrememusic.com/albums/6778
[generic] 6778: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 6778: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.extrememusic.com/albums/6778
Traceback (most recent call last):
File "/Users/myuser/.virtualenvs/yt-dlp/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser/.virtualenvs/yt-dlp/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/Users/myuser/.virtualenvs/yt-dlp/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser/.virtualenvs/yt-dlp/lib/python3.12/site-packages/yt_dlp/extractor/generic.py", line 2526, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.extrememusic.com/albums/6778
```
| site-request,triage | low | Critical |
2,524,696,270 | godot | ProjectSettings.load_resource_pack will not overwrite GDScript functions. | ### Tested versions
4.2 stable
### System information
macOS
### Issue description
I'm creating a mobile game and I'd like to implement a proper patch system. According to the documentation, `ProjectSettings.load_resource_pack` should do the trick.
So I ran an experiment: I created a simple GDScript class with one function, like below
```
class_name UtilStr
extends RefCounted

static func test_patch() -> void:
    print("original")
```
Then I prepared a slightly modified version:
```
class_name UtilStr
extends RefCounted

static func test_patch() -> void:
    print("patched version")
```
I packed a runnable app with the original version, then packed a patch containing only the modified script into user://patch.pck.
I load the patch in the first scene's `_ready()` function, like below:
```
const patch_path := "user://patch.pck"

func _ready() -> void:
    if FileAccess.file_exists(patch_path):
        if not ProjectSettings.load_resource_pack(patch_path):
            print_debug("load %s failed" % [patch_path])
            return
    UtilStr.test_patch()
```
I expected "patched version" to be printed to the console, but I got "original". It seems `ProjectSettings.load_resource_pack` doesn't overwrite already-loaded GDScript.
Is it designed to work like this? If so, how can a proper patch system be implemented for a mobile game?
### Steps to reproduce
1. Pack a runnable build of the game.
2. Modify one function in a GDScript file and pack only the changed file into patch.pck.
3. Run the game.
### Minimal reproduction project (MRP)
N/A | topic:gdscript,needs testing,topic:export | low | Critical |
2,524,722,019 | rust | Compiler error on build after "git switch master" |
I'm not entirely sure what caused this, and re-cloning the repo seems to have fixed it, but I'll give all the information I can think of.
Rust is installed via a nix flake on NixOS 24.05 (the flake is on unstable); the exact flake.nix and flake.lock are in the GitHub repo mentioned below.
I was on a different branch developing a new feature; when I switched back to master with `git switch`, the project refused to build with the error I've provided.
I have renamed the old cloned repo; I haven't uploaded it for now because I don't want to accidentally leak secrets, but I can zip it up if need be.
### Code
https://github.com/flashgnash/ordis/tree/e9a69059eafe4d12053cdcb40b46fda89c952c93
### Meta
`rustc --version --verbose`:
```
rustc 1.78.0 (9b00956e5 2024-04-29) (built from a source tarball)
binary: rustc
commit-hash: 9b00956e56009bab2aa15d7bff10916599e3d6d6
commit-date: 2024-04-29
host: x86_64-unknown-linux-gnu
release: 1.78.0
LLVM version: 18.1.7
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_middle/src/dep_graph/dep_node.rs:198:17:
Failed to extract DefId: def_kind 15bfbc7bb9f67908-3a980503a83c2f72
stack backtrace:
0: 0x7fcfb3b12a65 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h4a5853078ed6ad5e
1: 0x7fcfb3b8fbfc - core::fmt::write::hcb5164cc0001627e
2: 0x7fcfb3b117f5 - std::io::Write::write_fmt::h4f047c3956a46825
3: 0x7fcfb3b12834 - std::sys_common::backtrace::print::h33b76037fd95b2fe
4: 0x7fcfb3b28023 - std::panicking::default_hook::{{closure}}::hd7fd2e4913eae4ac
5: 0x7fcfb3b27d7a - std::panicking::default_hook::h13a9a2eb684db87c
6: 0x7fcfb44296b8 - <alloc[5cc0e8a89cb545a4]::boxed::Box<rustc_driver_impl[931ba922f877923b]::install_ice_hook::{closure#0}> as core[ac22999f73547012]::ops::function::Fn<(&dyn for<'a, 'b> core[ac22999f73547012]::ops::function::Fn<(&'a core[ac22999f73547012]::panic::panic_info::PanicInfo<'b>,), Output = ()> + core[ac22999f73547012]::marker::Send + core[ac22999f73547012]::marker::Sync, &core[ac22999f73547012]::panic::panic_info::PanicInfo)>>::call
7: 0x7fcfb3b286f8 - std::panicking::rust_panic_with_hook::hda39260d38680d2a
8: 0x7fcfb3b131b2 - std::panicking::begin_panic_handler::{{closure}}::h150d47ece314bd47
9: 0x7fcfb3b12c76 - std::sys_common::backtrace::__rust_end_short_backtrace::hf5ef31e5afaaca81
10: 0x7fcfb3b28294 - rust_begin_unwind
11: 0x7fcfb3af6765 - core::panicking::panic_fmt::h9a659e19ee4930b5
12: 0x7fcfb69db9c0 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::dep_node::DepNode as rustc_middle[7d1bc35d0b9de3f6]::dep_graph::dep_node::DepNodeExt>::extract_def_id::{closure#0}
13: 0x7fcfb6ab9dee - <rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt>::def_path_hash_to_def_id
14: 0x7fcfb69db935 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::dep_node::DepNode as rustc_middle[7d1bc35d0b9de3f6]::dep_graph::dep_node::DepNodeExt>::extract_def_id
15: 0x7fcfb59c0991 - <rustc_query_impl[6a0caddc8ed0f104]::plumbing::query_callback<rustc_query_impl[6a0caddc8ed0f104]::query_impl::def_kind::QueryType>::{closure#0} as core[ac22999f73547012]::ops::function::FnOnce<(rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, rustc_query_system[a49b8714d4c5fecc]::dep_graph::dep_node::DepNode)>>::call_once
16: 0x7fcfb595cb54 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
17: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
18: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
19: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
20: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
21: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
22: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
23: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
24: 0x7fcfb595c8e8 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
25: 0x7fcfb5bdc672 - rustc_query_system[a49b8714d4c5fecc]::query::plumbing::try_execute_query::<rustc_query_impl[6a0caddc8ed0f104]::DynamicConfig<rustc_query_system[a49b8714d4c5fecc]::query::caches::DefaultCache<rustc_type_ir[68759b59dcc07dd8]::canonical::Canonical<rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, rustc_middle[7d1bc35d0b9de3f6]::ty::ParamEnvAnd<rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate>>, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt, true>
26: 0x7fcfb5b9c46e - rustc_query_impl[6a0caddc8ed0f104]::query_impl::type_op_prove_predicate::get_query_incr::__rust_end_short_backtrace
27: 0x7fcfb5e6e6a6 - <rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate as rustc_trait_selection[dcf608d57be48396]::traits::query::type_op::QueryTypeOp>::perform_query
28: 0x7fcfb57f0e8b - <rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate as rustc_trait_selection[dcf608d57be48396]::traits::query::type_op::QueryTypeOp>::fully_perform_into
29: 0x7fcfb58049c7 - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle[7d1bc35d0b9de3f6]::ty::ParamEnvAnd<rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate>>
30: 0x7fcfb58083e8 - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
31: 0x7fcfb57fb33c - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeVerifier as rustc_middle[7d1bc35d0b9de3f6]::mir::visit::Visitor>::visit_constant
32: 0x7fcfb57fc3f4 - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeVerifier as rustc_middle[7d1bc35d0b9de3f6]::mir::visit::Visitor>::visit_body
33: 0x7fcfb57f69d9 - rustc_borrowck[666edc9f3b3acd9c]::type_check::type_check
34: 0x7fcfb57a3a01 - rustc_borrowck[666edc9f3b3acd9c]::nll::compute_regions
35: 0x7fcfb565af5d - rustc_borrowck[666edc9f3b3acd9c]::do_mir_borrowck
36: 0x7fcfb564c4af - rustc_borrowck[666edc9f3b3acd9c]::mir_borrowck
37: 0x7fcfb59f0d0c - rustc_query_impl[6a0caddc8ed0f104]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[6a0caddc8ed0f104]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 8usize]>>
38: 0x7fcfb5ae0bec - <rustc_query_impl[6a0caddc8ed0f104]::query_impl::mir_borrowck::dynamic_query::{closure#2} as core[ac22999f73547012]::ops::function::FnOnce<(rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, rustc_span[884527523b9bf0a3]::def_id::LocalDefId)>>::call_once
39: 0x7fcfb5c20cd9 - rustc_query_system[a49b8714d4c5fecc]::query::plumbing::try_execute_query::<rustc_query_impl[6a0caddc8ed0f104]::DynamicConfig<rustc_query_system[a49b8714d4c5fecc]::query::caches::VecCache<rustc_span[884527523b9bf0a3]::def_id::LocalDefId, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt, true>
40: 0x7fcfb5af34dc - rustc_query_impl[6a0caddc8ed0f104]::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
41: 0x7fcfb461291e - std[c2020301f641288b]::panicking::try::<(), core[ac22999f73547012]::panic::unwind_safe::AssertUnwindSafe<rustc_data_structures[57985108c29413d2]::sync::parallel::disabled::par_for_each_in<&[rustc_span[884527523b9bf0a3]::def_id::LocalDefId], <rustc_middle[7d1bc35d0b9de3f6]::hir::map::Map>::par_body_owners<rustc_interface[8e1741ee6322df64]::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>>
42: 0x7fcfb462388b - rustc_data_structures[57985108c29413d2]::sync::parallel::disabled::par_for_each_in::<&[rustc_span[884527523b9bf0a3]::def_id::LocalDefId], <rustc_middle[7d1bc35d0b9de3f6]::hir::map::Map>::par_body_owners<rustc_interface[8e1741ee6322df64]::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>
43: 0x7fcfb460d156 - <rustc_session[526f1292888eebe7]::session::Session>::time::<(), rustc_interface[8e1741ee6322df64]::passes::analysis::{closure#1}>
44: 0x7fcfb45dc524 - rustc_interface[8e1741ee6322df64]::passes::analysis
45: 0x7fcfb59f442a - rustc_query_impl[6a0caddc8ed0f104]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[6a0caddc8ed0f104]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 1usize]>>
46: 0x7fcfb59a68c8 - <rustc_query_impl[6a0caddc8ed0f104]::query_impl::analysis::dynamic_query::{closure#2} as core[ac22999f73547012]::ops::function::FnOnce<(rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, ())>>::call_once
47: 0x7fcfb5bcd417 - rustc_query_system[a49b8714d4c5fecc]::query::plumbing::try_execute_query::<rustc_query_impl[6a0caddc8ed0f104]::DynamicConfig<rustc_query_system[a49b8714d4c5fecc]::query::caches::SingleCache<rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt, true>
48: 0x7fcfb5aeda4e - rustc_query_impl[6a0caddc8ed0f104]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
49: 0x7fcfb43f9b71 - <rustc_middle[7d1bc35d0b9de3f6]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}::{closure#1}::{closure#3}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>
50: 0x7fcfb43f6d50 - rustc_span[884527523b9bf0a3]::create_session_globals_then::<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_interface[8e1741ee6322df64]::interface::run_compiler<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}>::{closure#0}>
51: 0x7fcfb443b6b0 - std[c2020301f641288b]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[8e1741ee6322df64]::util::run_in_thread_with_globals<rustc_interface[8e1741ee6322df64]::interface::run_compiler<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}>::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>
52: 0x7fcfb44178c8 - <<std[c2020301f641288b]::thread::Builder>::spawn_unchecked_<rustc_interface[8e1741ee6322df64]::util::run_in_thread_with_globals<rustc_interface[8e1741ee6322df64]::interface::run_compiler<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}>::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>::{closure#1} as core[ac22999f73547012]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
53: 0x7fcfb3b497e5 - std::sys::pal::unix::thread::Thread::new::thread_start::h0e92b076ec0ac240
54: 0x7fcfb3951272 - start_thread
55: 0x7fcfb39ccdec - __GI___clone3
56: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.78.0 (9b00956e5 2024-04-29) (built from a source tarball) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<diesel::query_builder::select_statement::SelectStatement<diesel::query_builder::from_clause::FromClause<db::schema::users::table>, diesel::query_builder::select_clause::DefaultSelectClause<diesel::query_builder::from_clause::FromClause<db::schema::users::table>>, diesel::query_builder::distinct_clause::NoDistinctClause, diesel::query_builder::where_clause::WhereClause<diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db::schema::users::columns::id, diesel::expression::bound::Bound<diesel::sql_types::Text, &alloc::string::String>>>>> as diesel::query_builder::update_statement::target::IntoUpdateTarget>, polarity:Positive), bound_vars: [] } }`
#1 [mir_borrowck] borrow-checking `db::users::update`
#2 [analysis] running analysis passes on this crate
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 representability(3b4ce1e0f02f78db-243ddabf0b0fed5c)
#1 adt_sized_constraint(thread 'rustc' panicked at compiler/rustc_middle/src/dep_graph/dep_node.rs:198:17:
Failed to extract DefId: adt_sized_constraint 15bfbc7bb9f67908-3a980503a83c2f72
stack backtrace:
0: 0x7fcfb3b12a65 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h4a5853078ed6ad5e
1: 0x7fcfb3b8fbfc - core::fmt::write::hcb5164cc0001627e
2: 0x7fcfb3b117f5 - std::io::Write::write_fmt::h4f047c3956a46825
3: 0x7fcfb3b12834 - std::sys_common::backtrace::print::h33b76037fd95b2fe
4: 0x7fcfb3b28023 - std::panicking::default_hook::{{closure}}::hd7fd2e4913eae4ac
5: 0x7fcfb3b27d7a - std::panicking::default_hook::h13a9a2eb684db87c
6: 0x7fcfb44296b8 - <alloc[5cc0e8a89cb545a4]::boxed::Box<rustc_driver_impl[931ba922f877923b]::install_ice_hook::{closure#0}> as core[ac22999f73547012]::ops::function::Fn<(&dyn for<'a, 'b> core[ac22999f73547012]::ops::function::Fn<(&'a core[ac22999f73547012]::panic::panic_info::PanicInfo<'b>,), Output = ()> + core[ac22999f73547012]::marker::Send + core[ac22999f73547012]::marker::Sync, &core[ac22999f73547012]::panic::panic_info::PanicInfo)>>::call
7: 0x7fcfb3b286f8 - std::panicking::rust_panic_with_hook::hda39260d38680d2a
8: 0x7fcfb3b131b2 - std::panicking::begin_panic_handler::{{closure}}::h150d47ece314bd47
9: 0x7fcfb3b12c76 - std::sys_common::backtrace::__rust_end_short_backtrace::hf5ef31e5afaaca81
10: 0x7fcfb3b28294 - rust_begin_unwind
11: 0x7fcfb3af6765 - core::panicking::panic_fmt::h9a659e19ee4930b5
12: 0x7fcfb69db9c0 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::dep_node::DepNode as rustc_middle[7d1bc35d0b9de3f6]::dep_graph::dep_node::DepNodeExt>::extract_def_id::{closure#0}
13: 0x7fcfb6ab9dee - <rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt>::def_path_hash_to_def_id
14: 0x7fcfb69db935 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::dep_node::DepNode as rustc_middle[7d1bc35d0b9de3f6]::dep_graph::dep_node::DepNodeExt>::extract_def_id
15: 0x7fcfb4664926 - rustc_interface[8e1741ee6322df64]::callbacks::dep_node_debug
16: 0x7fcfb6b14266 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::dep_node::DepNode as core[ac22999f73547012]::fmt::Debug>::fmt
17: 0x7fcfb3b8fbfc - core::fmt::write::hcb5164cc0001627e
18: 0x7fcfb3b21806 - <&std::io::stdio::Stderr as std::io::Write>::write_fmt::h2688b7b88d9231b9
19: 0x7fcfb3b2213a - std::io::stdio::_eprint::h956e1cc5820419a2
20: 0x7fcfb42944b4 - rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::print_markframe_trace::<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>
21: 0x7fcfb595d1d8 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
22: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
23: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
24: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
25: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
26: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
27: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
28: 0x7fcfb595cbb4 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
29: 0x7fcfb595c8e8 - <rustc_query_system[a49b8714d4c5fecc]::dep_graph::graph::DepGraphData<rustc_middle[7d1bc35d0b9de3f6]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt>
30: 0x7fcfb5bdc672 - rustc_query_system[a49b8714d4c5fecc]::query::plumbing::try_execute_query::<rustc_query_impl[6a0caddc8ed0f104]::DynamicConfig<rustc_query_system[a49b8714d4c5fecc]::query::caches::DefaultCache<rustc_type_ir[68759b59dcc07dd8]::canonical::Canonical<rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, rustc_middle[7d1bc35d0b9de3f6]::ty::ParamEnvAnd<rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate>>, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt, true>
31: 0x7fcfb5b9c46e - rustc_query_impl[6a0caddc8ed0f104]::query_impl::type_op_prove_predicate::get_query_incr::__rust_end_short_backtrace
32: 0x7fcfb5e6e6a6 - <rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate as rustc_trait_selection[dcf608d57be48396]::traits::query::type_op::QueryTypeOp>::perform_query
33: 0x7fcfb57f0e8b - <rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate as rustc_trait_selection[dcf608d57be48396]::traits::query::type_op::QueryTypeOp>::fully_perform_into
34: 0x7fcfb58049c7 - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle[7d1bc35d0b9de3f6]::ty::ParamEnvAnd<rustc_middle[7d1bc35d0b9de3f6]::traits::query::type_op::ProvePredicate>>
35: 0x7fcfb58083e8 - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
36: 0x7fcfb57fb33c - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeVerifier as rustc_middle[7d1bc35d0b9de3f6]::mir::visit::Visitor>::visit_constant
37: 0x7fcfb57fc3f4 - <rustc_borrowck[666edc9f3b3acd9c]::type_check::TypeVerifier as rustc_middle[7d1bc35d0b9de3f6]::mir::visit::Visitor>::visit_body
38: 0x7fcfb57f69d9 - rustc_borrowck[666edc9f3b3acd9c]::type_check::type_check
39: 0x7fcfb57a3a01 - rustc_borrowck[666edc9f3b3acd9c]::nll::compute_regions
40: 0x7fcfb565af5d - rustc_borrowck[666edc9f3b3acd9c]::do_mir_borrowck
41: 0x7fcfb564c4af - rustc_borrowck[666edc9f3b3acd9c]::mir_borrowck
42: 0x7fcfb59f0d0c - rustc_query_impl[6a0caddc8ed0f104]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[6a0caddc8ed0f104]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 8usize]>>
43: 0x7fcfb5ae0bec - <rustc_query_impl[6a0caddc8ed0f104]::query_impl::mir_borrowck::dynamic_query::{closure#2} as core[ac22999f73547012]::ops::function::FnOnce<(rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, rustc_span[884527523b9bf0a3]::def_id::LocalDefId)>>::call_once
44: 0x7fcfb5c20cd9 - rustc_query_system[a49b8714d4c5fecc]::query::plumbing::try_execute_query::<rustc_query_impl[6a0caddc8ed0f104]::DynamicConfig<rustc_query_system[a49b8714d4c5fecc]::query::caches::VecCache<rustc_span[884527523b9bf0a3]::def_id::LocalDefId, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt, true>
45: 0x7fcfb5af34dc - rustc_query_impl[6a0caddc8ed0f104]::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
46: 0x7fcfb461291e - std[c2020301f641288b]::panicking::try::<(), core[ac22999f73547012]::panic::unwind_safe::AssertUnwindSafe<rustc_data_structures[57985108c29413d2]::sync::parallel::disabled::par_for_each_in<&[rustc_span[884527523b9bf0a3]::def_id::LocalDefId], <rustc_middle[7d1bc35d0b9de3f6]::hir::map::Map>::par_body_owners<rustc_interface[8e1741ee6322df64]::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>>
47: 0x7fcfb462388b - rustc_data_structures[57985108c29413d2]::sync::parallel::disabled::par_for_each_in::<&[rustc_span[884527523b9bf0a3]::def_id::LocalDefId], <rustc_middle[7d1bc35d0b9de3f6]::hir::map::Map>::par_body_owners<rustc_interface[8e1741ee6322df64]::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>
48: 0x7fcfb460d156 - <rustc_session[526f1292888eebe7]::session::Session>::time::<(), rustc_interface[8e1741ee6322df64]::passes::analysis::{closure#1}>
49: 0x7fcfb45dc524 - rustc_interface[8e1741ee6322df64]::passes::analysis
50: 0x7fcfb59f442a - rustc_query_impl[6a0caddc8ed0f104]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[6a0caddc8ed0f104]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 1usize]>>
51: 0x7fcfb59a68c8 - <rustc_query_impl[6a0caddc8ed0f104]::query_impl::analysis::dynamic_query::{closure#2} as core[ac22999f73547012]::ops::function::FnOnce<(rustc_middle[7d1bc35d0b9de3f6]::ty::context::TyCtxt, ())>>::call_once
52: 0x7fcfb5bcd417 - rustc_query_system[a49b8714d4c5fecc]::query::plumbing::try_execute_query::<rustc_query_impl[6a0caddc8ed0f104]::DynamicConfig<rustc_query_system[a49b8714d4c5fecc]::query::caches::SingleCache<rustc_middle[7d1bc35d0b9de3f6]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[6a0caddc8ed0f104]::plumbing::QueryCtxt, true>
53: 0x7fcfb5aeda4e - rustc_query_impl[6a0caddc8ed0f104]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
54: 0x7fcfb43f9b71 - <rustc_middle[7d1bc35d0b9de3f6]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}::{closure#1}::{closure#3}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>
55: 0x7fcfb43f6d50 - rustc_span[884527523b9bf0a3]::create_session_globals_then::<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_interface[8e1741ee6322df64]::interface::run_compiler<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}>::{closure#0}>
56: 0x7fcfb443b6b0 - std[c2020301f641288b]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[8e1741ee6322df64]::util::run_in_thread_with_globals<rustc_interface[8e1741ee6322df64]::interface::run_compiler<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}>::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>
57: 0x7fcfb44178c8 - <<std[c2020301f641288b]::thread::Builder>::spawn_unchecked_<rustc_interface[8e1741ee6322df64]::util::run_in_thread_with_globals<rustc_interface[8e1741ee6322df64]::interface::run_compiler<core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>, rustc_driver_impl[931ba922f877923b]::run_compiler::{closure#0}>::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ac22999f73547012]::result::Result<(), rustc_span[884527523b9bf0a3]::ErrorGuaranteed>>::{closure#1} as core[ac22999f73547012]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
58: 0x7fcfb3b497e5 - std::sys::pal::unix::thread::Thread::new::thread_start::h0e92b076ec0ac240
59: 0x7fcfb3951272 - start_thread
60: 0x7fcfb39ccdec - __GI___clone3
61: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.78.0 (9b00956e5 2024-04-29) (built from a source tarball) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<diesel::query_builder::select_statement::SelectStatement<diesel::query_builder::from_clause::FromClause<db::schema::users::table>, diesel::query_builder::select_clause::DefaultSelectClause<diesel::query_builder::from_clause::FromClause<db::schema::users::table>>, diesel::query_builder::distinct_clause::NoDistinctClause, diesel::query_builder::where_clause::WhereClause<diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db::schema::users::columns::id, diesel::expression::bound::Bound<diesel::sql_types::Text, &alloc::string::String>>>>> as diesel::query_builder::update_statement::target::IntoUpdateTarget>, polarity:Positive), bound_vars: [] } }`
#1 [mir_borrowck] borrow-checking `db::users::update`
#2 [analysis] running analysis passes on this crate
end of query stack
warning: `ordis` (bin "ordis") generated 7 warnings (run `cargo fix --bin "ordis"` to apply 2 suggestions)
error: could not compile `ordis` (bin "ordis"); 7 warnings emitted
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
nix-shell-env
Ordis <master>
❯ cargo run > trace.txt
Compiling ordis v0.2.0 (/home/flashgnash/Source/rust/Ordis)
warning: variant `user` should have an upper camel case name
--> src/gpt.rs:23:5
|
23 | user,
| ^^^^ help: convert the identifier to upper camel case (notice the capitalization): `User`
|
= note: `#[warn(non_camel_case_types)]` on by default
warning: variant `assistant` should have an upper camel case name
--> src/gpt.rs:24:5
|
24 | assistant,
| ^^^^^^^^^ help: convert the identifier to upper camel case: `Assistant`
warning: variant `system` should have an upper camel case name
--> src/gpt.rs:25:5
|
25 | system
| ^^^^^^ help: convert the identifier to upper camel case (notice the capitalization): `System`
warning: unused variable: `user_name`
--> src/main.rs:188:9
|
188 | let user_name = &author.name;
| ^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_user_name`
|
= note: `#[warn(unused_variables)]` on by default
warning: variable does not need to be mutable
--> src/common.rs:14:9
|
14 | let mut message = ctx.http.get_message(channel_id, message_id).await?;
| ----^^^^^^^
| |
| help: remove this `mut`
|
= note: `#[warn(unused_mut)]` on by default
warning: unused variable: `stat_block_thinking_message`
--> src/stat_puller.rs:195:9
|
195 | let stat_block_thinking_message = CreateReply::default()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_stat_block_thinking_message`
warning: variable does not need to be mutable
--> src/gpt.rs:150:9
|
150 | let mut model = "gpt-4o-mini";
| ----^^^^^
| |
| help: remove this `mut`
thread 'rustc' panicked at compiler/rustc_middle/src/dep_graph/dep_node.rs:198:17:
Failed to extract DefId: def_kind 15bfbc7bb9f67908-3a980503a83c2f72
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: <rustc_query_system::dep_graph::dep_node::DepNode as rustc_middle::dep_graph::dep_node::DepNodeExt>::extract_def_id::{closure#0}
3: <rustc_middle::ty::context::TyCtxt>::def_path_hash_to_def_id
4: <rustc_query_system::dep_graph::dep_node::DepNode as rustc_middle::dep_graph::dep_node::DepNodeExt>::extract_def_id
5: <rustc_query_impl::plumbing::query_callback<rustc_query_impl::query_impl::def_kind::QueryType>::{closure#0} as core::ops::function::FnOnce<(rustc_middle::ty::context::TyCtxt, rustc_query_system::dep_graph::dep_node::DepNode)>>::call_once
6: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
7: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
8: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
9: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
10: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
11: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
12: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
13: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
14: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl::plumbing::QueryCtxt>
15: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefaultCache<rustc_type_ir::canonical::Canonical<rustc_middle::ty::context::TyCtxt, rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
16: <rustc_middle::traits::query::type_op::ProvePredicate as rustc_trait_selection::traits::query::type_op::QueryTypeOp>::perform_query
17: <rustc_middle::traits::query::type_op::ProvePredicate as rustc_trait_selection::traits::query::type_op::QueryTypeOp>::fully_perform_into
18: <rustc_borrowck::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>
19: <rustc_borrowck::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
20: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_constant
21: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_body
22: rustc_borrowck::type_check::type_check
23: rustc_borrowck::nll::compute_regions
24: rustc_borrowck::do_mir_borrowck
25: rustc_borrowck::mir_borrowck
[... omitted 2 frames ...]
26: std::panicking::try::<(), core::panic::unwind_safe::AssertUnwindSafe<rustc_data_structures::sync::parallel::disabled::par_for_each_in<&[rustc_span::def_id::LocalDefId], <rustc_middle::hir::map::Map>::par_body_owners<rustc_interface::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>>
27: rustc_data_structures::sync::parallel::disabled::par_for_each_in::<&[rustc_span::def_id::LocalDefId], <rustc_middle::hir::map::Map>::par_body_owners<rustc_interface::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>
28: <rustc_session::session::Session>::time::<(), rustc_interface::passes::analysis::{closure#1}>
29: rustc_interface::passes::analysis
[... omitted 2 frames ...]
30: <rustc_middle::ty::context::GlobalCtxt>::enter::<rustc_driver_impl::run_compiler::{closure#0}::{closure#1}::{closure#3}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
31: rustc_span::create_session_globals_then::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#0}>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.78.0 (9b00956e5 2024-04-29) (built from a source tarball) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<diesel::query_builder::select_statement::SelectStatement<diesel::query_builder::from_clause::FromClause<db::schema::users::table>, diesel::query_builder::select_clause::DefaultSelectClause<diesel::query_builder::from_clause::FromClause<db::schema::users::table>>, diesel::query_builder::distinct_clause::NoDistinctClause, diesel::query_builder::where_clause::WhereClause<diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db::schema::users::columns::id, diesel::expression::bound::Bound<diesel::sql_types::Text, &alloc::string::String>>>>> as diesel::query_builder::update_statement::target::IntoUpdateTarget>, polarity:Positive), bound_vars: [] } }`
#1 [mir_borrowck] borrow-checking `db::users::update`
#2 [analysis] running analysis passes on this crate
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 representability(3b4ce1e0f02f78db-243ddabf0b0fed5c)
#1 adt_sized_constraint(thread 'rustc' panicked at compiler/rustc_middle/src/dep_graph/dep_node.rs:198:17:
Failed to extract DefId: adt_sized_constraint 15bfbc7bb9f67908-3a980503a83c2f72
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: <rustc_query_system::dep_graph::dep_node::DepNode as rustc_middle::dep_graph::dep_node::DepNodeExt>::extract_def_id::{closure#0}
3: <rustc_middle::ty::context::TyCtxt>::def_path_hash_to_def_id
4: <rustc_query_system::dep_graph::dep_node::DepNode as rustc_middle::dep_graph::dep_node::DepNodeExt>::extract_def_id
5: rustc_interface::callbacks::dep_node_debug
6: <rustc_query_system::dep_graph::dep_node::DepNode as core::fmt::Debug>::fmt
7: core::fmt::write
8: <&std::io::stdio::Stderr as std::io::Write>::write_fmt
9: std::io::stdio::_eprint
10: rustc_query_system::dep_graph::graph::print_markframe_trace::<rustc_middle::dep_graph::DepsType>
11: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
12: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
13: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
14: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
15: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
16: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
17: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
18: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl::plumbing::QueryCtxt>
19: <rustc_query_system::dep_graph::graph::DepGraphData<rustc_middle::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl::plumbing::QueryCtxt>
20: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::DefaultCache<rustc_type_ir::canonical::Canonical<rustc_middle::ty::context::TyCtxt, rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
21: <rustc_middle::traits::query::type_op::ProvePredicate as rustc_trait_selection::traits::query::type_op::QueryTypeOp>::perform_query
22: <rustc_middle::traits::query::type_op::ProvePredicate as rustc_trait_selection::traits::query::type_op::QueryTypeOp>::fully_perform_into
23: <rustc_borrowck::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>
24: <rustc_borrowck::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
25: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_constant
26: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_body
27: rustc_borrowck::type_check::type_check
28: rustc_borrowck::nll::compute_regions
29: rustc_borrowck::do_mir_borrowck
30: rustc_borrowck::mir_borrowck
[... omitted 2 frames ...]
31: std::panicking::try::<(), core::panic::unwind_safe::AssertUnwindSafe<rustc_data_structures::sync::parallel::disabled::par_for_each_in<&[rustc_span::def_id::LocalDefId], <rustc_middle::hir::map::Map>::par_body_owners<rustc_interface::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>>
32: rustc_data_structures::sync::parallel::disabled::par_for_each_in::<&[rustc_span::def_id::LocalDefId], <rustc_middle::hir::map::Map>::par_body_owners<rustc_interface::passes::analysis::{closure#1}::{closure#0}>::{closure#0}>
33: <rustc_session::session::Session>::time::<(), rustc_interface::passes::analysis::{closure#1}>
34: rustc_interface::passes::analysis
[... omitted 2 frames ...]
35: <rustc_middle::ty::context::GlobalCtxt>::enter::<rustc_driver_impl::run_compiler::{closure#0}::{closure#1}::{closure#3}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
36: rustc_span::create_session_globals_then::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#0}>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.78.0 (9b00956e5 2024-04-29) (built from a source tarball) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<diesel::query_builder::select_statement::SelectStatement<diesel::query_builder::from_clause::FromClause<db::schema::users::table>, diesel::query_builder::select_clause::DefaultSelectClause<diesel::query_builder::from_clause::FromClause<db::schema::users::table>>, diesel::query_builder::distinct_clause::NoDistinctClause, diesel::query_builder::where_clause::WhereClause<diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db::schema::users::columns::id, diesel::expression::bound::Bound<diesel::sql_types::Text, &alloc::string::String>>>>> as diesel::query_builder::update_statement::target::IntoUpdateTarget>, polarity:Positive), bound_vars: [] } }`
#1 [mir_borrowck] borrow-checking `db::users::update`
#2 [analysis] running analysis passes on this crate
end of query stack
warning: `ordis` (bin "ordis") generated 7 warnings (run `cargo fix --bin "ordis"` to apply 2 suggestions)
error: could not compile `ordis` (bin "ordis"); 7 warnings emitted
```
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,524,731,467 | pytorch | When calculating the loss, the input data does not contain NaN, but the output contains NaN. | ### 🐛 Describe the bug
Please specify `cuda:0` at the very beginning.
```python
import torch
import numpy as np
import os
if "CONTEXT_DEVICE_TARGET" in os.environ and os.environ['CONTEXT_DEVICE_TARGET'] == 'GPU':
devices = os.environ['CUDA_VISIBLE_DEVICES'].split(",")
device = devices[-2]
final_device = "cuda:" + device
else:
final_device = 'cpu'
def loss_yolo_torch():
from yolov4.yolov4_pytorch import yolov4loss_torch
return yolov4loss_torch()
y_true_0 = np.load('./yolo_out[0][0].npy')
yolo_out1 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[0][1].npy')
yolo_out2 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[0][2].npy')
yolo_out3 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[1][0].npy')
yolo_out4 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[1][1].npy')
yolo_out5 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[1][2].npy')
yolo_out6 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[2][0].npy')
yolo_out7 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[2][1].npy')
yolo_out8 = torch.from_numpy(y_true_0).to(final_device)
y_true_0 = np.load('./yolo_out[2][2].npy')
yolo_out9 = torch.from_numpy(y_true_0).to(final_device)
yolo_out = ((yolo_out1,yolo_out2,yolo_out3),(yolo_out4,yolo_out5,yolo_out6),(yolo_out7,yolo_out8,yolo_out9))
y_true_0 = np.load('./y_true_0.npy')
y_true_0 = torch.from_numpy(y_true_0).to(final_device)
y_true_1 = np.load('./y_true_1.npy')
y_true_1 = torch.from_numpy(y_true_1).to(final_device)
y_true_2 = np.load('./y_true_2.npy')
y_true_2 = torch.from_numpy(y_true_2).to(final_device)
gt_0 = np.load('./gt_0.npy')
gt_0 = torch.from_numpy(gt_0).to(final_device)
gt_1 = np.load('./gt_1.npy')
gt_1 = torch.from_numpy(gt_1).to(final_device)
gt_2 = np.load('./gt_2.npy')
gt_2 = torch.from_numpy(gt_2).to(final_device)
input_shape_t = np.load('./input_shape_t.npy')
input_shape_t = torch.from_numpy(input_shape_t).to(final_device)
loss_torch = loss_yolo_torch()
loss_torch_result = loss_torch(yolo_out, y_true_0, y_true_1, y_true_2, gt_0, gt_1, gt_2, input_shape_t)
yolo_out1 = torch.isnan(yolo_out1).any()
print('yolo_out1;',yolo_out1)
yolo_out2 = torch.isnan(yolo_out2).any()
print('yolo_out2;',yolo_out2)
yolo_out3 = torch.isnan(yolo_out3).any()
print('yolo_out3;',yolo_out3)
yolo_out4 = torch.isnan(yolo_out4).any()
print('yolo_out4;',yolo_out4)
yolo_out5 = torch.isnan(yolo_out5).any()
print('yolo_out5;',yolo_out5)
yolo_out6 = torch.isnan(yolo_out6).any()
print('yolo_out6;',yolo_out6)
yolo_out7 = torch.isnan(yolo_out7).any()
print('yolo_out7;',yolo_out7)
yolo_out8 = torch.isnan(yolo_out8).any()
print('yolo_out8;',yolo_out8)
yolo_out9 = torch.isnan(yolo_out9).any()
print('yolo_out9;',yolo_out9)
y_true_0 = torch.isnan(y_true_0).any()
print('y_true_0;',y_true_0)
y_true_1 = torch.isnan(y_true_1).any()
print('y_true_1;',y_true_1)
y_true_2 = torch.isnan(y_true_2).any()
print('y_true_2;',y_true_2)
gt_0 = torch.isnan(gt_0).any()
print('gt_0;',gt_0)
gt_1 = torch.isnan(gt_1).any()
print('gt_1;',gt_1)
gt_2 = torch.isnan(gt_2).any()
print('gt_2;',gt_2)
input_shape_t = torch.isnan(input_shape_t).any()
print('input_shape_t;',input_shape_t)
loss_torch_result = torch.isnan(loss_torch_result).any()
print('loss_torch_result;',loss_torch_result)
```

### Versions
Code and data links:https://drive.google.com/file/d/1MT_0Tq1koITqg9pnn-po1zVE8bDCxWvj/view?usp=sharing | triaged,module: NaNs and Infs | low | Critical |
2,524,836,488 | godot | Multiply blend darker in Mobile renderer. | ### Tested versions
- Reproducible in: 4.0.stable, v4.0.4.stable, 4.1.stable, 4.1.4.stable, 4.2.stable, 4.3.stable, 4.4.dev2
### System information
Godot v4.1.4.stable - Nobara Linux 40 (KDE Plasma) - X11 - Vulkan (Mobile) - dedicated AMD Radeon RX 6600 XT (RADV NAVI23) () - AMD Ryzen 7 3700X 8-Core Processor (16 Threads)
### Issue description
I use Forward+ for my project, and I use a gradient texture set to Multiply to fake shadows. It works as expected in Forward+, but upon compiling for mobile I noticed the shadows looked off.
The only possible workaround is using Compatibility for mobile projects, but that doesn't allow using the features provided by the Mobile renderer.
v4.0.stable: *(screenshot omitted)*
v4.3.stable: *(screenshot omitted)*
v4.4.dev2: *(screenshot omitted)*
I also tried removing the gradient, since I thought it might be related to the textures, but the result is still dark. *(screenshot omitted)*
Disabling ambient light, fog, depth draw and shadows doesn't change anything either.
This is how it looks in Forward+ and Compatibility, using v4.4.dev2: *(screenshots omitted)*
Let me know if I missed anything or made any mistakes as this is my first issue, thank you.
### Steps to reproduce
Create a new MeshInstance3D.
Give it a mesh, Quad or Plane recommended.
Give it a new StandardMaterial3D or ORMMaterial3D.
Set Blend Mode to Multiply.
Set Shading Mode to Unshaded.
Switch renderer to Mobile and compare to Forward+ and Compatibility. It will appear darker in Mobile.
You can also reproduce it using a very basic ShaderMaterial with the following code
```glsl
shader_type spatial;
render_mode blend_mul, unshaded;
uniform vec4 albedo : source_color;
uniform sampler2D texture_albedo : source_color, filter_linear_mipmap, repeat_enable;
void fragment() {
vec4 albedo_tex = texture(texture_albedo, UV);
ALBEDO = albedo.rgb * albedo_tex.rgb;
}
```
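For reference, the arithmetic `blend_mul` is expected to perform is a per-channel product of source and destination (a plain sketch of the formula, independent of any renderer): a pure-white multiply layer should be a no-op, so any backend that darkens under it deviates from the formula.

```python
# Reference multiply blend: out = src * dst per channel, values in [0, 1].
def multiply_blend(src, dst):
    return tuple(s * d for s, d in zip(src, dst))

backbuffer = (0.6, 0.5, 0.4)
white = (1.0, 1.0, 1.0)   # a white multiply layer must leave dst unchanged
gray = (0.5, 0.5, 0.5)    # 50% gray must exactly halve dst

assert multiply_blend(white, backbuffer) == backbuffer
assert multiply_blend(gray, backbuffer) == (0.3, 0.25, 0.2)
```

This makes the bug easy to isolate: with the albedo forced to white, Mobile should match Forward+ and Compatibility exactly.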
### Minimal reproduction project (MRP)
Open node_3d.tscn
[BlendTest.zip](https://github.com/user-attachments/files/16994269/BlendTest.zip)
| topic:rendering,documentation,topic:3d | low | Minor |
2,524,922,649 | PowerToys | Mouse Without Borders advanced multi-monitor non-grid layout control | ### Description of the new feature / enhancement
Mouse Without Borders should be able to support non-grid layouts of multiple monitors between hosts. Users may not have grid-like layouts of their actual monitors.
### Scenario when this would be used?
I have the following [display set up](https://imgur.com/a/98YmSzX)
- Orange = Laptop monitors (Windows11)
- Blue = Computer monitors (Windows10)
I have Mouse Without Borders (MWB) set up and it works, but getting from monitor to monitor is not working the way I want.
I want the following navigation points
- Laptop Orange 2 right side connects to the left of Computer monitor 2.
- Laptop Orange 2 top connects to the bottom of Computer monitor 1.
But this does not work. Currently it works as follows.
- Laptop Orange 2 right side connects to left of Computer monitor 1.
- Laptop Orange 1 left side connects to right of Computer monitor 2.
There should be a way to lay out every MONITOR of each host. The current grid layout only works well when each host has only one monitor, or when each host's monitors are all side by side or all stacked above or below each other. In the real world most people do not have this layout.
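One way to express such non-grid topologies (a hypothetical data model, not MWB's actual settings format) is a per-edge adjacency map between individual monitors instead of a per-host grid:

```python
# Hypothetical per-monitor adjacency map: each entry maps
# (monitor, screen edge) -> (destination monitor, entry edge).
edges = {
    ("laptop:2", "right"): ("computer:2", "left"),
    ("laptop:2", "top"): ("computer:1", "bottom"),
}

def next_monitor(monitor, edge):
    """Where the pointer goes when it leaves `monitor` via `edge`."""
    return edges.get((monitor, edge))

assert next_monitor("laptop:2", "right") == ("computer:2", "left")
assert next_monitor("laptop:1", "left") is None  # no crossing configured
```

With a model like this, the two navigation points described above are just two entries, and edges that should not exist are simply absent.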
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,524,924,975 | PowerToys | Allow windows to snap to fancy layouts when moving windows using Alt+Space+M | ### Description of the new feature / enhancement
Natively in Windows, Alt+Space opens the right-click dropdown menu in the top left corner of the currently active window. You can then use those menu items to resize or move a window entirely from the keyboard. It would be nice if PowerToys recognized this action and allowed you to drop windows into FancyZones layouts while manipulating a window strictly using the keyboard.
### Scenario when this would be used?
This would be used any time when a user has a fancy layout but is not using the mouse or keypad for some reason.
### Supporting information
NA | Needs-Triage | low | Minor |
2,524,925,711 | godot | Godot 3.5/3.6 Freezes with GLES3 backend on AMD GPU under Ubuntu | ### Tested versions
Reproducible in
- 3.6.stable.mono
- 3.5.stable.mono
- 3.5.stable
### System information
Xubuntu 24.04.1 LTS - Godot 3.6.stable.mono - OpenGL ES 3.0 Renderer: AMD Radeon Graphics (radeonsi, gfx1103_r1, LLVM 17.0.6, DRM 3.57, 6.8.0-40-generic)
### Issue description
When using the Godot editor or games that run on the GLES3 backend, there are random freezes of the whole system. These freezes last 3-4 seconds and seem to be limited to graphics, because after resuming, all actions performed during the freeze are executed (like unminimizing a window).
Here is the syslog from when that occurs:
```
2024-09-13T15:31:05.662265+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: in page starting at address 0x0000000000000000 from client 10
2024-09-13T15:31:05.662266+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00301430
2024-09-13T15:31:05.662266+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: Faulty UTCL2 client ID: SQC (data) (0xa)
2024-09-13T15:31:05.662267+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: MORE_FAULTS: 0x0
2024-09-13T15:31:05.662268+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: WALKER_ERROR: 0x0
2024-09-13T15:31:05.662268+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: PERMISSION_FAULTS: 0x3
2024-09-13T15:31:05.662269+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: MAPPING_ERROR: 0x0
2024-09-13T15:31:05.662270+02:00 darkbook kernel: amdgpu 0000:c1:00.0: amdgpu: RW: 0x0
2024-09-13T15:31:16.122394+02:00 darkbook kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, but soft recovered
```
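To correlate freezes with GPU faults over a longer session, the relevant kernel lines can be filtered out of a syslog dump with a small helper (an assumption-laden sketch: it expects plain-text syslog/journalctl lines like those quoted above):

```python
# Keep only amdgpu fault/timeout lines from a kernel log dump.
MARKERS = ("GCVM_L2_PROTECTION_FAULT", "amdgpu_job_timedout")

def gpu_faults(lines):
    return [line for line in lines if any(m in line for m in MARKERS)]

sample = [
    "kernel: amdgpu 0000:c1:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00301430",
    "kernel: usb 1-2: new device found",
    "kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, but soft recovered",
]
faults = gpu_faults(sample)
assert len(faults) == 2
```

Timestamps on the matching lines can then be compared with the times the editor froze.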
Here are the different backends I tested:
- ❗️ OpenGL ES 3.0 Renderer: AMD Radeon Graphics (radeonsi, gfx1103_r1, LLVM 17.0.6, DRM 3.57, 6.8.0-40-generic)
- ✅ OpenGL ES 2.0 Renderer: AMD Radeon Graphics (radeonsi, gfx1103_r1, LLVM 17.0.6, DRM 3.57, 6.8.0-40-generic)
- ✅ Vulkan API 1.3.274: AMD Radeon Graphics (RADV GFX1103_R1)
So the freezes only seem to occur with the OpenGL ES 3.0 renderer.
Note that I don't have problems with other, non-Godot apps.
Thanks in advance for the help, and don't hesitate to ask me for further information.
(sorry for my bad English, I am French :joy: )
### Steps to reproduce
- Launch Godot 3.5/3.6
- Create a project with GLES3 backend
- Do some things in the editor (add a sprite, create a node, ...). Freezes are random and can occur after 10-15 seconds.
- When a freeze occurs, you can do nothing. The whole environment freezes (even other non-GLES3 apps, and the OS).
### Minimal reproduction project (MRP)
N/A | bug,topic:rendering,needs testing | low | Critical |
2,524,960,298 | svelte | Collapse inlined variables into templates | ### Describe the problem
In https://github.com/sveltejs/svelte/pull/13075 we generate code like:
```js
const x = "foo";
$.template(`hello ${x}`);
```
It would be nicer if this were simply:
```js
$.template(`hello foo`);
```
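The requested transformation can be sketched as follows (a toy model in Python, not the compiler's actual representation): substitute statically known constants into the template's string parts, and bail out to the dynamic template when a value is not known at compile time.

```python
# Toy constant-folding pass over a template split into string parts and
# interpolated expression names: `hello ${x}` -> parts ["hello ", ""],
# expressions ["x"].
def inline_constants(parts, expr_names, constants):
    out = [parts[0]]
    for name, tail in zip(expr_names, parts[1:]):
        if name not in constants:
            return None  # not statically known: keep the dynamic template
        out.append(str(constants[name]))
        out.append(tail)
    return "".join(out)

assert inline_constants(["hello ", ""], ["x"], {"x": "foo"}) == "hello foo"
assert inline_constants(["hello ", ""], ["x"], {}) is None
```

The real pass would also need to respect scoping and only fold bindings that are provably constant at that point.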
### Describe the proposed solution
Ideally this is implemented by esbuild (https://github.com/evanw/esbuild/issues/3570) and Oxc (https://github.com/oxc-project/oxc/issues/2646). Failing that, we may have to implement it ourselves.
### Importance
nice to have | perf | low | Minor |
2,525,031,612 | pytorch | opcheck doesn't handle Tensors with esoteric dtypes | In particular, torch.testing.assert_close doesn't handle Float8_e4m3fn. Our options are to work around it in opcheck or get torch.testing.assert_close to handle Float8_e4m3fn
cc @ezyang @chauhang @penguinwu @yanbing-j @vkuzo @albanD @kadeng @bdhirsh | triaged,module: custom-operators,oncall: pt2,module: float8,module: opcheck,module: pt2-dispatcher | low | Minor |
2,525,070,910 | rust | stack overflow in ImproperCTypesVisitor::{check_type_for_ffi, check_variant_for_ffi} | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
use std::marker::PhantomData;
#[repr(C)]
struct A<T> {
a: *const A<A<T>>,
p: PhantomData<T>,
}
extern "C" {
fn f(a: *const A<()>);
}
fn main() {}
```
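What happens can be modelled outside rustc (a toy model, not compiler code): visiting `A<T>` requires visiting its field type `A<A<T>>`, a strictly larger type that has never been seen before, so a walk that deduplicates on fully instantiated types never terminates, while deduplicating on the struct definition does.

```python
# Model types as strings: the field of A<X> has type A<A<X>>.
def field_of(ty):
    inner = ty[2:-1]
    return f"A<A<{inner}>>"

def adt_of(ty):
    return ty.split("<", 1)[0]   # the definition, ignoring generic args

def check(ty, seen_defs):
    if adt_of(ty) in seen_defs:  # same struct already being checked: stop
        return "ok"
    seen_defs.add(adt_of(ty))
    return check(field_of(ty), seen_defs)

assert field_of("A<()>") == "A<A<()>>"   # each step grows the type
assert check("A<()>", set()) == "ok"     # def-level dedup terminates
```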
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (adaff5368 2024-09-12)
binary: rustc
commit-hash: adaff5368b0c7b328a0320a218751d65ab1bba97
commit-date: 2024-09-12
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Error output
```
error: rustc interrupted by SIGSEGV, printing backtrace
...
note: rustc unexpectedly overflowed its stack! this is a bug
note: maximum backtrace depth reached, frames may have been lost
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
note: backtrace dumped due to SIGSEGV! resuming signal
Segmentation fault
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
0x00007ffff61a67d4 in <rustc_middle::ty::context::CtxtInterners>::intern_ty () from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
(gdb) bt
#0 0x00007ffff61a67d4 in <rustc_middle::ty::context::CtxtInterners>::intern_ty () from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#1 0x00007ffff625be58 in <rustc_middle::ty::Ty as rustc_type_ir::fold::TypeSuperFoldable<rustc_middle::ty::context::TyCtxt>>::super_fold_with::<rustc_type_ir::binder::ArgFolder<rustc_middle::ty::context::TyCtxt>> () from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#2 0x00007ffff639b7e0 in <&rustc_middle::ty::list::RawList<(), rustc_middle::ty::generic_args::GenericArg> as rustc_type_ir::fold::TypeFoldable<rustc_middle::ty::context::TyCtxt>>::try_fold_with::<rustc_type_ir::binder::ArgFolder<rustc_middle::ty::context::TyCtxt>> ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#3 0x00007ffff625bb48 in <rustc_middle::ty::Ty as rustc_type_ir::fold::TypeSuperFoldable<rustc_middle::ty::context::TyCtxt>>::super_fold_with::<rustc_type_ir::binder::ArgFolder<rustc_middle::ty::context::TyCtxt>> () from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#4 0x00007ffff625bc31 in <rustc_middle::ty::Ty as rustc_type_ir::fold::TypeSuperFoldable<rustc_middle::ty::context::TyCtxt>>::super_fold_with::<rustc_type_ir::binder::ArgFolder<rustc_middle::ty::context::TyCtxt>> () from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#5 0x00007ffff6275014 in <rustc_middle::ty::FieldDef>::ty () from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#6 0x00007ffff50ae576 in <rustc_lint::types::ImproperCTypesVisitor>::check_variant_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#7 0x00007ffff50af1bf in <rustc_lint::types::ImproperCTypesVisitor>::check_type_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#8 0x00007ffff50ae5bc in <rustc_lint::types::ImproperCTypesVisitor>::check_variant_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#9 0x00007ffff50af1bf in <rustc_lint::types::ImproperCTypesVisitor>::check_type_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#10 0x00007ffff50ae5bc in <rustc_lint::types::ImproperCTypesVisitor>::check_variant_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#11 0x00007ffff50af1bf in <rustc_lint::types::ImproperCTypesVisitor>::check_type_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#12 0x00007ffff50ae5bc in <rustc_lint::types::ImproperCTypesVisitor>::check_variant_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
#13 0x00007ffff50af1bf in <rustc_lint::types::ImproperCTypesVisitor>::check_type_for_ffi ()
from /home/tm/.rustup/toolchains/stage1/lib/librustc_driver-14c009390c37828b.so
...
```
</p>
</details>
| I-crash,A-lints,T-compiler,C-bug,S-bug-has-test,L-improper_ctypes | low | Critical |
2,525,115,976 | pytorch | If mark_dynamic fails (Not all values of RelaxedUnspecConstraint are valid) due to specialization, error message should print where specialization came from | ### 🐛 Describe the bug
Self-explanatory.
internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1502206057084267
### Versions
main
cc @chauhang @penguinwu | good first issue,triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,525,117,710 | flutter | License checker should check that all code files outside of third party have copyright header | Discovered as part of https://github.com/flutter/engine/pull/55155 | c: new feature,P2,team-engine,triaged-engine | low | Minor |
2,525,134,765 | TypeScript | super() typed as returning void | ### 💻 Code
```ts
class Parent {}
class Child extends Parent {
constructor() {
super().func();
}
func() {
}
}
```
### 🙁 Actual behavior
At runtime, `super()` in a constructor returns the object, but TS assumes it returns void:

### 🙂 Expected behavior
`super()` should be typed as returning the object, not void. | Suggestion,Awaiting More Feedback | low | Minor |
2,525,140,584 | pytorch | [MPS] Inconsistent performance issues | ### 🐛 Describe the bug
In some cases certain ops are slow. The failure mode does not always trigger, but it's relatively easy to trigger this behavior if run in rapid succession.
Take the following SDPA benchmark as an example
```python
import torch
from torch.profiler import ProfilerActivity, profile

device = "mps"
dtype = torch.bfloat16
n = 19
shape = [8, 64, 1024, 1024]
query = torch.rand(shape, device=device, dtype=dtype)
key = torch.rand(shape, device=device, dtype=dtype)
value = torch.rand(shape, device=device, dtype=dtype)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for _ in range(n):
        torch.nn.functional.scaled_dot_product_attention(query, key, value)

print(
    prof.key_averages(group_by_input_shape=True).table(
        sort_by="cpu_time_total", row_limit=10
    )
)
prof.export_chrome_trace(f"trace_test_{device}.json")
```
If we run this multiple times we get very different output in terms of performance characteristics.
### Output 1
```
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
aten::scaled_dot_product_attention 0.01% 221.958us 100.00% 1.536s 80.858ms 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::_scaled_dot_product_attention_math_for_mps 99.97% 1.536s 99.99% 1.536s 80.847ms 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::empty 0.01% 200.878us 0.01% 200.878us 5.286us 38 [[], [], [], [], [], []]
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Self CPU time total: 1.536s
```
### Output 2
```
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
aten::scaled_dot_product_attention 0.79% 23.418us 100.00% 2.975ms 156.570us 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::_scaled_dot_product_attention_math_for_mps 98.38% 2.927ms 99.21% 2.951ms 155.338us 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::empty 0.83% 24.837us 0.83% 24.837us 0.654us 38 [[], [], [], [], [], []]
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Self CPU time total: 2.975ms
```
### Output 3
```
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
aten::scaled_dot_product_attention 0.01% 146.001us 100.00% 1.062s 55.883ms 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::_scaled_dot_product_attention_math_for_mps 99.97% 1.061s 99.99% 1.062s 55.875ms 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::empty 0.01% 126.417us 0.01% 126.417us 3.327us 38 [[], [], [], [], [], []]
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Self CPU time total: 1.062s
```
### Output 4
```
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
aten::scaled_dot_product_attention 0.01% 233.747us 100.00% 1.923s 101.205ms 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::_scaled_dot_product_attention_math_for_mps 99.98% 1.922s 99.99% 1.923s 101.193ms 19 [[8, 64, 1024, 1024], [8, 64, 1024, 1024], [8, 64, 1024, 1024], [], [], [], [],
aten::empty 0.01% 205.504us 0.01% 205.504us 5.408us 38 [[], [], [], [], [], []]
---------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------------------------------------------------
Self CPU time total: 1.923s
```
In the above examples, the total time to run 19 iterations of SDPA varies from 2.975 ms to 1.923 s, a difference of 2-3 orders of magnitude.
`n` was set relatively low on purpose; the higher it is, the more likely you are to see this issue.
My gut feeling is that this is not necessarily related to the SDPA code itself. Possibly related to #124850.
xref #135778
### Versions
PyTorch version: 2.5.0a0+gitdd47f6f
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.12.1
[pip3] torch==2.5.0a0+gitdd47f6f
[pip3] torch-tb-profiler==0.4.3
[pip3] torchvision==0.20.0a0+0d80848
[pip3] triton==3.0.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] torch 2.5.0a0+gitdd47f6f dev_0 <develop>
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.20.0a0+0d80848 dev_0 <develop>
[conda] triton 3.0.0 pypi_0 pypi
cc @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: performance,triaged,module: mps | low | Critical |
2,525,148,573 | go | encoding/json: Decoder.Token does not return an error for incomplete JSON | When dealing with incomplete JSON, the `(*Decoder).Token` method returns `io.EOF` at the end of the input instead of providing an appropriate error. For example, the following code does not produce an error:
https://go.dev/play/p/HHEwVkRCs7a
```go
dec := json.NewDecoder(strings.NewReader(`[123,`))
for {
	_, err := dec.Token()
	if err == io.EOF {
		break
	}
	if err != nil {
		panic(err)
	}
}
```
According to the documentation, **Token guarantees that the delimiters [ ] { } it returns are properly nested and matched**. However, in this example, `[` is not properly nested and matched. | NeedsInvestigation | low | Critical |
2,525,148,762 | godot | Sizing artifacts for CollisionShape2Ds | ### Tested versions
- Reproducible in Godot 4.3
- Not reproducible in previous versions _(EDIT: I was incorrect, this appears to be reproducible in previous 4.x versions, tested up to 4.2.1)_
### System information
Windows 11 - Godot v4.3 - Forward+ Renderer - Intel UHD Graphics 630
### Issue description

When changing the size of a CollisionShape2D, there appear to be some scaling artifacts that glitch the visuals of the shape. The size is correct in the inspector but incorrect in the editor, so it appears to be a visual bug. Collisions also seem to work fine.
### Steps to reproduce
1. Add a CollisionShape2D into the scene.
2. Add a new shape Resource then try changing the size of the shape. | bug,topic:editor,topic:physics,regression | low | Critical |
2,525,148,866 | node | Support clearing Node.js cache for package.json | ### What is the problem this feature will solve?
When a long-running Node.js process is active and a package.json file is modified (for example, by adding `"type": "module"`), Node.js does not detect this change. It continues treating the package as CommonJS, which results in an error like `Unexpected token 'export'` when attempting to use ES modules.
This is crucial for us at [bit](https://github.com/teambit/bit), where we run a background process called “bit server” that manages the local development workspace. Among its tasks is creating new web components, which initially have a package.json without "type": "module". If the component is later determined to be ESM, the package.json is updated accordingly. (Explaining why this can’t be handled differently is beyond the scope of this feature request).
### What is the feature you are proposing to solve the problem?
I propose adding an API that allows clearing the cached information related to the package.json of a specific module (or optionally clearing the cache for all modules). Either option would solve the issue.
### What alternatives have you considered?
Restarting the Node.js process, but this results in a poor user experience. | feature request,loaders | low | Critical |
2,525,181,406 | vscode | Broken hover and tooltip behavior for OSC 8 hyperlinks |
Does this issue occur when all extensions are disabled?: Yes
## Version Info
Version: 1.93.1 (Universal)
Commit: 38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40
Date: 2024-09-11T17:20:05.685Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0
## Steps to Reproduce:
1. Show some content in an integrated terminal that contains OSC 8 hyperlinks (the ones with dashed underlines).
2. When hovering over a hyperlink and quickly moving the mouse away, the tooltip is triggered even when the tooltip is no longer hovered, and the tooltip remains triggered when scrolling. The only way to close the tooltip is to hover over the tooltip/hyperlink area and hover away.
https://github.com/user-attachments/assets/223172f7-e03b-4526-98db-c478b84279b9
3. When hovering over a hyperlink long enough to trigger the tooltip, the tooltip remains triggered when scrolling until hovering away.
https://github.com/user-attachments/assets/0d3e6deb-97c6-46d5-9e01-8d866e123d41
NOTE: This issue is only present for OSC 8 hyperlinks, not terminal links that are provided natively (like filename/directory links) or through a `TerminalLinkProvider `. | bug,terminal-links | low | Critical |
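For reference, a link like the ones in step 1 can be produced with the standard OSC 8 escape sequence (a sketch; works from any shell whose terminal supports OSC 8):

```shell
# OSC 8 format: ESC ] 8 ; params ; URI ST  <text>  ESC ] 8 ; ; ST
# (ST is ESC \). This renders "Example link" as a dashed-underline hyperlink.
printf '\033]8;;https://example.com\033\\Example link\033]8;;\033\\\n'
```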
2,525,190,284 | godot | Gizmo in CSGMesh3D is not updated when scaling the node in the 3D editor viewport | ### Tested versions
4.3 stable, 4.4 dev2
### System information
Godot v4.4.dev2 - Windows 10.0.19045 - Vulkan (Forward+) - dedicated Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 Threads)
### Issue description
I noticed two problems when using CSGMesh3D
1. AABB does not update (blue border) when resizing with the scale tool in the top bar

2. When selecting a CSGMesh3D, the gizmo z-fights with the mesh (depth fighting); this is especially noticeable when moving the camera (**Fixed**)

### Steps to reproduce
1. Open MRP
2. Select the biggest cube (BIGBOX)
3. Move the camera; you will notice artifacts on the cube (z-fighting)
4. Shrink the cube; the AABB will not update until another node is selected
### Minimal reproduction project (MRP)
[vox.zip](https://github.com/user-attachments/files/16996585/vox.zip)
| bug,topic:rendering,topic:editor,topic:3d | low | Minor |
2,525,201,703 | vscode | Python Coverage Compatability Requests | Hi @connor4312! We have begun implementation of test coverage for python and had a few items we wanted to surface in terms of the UI. Wanted to start an issue so @brettcannon can contribute his thoughts! Thanks | feature-request,under-discussion,testing | low | Minor |
2,525,206,387 | langchain | CSVLoader does not raise `ValueError` for missing metadata columns | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
## Steps to Reproduce
1. Create a CSV file (e.g., `demo_bug.csv`):
```csv
HEADER1, HEADER2, HEADER3
data1, data2, data3
data4, data5, data6
```
2. Use the following Python code to load the CSV:
```python
from langchain_community.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="./demo_bug.csv",
    metadata_columns=("MISSING_HEADER", "HEADER1", "HEADER2", "HEADER3"),
)
loader.load()
```
3. You will get the following traceback:
```python
Traceback (most recent call last):
  File "bugCSV.py", line 8, in <module>
    loader.load()
  File "base.py", line 30, in load
    return list(self.lazy_load())
           ^^^^^^^^^^^^^^^^^^^^^^
  File "csv_loader.py", line 147, in lazy_load
    raise RuntimeError(f"Error loading {self.file_path}") from e
RuntimeError: Error loading ./demo_bug.csv
```
## Expected Behavior
When a metadata column specified in `metadata_columns` does not exist in the CSV file, I expected the loader to raise a `ValueError` with a message like:
```python
ValueError: Metadata column 'MISSING_HEADER' not found in CSV file.
```
Instead, the current implementation raises a generic RuntimeError, making it harder to debug the specific cause of the issue.
### Error Message and Stack Trace (if applicable)
_No response_
## Description
In the current implementation of `CSVLoader` within `langchain_community.document_loaders.csv_loader`, a generic `RuntimeError` is raised when an error occurs while loading the CSV file, even when the underlying issue is due to missing metadata columns. This masks the actual problem, making debugging more difficult for users.
Specifically, when a column specified in the `metadata_columns` parameter is not present in the CSV file, a more appropriate `ValueError` should be raised, indicating the missing column. However, due to broad exception handling in the `lazy_load()` method, this specific error is hidden behind a `RuntimeError`.
## Expected Behavior
When a metadata column specified by the user is missing from the CSV file, the loader should raise a `ValueError`, providing a clear message about the missing column, instead of the generic `RuntimeError`.
## Actual Behavior
A generic `RuntimeError` is raised, which does not specify that the issue stems from a missing column in the CSV file. This makes it difficult for users to identify the root cause of the problem.
## Proposed Solution
The error handling in the `lazy_load()` method should be adjusted to allow more specific exceptions, such as `ValueError`, to propagate. This will ensure that the appropriate error is raised and presented to the user when metadata columns are missing.
```python
def lazy_load(self) -> Iterator[Document]:
    try:
        with open(self.file_path, newline="", encoding=self.encoding) as csvfile:
            yield from self.__read_file(csvfile)
    except UnicodeDecodeError as e:
        if self.autodetect_encoding:
            detected_encodings = detect_file_encodings(self.file_path)
            for encoding in detected_encodings:
                try:
                    with open(
                        self.file_path, newline="", encoding=encoding.encoding
                    ) as csvfile:
                        yield from self.__read_file(csvfile)
                    break
                except UnicodeDecodeError:
                    continue
        else:
            raise RuntimeError(f"Error loading {self.file_path}") from e
    except ValueError as ve:  # Allow ValueError to propagate
        raise ve
    except Exception as e:
        raise RuntimeError(f"Error loading {self.file_path}") from e
```
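For comparison, the desired fail-fast check is small on its own. A self-contained sketch of the expected behavior using only the stdlib `csv` module (a hypothetical helper, not LangChain's internal API):

```python
import csv
import io


def read_with_metadata(text: str, metadata_columns: tuple) -> list:
    """Sketch of the expected behavior: raise a ValueError naming the
    missing metadata columns instead of a generic RuntimeError."""
    reader = csv.DictReader(io.StringIO(text))
    fieldnames = reader.fieldnames or []
    missing = [c for c in metadata_columns if c not in fieldnames]
    if missing:
        raise ValueError(f"Metadata columns {missing} not found in CSV file.")
    return list(reader)


try:
    read_with_metadata("HEADER1,HEADER2\na,b\n", ("MISSING_HEADER", "HEADER1"))
except ValueError as e:
    print(e)  # Metadata columns ['MISSING_HEADER'] not found in CSV file.
```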
### System Info
## Environment
- **LangChain Version**:
- `langchain==0.2.12`
- `langchain-community==0.2.11`
- `langchain-core==0.2.38`
- `langchain-text-splitters==0.2.2`
- `langchain-unstructured==0.1.2`
- **Python Version**: `3.12.3`
| 🤖:bug | low | Critical |
2,525,233,289 | godot | Updating Dictionary keys in a Tool script cause it to display wrong keys in editor | ### Tested versions
Tested versions: 4.3 stable
### System information
Windows 10 - Godot v4.0.3.stable - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 (nvidia, 556.12) - Ryzen 5 5600x
### Issue description
I'm making a dialogue system, which uses Nodes that have a `@tool` script attached to it.
Based on event type, I'm updating a keys in a Dictionary that's inside an Array, and when doing so, it seems like only size + 1 keys are being added, but keys prior to that are not changed in the editor (but changed in the process)
### Steps to reproduce
### Tested versions
Create a tool script, which exports an Array variable, which has a Dictionary inside.
Make it so if a bool in the Dictionary changes, modify a key/value in the Dictionary.
Also print_debug when the Dictionary changes.
Notice that the Dictionary keys will mismatch from the print_debug version.
You will also be unable to change the value of the key that's faulty, until you reload the Node that the script is attached to.
```gdscript
@tool
extends Node

var dialogue: Array = [{
	"isMessage": true,
	"message": ""
}]

func _get_property_list() -> Array:
	var ret: Array = []
	ret.append({
		"name": "Messages",
		"type": TYPE_ARRAY,
		"hint": PROPERTY_HINT_ARRAY_TYPE,
		"hint_string": ""
	})
	return ret

func _set(prop_name: StringName, val) -> bool:
	match prop_name:
		"Messages":
			dialogue = val
			_update_fields()
			return true
		_:
			return false

func _get(prop_name: StringName):
	match prop_name:
		"Messages":
			return dialogue
		_:
			return null

func _update_fields():
	for index in range(dialogue.size()):
		var item = dialogue[index]
		if item == null:
			dialogue[index] = {
				"isMessage": true,
				"message": ""
			}
		else:
			if item.has("isMessage"):
				if item["isMessage"]:
					# When isMessage is true, clear message and ensure no event/params
					item["message"] = ""
					item.erase("event")
					item.erase("params")
				else:
					# When isMessage is false, ensure event and params are set
					if not item.has("event"):
						item["event"] = ""
					if not item.has("params"):
						item["params"] = [""]
					# Remove message if it exists
					item.erase("message")
	print_debug(dialogue)
```
### Minimal reproduction project (MRP)
https://github.com/SweetRazory/godot-tool-dictionary-bug
[godot-tool-dictionary-bug.zip](https://github.com/user-attachments/files/16996944/godot-tool-dictionary-bug.zip)
| bug,topic:gdscript,topic:editor | low | Critical |
2,525,234,356 | electron | [Feature Request]: HIDDevice Object should contain `usagePage` and `usage` properties | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
When implementing the [setDevicePermissionHandler(handler)](https://www.electronjs.org/docs/latest/api/session#sessetdevicepermissionhandlerhandler) we could not filter devices based on `usagePage`. Since our app does not care about the vendor or product, we can't preconnect the device based on usage.
### Proposed Solution
Add `usagePage` and `usage` properties to device information, just like it can be read in the browser when a device is connected.
[requestDevice()](https://wicg.github.io/webhid/#dom-hid-requestdevice) has these properties in it's [HIDDeviceFilter](https://wicg.github.io/webhid/#dom-hiddevicefilter) as well.
### Alternatives Considered
No idea.
### Additional Information
_No response_ | enhancement :sparkles:,web-platform-apis | low | Minor |
2,525,254,180 | stable-diffusion-webui | [Feature Request]: Improved pip version and system setup | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
On Python 3.12, torch is not found; try pipx.
### Proposed workflow
sudo pacman -S python-pipx
pipx install torch
### Additional information
_No response_ | enhancement | low | Minor |
2,525,263,467 | vscode | vscode.dev blocks simple browser |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version:
- OS Version:
Steps to Reproduce:
1. Enter to vscode.dev
2. try to use simplebrowser
3. It Blocks urls


The expected behavior should be to load the webpage, this to be able to hot reloading inside vscode.dev as the same as I can in local vscode. | bug,web | low | Critical |
2,525,267,922 | flutter | TextField blinks when hoverColor == filledColor on displays with refresh rates higher than 60Hz | ### Steps to reproduce
1. Draw a TextField and add the following params to its decoration property:
   decoration: InputDecoration(
     fillColor: Colors.blue,
     hoverColor: Colors.blue,
     filled: true,
   ),
2. Run on a 100 Hz display.
3. Hover the mouse over the TextField.
### Expected results
The TextField does not change background color when hovered and behaves similarly on different refresh rates.
### Actual results
The TextField changes background color on hover on a 100 Hz display and behaves normally at 60 Hz.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MainApp());
}

class MainApp extends StatelessWidget {
  const MainApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: Scaffold(
        body: Center(
          child: TextField(
            decoration: InputDecoration(
              fillColor: Colors.blue,
              hoverColor: Colors.blue,
              filled: true,
            ),
          ),
        ),
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/b8efc8e9-2fd6-451f-9a92-0ea9befb3397
https://github.com/user-attachments/assets/ab35ba06-3572-47da-b63a-317e287a029a
</details>
### Logs
<details open><summary>Logs</summary>
```console
no logs messages
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale ru-BY)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.93.0)
[✓] Connected device (3 available)
! Error: Browsing on the local area network for iPhone Арина. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• No issues found!
```
</details>
| a: text input,framework,f: material design,platform-mac,platform-web,a: desktop,P2,fyi-text-input,team-macos,triaged-macos | low | Critical |
2,525,303,055 | PowerToys | The workspace fails to function properly and causes the Antimalware Service Executable process to have a high CPU footprint on startup | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Start a workspace. The Antimalware Service Executable process then takes up half the CPU, and the workspace configuration window does not open.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Workspaces | low | Minor |