| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,503,076,434 | next.js | Changing search params on the same page does not reset the not found boundary | ### Link to the code that reproduces this issue
https://github.com/ali-idrizi/next-not-found-search-params-reproduction
### To Reproduce
This is a very minimal reproduction. On the `/about` page, if the `q` search param is `"404"`, `notFound` is called.
The layout has three buttons that call `router.push`, one to `/`, one to `/about` and the last to `/about?q=404`. Clicking the last button correctly shows the not found page. However, afterwards clicking `/about` no longer resets it. The 404 page goes away only after navigating to an entirely different page, or hard reloading.
In the network tab, I can see that the request for the RSC payload is sent, and that the response does not contain the `NEXT_NOT_FOUND` error, but the client fails to update the content.
https://github.com/user-attachments/assets/5e417703-8f70-4c35-9495-20e90318adbe
### Current vs. Expected behavior
Once `notFound` has been called on `/about?q=404`, clicking the `/about` button should correctly render the page, but instead it keeps showing the not found error.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
Available memory (MB): 38098
Available CPU cores: 24
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: N/A
pnpm: 8.15.1
Relevant Packages:
next: 15.0.0-canary.139 // Latest available version is detected (15.0.0-canary.139).
eslint-config-next: N/A
react: 19.0.0-rc-fb9a90fa48-20240614
react-dom: 19.0.0-rc-fb9a90fa48-20240614
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | bug,Navigation | low | Critical |
2,503,081,189 | vscode | Emmet dot snippet is triggered after quotes | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: OS: Darwin arm64 23.6.0
Steps to Reproduce:
1. Create empty `index.html`
2. Type `hello".` and hit enter (accept suggestion)
3. A snippet is inserted -- `hello"<div class=""></div>`

I'm not sure whether this is intended behaviour or not, but looks more like a bug to me. | bug,emmet | low | Critical |
2,503,118,856 | node | Node 22.1+ crashes process when a lot of VM processes are created/destroyed in parallel workers | ### Version
22.1.0
### Platform
```text
Darwin MacBook-Air.local 23.5.0 Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
- Clone https://github.com/vitest-dev/vitest
- Run `pnpm install --prefer-offline`
- Run `pnpm build`
- Run `cd test/core`
- Run `pnpm test:vmThreads --run`
- See a crash - sometimes it's instant, sometimes it's after a few tests ran
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
No crash
### What do you see instead?
`Command failed with exit code 129.` with no error.
### Additional information
This is most likely an issue with Vitest itself, but I am having a hard time debugging it. The error started happening in 22.1.0; 22.0.0 works as before. The changelog didn't help me.
The issue seems to be with Worker Threads + VM, as the `vmForks` pool works fine, or at least doesn't crash completely (it uses `child_process` under the hood instead, but the VM code is the same) - it looks like `child_process` just exits without crashing the main process, but it also doesn't print anything useful. | child_process,regression,worker,v22.x | low | Critical |
2,503,122,793 | go | x/tools/go/ssa/ssautil: deprecate AllFunctions | I would like to deprecate the AllFunctions helper function, because it's a poorly specified mess (my bad).
As its doc comment says:
```go
// I think we should deprecate AllFunctions function in favor of two
// clearly defined ones:
//
// 1. The first would efficiently compute CHA reachability from a set
// of main packages, making it suitable for a whole-program
// analysis context with InstantiateGenerics, in conjunction with
// Program.Build.
//
// 2. The second would return only the set of functions corresponding
// to source Func{Decl,Lit} syntax, like SrcFunctions in
// go/analysis/passes/buildssa; this is suitable for
// package-at-a-time (or handful of packages) context.
// ssa.Package could easily expose it as a field.
```
Before we can consider doing that (which will require a proposal), we should stop using it ourselves in x/tools and x/vuln. Its existing uses are:
```
// These test could use a simpler ad hoc algorithm, or an efficient CHA:
go/ssa/stdlib_test.go
go/ssa/builder_test.go
// both CHA and static do essentially the same reachability fixed point
// iteration and could do (the good bits of) AllFunctions themselves:
go/callgraph/cha/cha.go -- AllFunctions is not even correct here! see #66429
go/callgraph/static/static.go
--
// VTA uses AllFunctions to produce a set of entry points.
// I suspect it is both a massive overapproximation and yet still not always correct (see #66251)
// What exactly does vta.CallGraph need?
cmd/callgraph/main.go
go/callgraph/vta/vta_test.go
go/callgraph/vta/graph_test.go
go/callgraph/callgraph_test.go
x/vuln/internal/vulncheck/utils.go
// just for debugging; could easily be deleted.
go/callgraph/vta/helpers_test.go
```
I will tackle the first half. @zpavlinovic perhaps you could look at the VTA-related ones? What we need at this stage is just crisp descriptions of the exact requirements of each of these algorithms.
| Tools | low | Critical |
2,503,141,192 | bitcoin | intermittent issue in wallet_upgradewallet.py AssertionError: [node 2] Node returned unexpected exit code (1) vs (0) when stopping | https://cirrus-ci.com/task/4629002713825280?logs=ci#L3296
```
test 2024-09-03T07:00:27.797000Z TestFramework (ERROR): Assertion failed
Traceback (most recent call last):
File "/ci_container_base/test/functional/test_framework/test_framework.py", line 132, in main
self.run_test()
File "/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/test/functional/wallet_upgradewallet.py", line 150, in run_test
self.stop_node(2)
File "/ci_container_base/test/functional/test_framework/test_framework.py", line 581, in stop_node
self.nodes[i].stop_node(expected_stderr, wait=wait)
File "/ci_container_base/test/functional/test_framework/test_node.py", line 409, in stop_node
self.wait_until_stopped(expected_stderr=expected_stderr)
File "/ci_container_base/test/functional/test_framework/test_node.py", line 444, in wait_until_stopped
self.wait_until(lambda: self.is_node_stopped(**kwargs), timeout=timeout)
File "/ci_container_base/test/functional/test_framework/test_node.py", line 842, in wait_until
return wait_until_helper_internal(test_function, timeout=timeout, timeout_factor=self.timeout_factor)
File "/ci_container_base/test/functional/test_framework/util.py", line 289, in wait_until_helper_internal
if predicate():
File "/ci_container_base/test/functional/test_framework/test_node.py", line 444, in <lambda>
self.wait_until(lambda: self.is_node_stopped(**kwargs), timeout=timeout)
File "/ci_container_base/test/functional/test_framework/test_node.py", line 423, in is_node_stopped
assert return_code == expected_ret_code, self._node_msg(
AssertionError: [node 2] Node returned unexpected exit code (1) vs (0) when stopping | CI failed | low | Critical |
2,503,189,777 | godot | Opening 2D/3D scene when no scene was previously open no longer switches to 2D/3D viewport | ### Tested versions
- Reproducible in 4.3.stable and 4.4.dev (514c564a8c855d798ec6b5a52860e5bca8d57bc9)
- Not reproducible in 4.2.2.stable
### System information
Fedora Linux 40 (KDE Plasma) - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7600M XT (RADV NAVI33) - AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 Threads)
### Issue description
Godot has this nifty behavior that it can detect the type of a scene (2D or 3D) based on its root node, and infer the viewport (2D/3D) to open for that scene accordingly.
This seems to have regressed in 4.3.stable and later.
What works:
- Open a 2D scene and make sure to be on the 2D viewport.
- Open a 3D scene: it will switch to the 3D viewport.
- Close the 2D scene, reopen it: it will switch to the 2D viewport.
What **doesn't** work:
- Have no scene open, make sure to be on the 2D viewport.
- Open a 3D scene: it stays on the 2D viewport.
- Close the 3D scene, make sure to be on the 3D viewport.
- Open a 2D scene: it stays on the 3D viewport.
CC @KoBeWi
### Steps to reproduce
See above.
### Minimal reproduction project (MRP)
[Test2DAnd3DScenes.zip](https://github.com/user-attachments/files/16851150/Test2DAnd3DScenes.zip)
| bug,topic:editor,usability,regression | low | Minor |
2,503,208,528 | pytorch | DISABLED test_streaming_backwards_callback (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_streaming_backwards_callback&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29601038632).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_streaming_backwards_callback`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1232, in not_close_error_metas
pair.compare()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 711, in compare
self._compare_values(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 841, in _compare_values
compare_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1023, in _compare_regular_values_close
if torch.all(matches):
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1343, in test_streaming_backwards_callback
self.assertEqual(stash[0], torch.full_like(a, 6))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3846, in assertEqual
error_metas = not_close_error_metas(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1239, in not_close_error_metas
f"Comparing\n\n"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 378, in __repr__
body = [
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 379, in <listcomp>
f" {name}={value!s},"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 514, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 708, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 625, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 357, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 145, in __init__
nonzero_finite_vals = torch.masked_select(
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_streaming_backwards_callback
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda_expandable_segments.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,503,229,599 | kubernetes | Slow `GVSpec` called for every object in `kubectl apply`, leading to excessive CPU usage | I am trying to apply a YAML file with 2264 objects. They are fairly small; 99996 lines total, or ~44 lines of YAML per object. They are all the same type
This takes 4 minutes to do a dry run. https://flamegraph.com/share/e60dcf49-6a0b-11ef-aba3-6a3d1814cbe4 shows a flamegraph.
The root cause of this is doing a full unmarshal of the OpenAPI v3 document for each input.
A quick hack to cache these results drops this down to only 15s:
```diff
diff --git a/staging/src/k8s.io/client-go/openapi3/root.go b/staging/src/k8s.io/client-go/openapi3/root.go
index 4333e8628f6..9c15fd7df2d 100644
--- a/staging/src/k8s.io/client-go/openapi3/root.go
+++ b/staging/src/k8s.io/client-go/openapi3/root.go
@@ -59,6 +59,7 @@ type Root interface {
type root struct {
// OpenAPI client to retrieve the OpenAPI V3 documents.
client openapi.Client
+ cache map[schema.GroupVersion]*spec3.OpenAPI
}
// Validate root implements the Root interface.
@@ -67,7 +68,7 @@ var _ Root = &root{}
// NewRoot returns a structure implementing the Root interface,
// created with the passed rest client.
func NewRoot(client openapi.Client) Root {
- return &root{client: client}
+ return &root{client: client, cache: make(map[schema.GroupVersion]*spec3.OpenAPI)}
}
func (r *root) GroupVersions() ([]schema.GroupVersion, error) {
@@ -93,6 +94,9 @@ func (r *root) GroupVersions() ([]schema.GroupVersion, error) {
}
func (r *root) GVSpec(gv schema.GroupVersion) (*spec3.OpenAPI, error) {
+ if c, f := r.cache[gv]; f {
+ return c, nil
+ }
openAPISchemaBytes, err := r.retrieveGVBytes(gv)
if err != nil {
return nil, err
@@ -100,6 +104,7 @@ func (r *root) GVSpec(gv schema.GroupVersion) (*spec3.OpenAPI, error) {
// Unmarshal the downloaded Group/Version bytes into the spec3.OpenAPI struct.
var parsedV3Schema spec3.OpenAPI
err = json.Unmarshal(openAPISchemaBytes, &parsedV3Schema)
+ r.cache[gv] = &parsedV3Schema
return &parsedV3Schema, err
}
```
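One caveat with the diff (my assumption about usage, not verified against the callers): the bare map is not goroutine-safe, so if a `root` can be shared across goroutines the cache would need a lock. A self-contained sketch of the mutex-guarded memoization pattern, with a cheap stand-in for the expensive unmarshal:

```go
package main

import (
	"fmt"
	"sync"
)

// memo caches the result of an expensive per-key load (standing in for the
// per-GroupVersion fetch + json.Unmarshal) behind a mutex, so concurrent
// callers are safe and the expensive path runs at most once per key.
type memo struct {
	mu    sync.Mutex
	cache map[string]string
	loads int // counts how often the expensive path actually ran
}

func newMemo() *memo {
	return &memo{cache: make(map[string]string)}
}

func (m *memo) get(key string) string {
	m.mu.Lock()
	defer m.mu.Unlock()
	if v, ok := m.cache[key]; ok {
		return v // cache hit: skip the expensive work entirely
	}
	m.loads++
	v := "spec-for-" + key // stands in for retrieveGVBytes + unmarshal
	m.cache[key] = v
	return v
}

func main() {
	m := newMemo()
	for i := 0; i < 3; i++ {
		fmt.Println(m.get("apps/v1"))
	}
	fmt.Println("loads:", m.loads) // the expensive path ran only once
}
```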
I am submitting this as an issue instead of a PR since I don't really have a clue how this open API stuff is used more broadly, so not sure if an unbounded unconditional cache is appropriate | sig/api-machinery,triage/accepted | low | Critical |
2,503,283,833 | vscode | [scss] built-in sass/scss extension shows an error where there is none |
Type: <b>Bug</b>
The provided code shows errors from `scss(css-rcurlyexpected)`, but it compiles and works as expected
```file.scss
// Mixin for slide animations
@mixin slideX($from, $to) {
0% {
transform: translateX($from);
}
100% {
transform: translateX($to);
}
}
// Animations using the mixins
@keyframes slide-left {
@include slideX(0, -100%);
}
@keyframes slide-right {
@include slideX(-100%, 0);
}
```
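Since the report hinges on the snippet above being valid Sass, it may help to note what it compiles to. With dart-sass the output is roughly the following (my transcription, not verified against a specific compiler version):

```css
@keyframes slide-left {
  0% { transform: translateX(0); }
  100% { transform: translateX(-100%); }
}
@keyframes slide-right {
  0% { transform: translateX(-100%); }
  100% { transform: translateX(0); }
}
```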
I think it's a problem with the built-in sass/scss extension. The original repo for this is archived, so I report it here.
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Linux x64 6.8.0-40-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i5-13600K (20 x 5100)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 0, 0|
|Memory (System)|31.12GB (26.81GB free)|
|Process Argv|/home/jannes/git/Leaflet.SidePanel --crash-reporter-id 1b52595c-35cb-48ee-8645-6520db63350d|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|cinnamon|
|XDG_CURRENT_DESKTOP|X-Cinnamon|
|XDG_SESSION_DESKTOP|cinnamon|
|XDG_SESSION_TYPE|x11|
</details><details><summary>Extensions (20)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-intelephense-client|bme|1.12.5
vscode-eslint|dba|3.0.10
prettier-vscode|esb|11.0.0
copilot|Git|1.224.0
copilot-chat|Git|0.18.2
vscode-github-actions|git|0.26.3
todo-tree|Gru|0.0.226
vscode-language-pack-de|MS-|1.92.2024081409
java|red|1.34.0
format-html-in-php|rif|1.7.0
pdf|tom|1.2.2
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.0
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
volar|Vue|2.1.4
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
9c06g630:31013171
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl1:31104043
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31111718
wkspc-ranged-c:31125598
fje88620:31121564
aajjf12562:31125793
```
</details>
<!-- generated by issue reporter --> | bug,help wanted,css-less-scss | low | Critical |
2,503,306,983 | flutter | Backspace not working correctly when using physical keyboard in a TextFormField (keyboardType: TextInputType.none) | ### Steps to reproduce
1. Create a TextFormField with "keyboardType: TextInputType.none", not applying the virtual keyboard;
2. When using the TextFormField with a physical keyboard, the backspace key does not work on the first interaction, nor does it trigger any listener. I need to press any other key first to make it work.
### Expected results
The expected result is that the TextFormField handles a physical keyboard's backspace **at the first interaction**.
At the moment, I need to press **any other key** on the physical keyboard first; only then does backspace work correctly.
**I need this to work because we have many devices with physical keyboard.**
### Actual results
The backspace key does not work or trigger any listener if it **is the first** key that I interact with.
### Code sample
<details open><summary>Code sample</summary>
```dart
TextFormField(
enabled: widget.enabled,
keyboardType: TextInputType.none,
onChanged: widget.onChanged,
onEditingComplete: widget.onEditingComplete,
onFieldSubmitted: widget.onFieldSubmitted,
contextMenuBuilder: (context, editableTextState) {
return widget.enableInteractiveSelection
? AdaptiveTextSelectionToolbar.editableText(editableTextState: editableTextState)
: const SizedBox();
},
enableInteractiveSelection: widget.enableInteractiveSelection,
autocorrect: widget.autocorrect,
enableSuggestions: widget.enableSuggestions,
maxLength: widget.digitsOnly ? 8 : widget.maxLength,
focusNode: widget.focusNode,
autofocus: widget.isAutoFocus,
onTap: _callOnTap,
initialValue: widget.initialValue,
maxLines: widget.maxLines,
controller: widget.controller,
validator: widget.validatorFunction,
inputFormatters: widget.inputFormatters,
style: widget.handler.style.textFieldStyle,
)
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/b5f1dddb-6037-4d41-ae8c-8a289600f91f
</details>
### Logs
<details open><summary>Logs</summary>
```console
There is no errors in the log.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on Microsoft Windows [versÆo 10.0.22631.4037], locale pt-BR)
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Chrome - develop for the web
[!] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.3)
✗ Visual Studio is missing necessary components. Please re-run the Visual Studio installer for the "Desktop
development with C++" workload, and include these components:
MSVC v142 - VS 2019 C++ x64/x86 build tools
- If there are multiple build tool versions available, install the latest
C++ CMake tools for Windows
Windows 10 SDK
[✓] Android Studio (version 2022.3)
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.92.2)
[✓] Connected device (4 available)
[✓] Network resources
```
</details>
| a: text input,platform-android,P2,team-text-input,triaged-text-input | low | Critical |
2,503,339,436 | flutter | ScrollViewKeyboardDismissBehavior.onDrag doesn't work when a scroll view is scrollable via AlwaysScrollableScrollPhysics | ### Steps to reproduce
On a phone:
1. Create a CustomScrollView with AlwaysScrollableScrollPhysics and ScrollViewKeyboardDismissBehavior.onDrag.
2. Add a Sliver inside containing a text field (one that fits on screen even with the keyboard open).
3. Tap inside the text field to make the keyboard appear.
4. Drag inside the scroll view.
### Expected results
The keyboard disappears.
### Actual results
The keyboard stays.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(title),
),
body: const CustomScrollView(
keyboardDismissBehavior: ScrollViewKeyboardDismissBehavior.onDrag,
physics: AlwaysScrollableScrollPhysics(),
slivers: [SliverToBoxAdapter(child: TextField())],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.6.1 23G93 darwin-arm64, locale
en-CZ)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.1)
[✓] Connected device (4 available)
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated with
the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• No issues found!
```
</details>
| platform-android,framework,f: scrolling,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.25 | low | Critical |
2,503,362,638 | flutter | Inherited Theme: zero rebuilds | ### Document Link
[flutter.dev/go/zero-rebuilds](https://flutter.dev/go/zero-rebuilds)
### What problem are you solving?
- https://github.com/flutter/flutter/issues/89127
<br>
### Sub-issues
- [x] #155849
- [x] #155851
- [x] #155852
- [ ] #155853 | design doc,:scroll:,f: theming | low | Minor |
2,503,366,104 | angular | Generic types of components/directives are inferred as `any` when some corresponding inputs are omitted | ### Command
serve
### Is this a regression?
- [ ] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
_No response_
### Description
I recently started to notice that, after introducing `strictTemplates` and `strictNullChecks` in my project, WebStorm still sometimes highlights type incompatibilities. So I started investigating whether it's my IDE's problem or not.
Let's assume that we have a method in a component which accepts a string as an argument. If we try to call it from a template with `undefined` or `null` explicitly, the compiler throws an error. However, this is not the case when a library is involved. In my example [<mat-calendar>](https://material.angular.io/components/datepicker/api#MatCalendar) is used along with `(selectedChange)`.
`$event` can be null, yet if I call that method with `$event.toString()`, the compiler does not complain. Even something like `$event?.toString()` or `$event?.toString() ?? undefined` or `$event?.toString() || undefined` is still fine for the compiler. At the same time, something like `'' || undefined` triggers a compilation error.
Here is the StackBlitz [link](https://stackblitz.com/edit/4yhgc4?file=src%2Fexample%2Fdatepicker-inline-calendar-example.html). This is a fork from the [official example](https://material.angular.io/components/datepicker/overview#using-mat-calendar-inline).
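To illustrate why inference falling back to `any` is dangerous, here is a plain TypeScript sketch (names are illustrative, not Angular's API) that mimics at runtime what the template type-checker misses:

```typescript
// A strictly typed function, like the component method in the report.
function format(s: string): string {
  return s.toUpperCase();
}

// What `$event` effectively becomes once generic inference collapses to
// `any` (instead of the declared `Date | null`):
const event: any = null;

let caught = "";
try {
  // With `event: Date | null` this call would be rejected at compile time;
  // with `any` it type-checks and fails at runtime instead.
  format(event.toString());
} catch {
  caught = "null slipped past the compiler";
}
console.log(caught);
```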
### Minimal Reproduction
- init strict application with `strictTemplates` & `strictNullChecks`
- create component with a method, which accepts string as a single argument
- call that method from the template using third-party library's component where `$event` can be `null`. E.g. `myMethod($event.toString())`
### Exception or Error
```text
This is what I would actually expect to see in such case, but no error was given.
```
### Your Environment
```text
Angular CLI: 18.2.1
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 18.2.2
... animations, cdk, common, compiler, compiler-cli, core, forms
... localize, material, material-moment-adapter
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.1
@angular-devkit/build-angular 18.2.1
@angular-devkit/core 18.2.1
@angular-devkit/schematics 18.2.1
@angular/cli 18.2.1
@schematics/angular 18.2.1
rxjs 7.4.0
typescript 5.5.4
zone.js 0.14.10
```
### Anything else relevant?
_No response_ | state: has PR,area: compiler,core: inputs / outputs,P3,compiler: template type-checking,bug | low | Critical |
2,503,372,621 | excalidraw | Digital pen eraser button not interacting with Excalidraw | I own both a Samsung Tablet and laptop. In my laptop, when I open an Excalidraw drawing, the digital pen's eraser button does perform the erasing action while on the tablet it does nothing. | tablet | low | Minor |
2,503,387,093 | pytorch | reinplacing logging is scary | "torch/_inductor/fx_passes/reinplace.py:543] [12/1_1] For node _attn_bwd, attempted to reinplace ['DQ', 'DK', 'DV']. We were unable to reinplace []; [] (if non-empty) are possible missed reinplacing opportunities that may be bad for memory usage and performance."
I got asked what this logging message means. This one is positive: all tensors were reinplaced correctly. We should reword it to sound less scary.
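A sketch of one possible rewording, assuming the message is assembled from the attempted and failed lists (the function and parameter names are illustrative, not Inductor's actual API):

```python
def reinplace_message(node_name, attempted, failed):
    """Build a log line that reads as success when nothing failed.

    `attempted` is the list of candidate tensors; `failed` is the subset
    that could not be reinplaced.
    """
    if not failed:
        # The fully-successful case: celebrate instead of warning.
        return (f"For node {node_name}, successfully reinplaced all of "
                f"{attempted}; no missed opportunities.")
    return (f"For node {node_name}, could not reinplace {failed} out of "
            f"{attempted}; these are possible missed reinplacing "
            f"opportunities that may hurt memory usage and performance.")

print(reinplace_message("_attn_bwd", ["DQ", "DK", "DV"], []))
```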
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Major |
2,503,401,367 | go | runtime: implement procyield as ISB instruction on arm64 | When looking into #68578, I found that the implementation of `runtime.procyield` on GOARCH=arm64 uses the `YIELD` instruction, and that the `YIELD` instruction is in effect a (fast) `NOP`.
The current code is at https://github.com/golang/go/blob/9a4fe7e14a4f71267f929c5545916f9830a89187/src/runtime/asm_arm64.s#L917-L923 , and I added a benchmark, `runtime.BenchmarkProcYield`, at https://go.dev/cl/601396 .
The difference in delay between amd64 (slow, using `PAUSE`) and arm64 (fast, using `YIELD`) makes it hard to be confident in the tuning of the `runtime.lock2` spin loop. Note that it's easy to tune the spin loop for the specific duration of a microbenchmark's critical section, which might not be the best tuning for Go overall.
It looks like Rust uses `ISB SY`, https://github.com/rust-lang/rust/blob/d6c8169c186ab16a3404cd0d0866674018e8a19e/library/core/src/hint.rs#L291-L295 , changed in https://github.com/rust-lang/rust/commit/c064b6560b7ce0adeb9bbf5d7dcf12b1acb0c807 . I've confirmed that using `ISB` results in a longer delay on the hardware most easily available to me (M1 MacBook Air), which I'd expect to be a benefit to `runtime.lock2`, both reducing the likelihood of acquiring the lock without a sleep and controlling the electrical energy used to do so.
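For reference, a sketch of what the change might look like in `src/runtime/asm_arm64.s`. The surrounding loop is paraphrased from the linked code, and the `$15` operand (the `SY` encoding) is my assumption about the Go assembler's spelling:

```asm
TEXT runtime·procyield(SB),NOSPLIT,$0-0
	MOVWU	cycles+0(FP), R0
again:
	ISB	$15	// ISB SY, replacing YIELD
	SUBW	$1, R0
	CBNZ	R0, again
	RET
```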
What I'd most prefer is for Go 1.24 to include a fix for #68578 , and for that to be the _only_ change to `runtime.lock2` in the Go 1.24 cycle (so it's clear whether that change is to blame for any changes in mutex performance). But I'm opening this now so we at least don't lose track of it.
CC @golang/runtime @golang/arm | Performance,NeedsInvestigation,arch-arm64,compiler/runtime | low | Major |
2,503,409,417 | go | os: TestGetwdDeep failures [consistent failure] | ```
#!watchflakes
default <- pkg == "os" && test == "TestGetwdDeep"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737801713094473073)):
=== RUN TestGetwdDeep
getwd_unix_test.go:57: Getwd len: 262
getwd_unix_test.go:57: Getwd len: 463
getwd_unix_test.go:57: Getwd len: 664
getwd_unix_test.go:57: Getwd len: 865
getwd_unix_test.go:57: Getwd len: 0
getwd_unix_test.go:59: getwd: invalid argument
--- FAIL: TestGetwdDeep (0.07s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsFix | low | Critical |
2,503,497,066 | ui | [bug]: AccordionTrigger throws error with asChild | ### Describe the bug
If you try to use the `asChild` prop with `AccordionTrigger`, React throws an error because the trigger renders both an icon and its children, but `asChild` can only render one child (a Radix limitation)
### Affected component/components
Accordion
### How to reproduce
Add asChild prop to an AccordionTrigger
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
None
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,503,518,855 | godot | Meshes without UV2 cannot block light / contribute to LightmapGI | ### Tested versions
- Tested in Godot 4.3-stable
### System information
Godot v4.3.stable - Windows 10.0.22621 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 Ti (NVIDIA; 31.0.15.3742) - Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz (12 Threads)
### Issue description
Meshes that have no UV2, even when the GeometryInstance3D `gi_mode` is set to Static, do not appear to contribute any shadows to the LightmapGI.

Unwrapping the UV2 does allow things to contribute, but to me this doesn't make any sense. They should still be capable of being rasterized by the lightmapper for direct light even if they have no UV2 to allow for writing to the lightmap, and it should be up to the `gi_mode` to dictate if it is contributing. I'd understand if they couldn't have bounce light when `use_texture_for_bounces` is true, but it should still be visible to the direct light.

Currently this makes setting up "light blockers", like gobos (although texture gobos wouldn't work anyway without #90109), or geometry to prevent light bleeding, annoying. You have to make sure they have UV2s, and then give them a small lightmap size hint so they don't waste space on the atlas. There doesn't seem to be a better way to do it than that
### Steps to reproduce
Just bake a LightmapGI in a scene with meshes that have no UV2 alongside meshes that do.
### Minimal reproduction project (MRP)
N/A | discussion,topic:rendering,documentation,topic:3d | low | Major |
2,503,546,091 | flutter | [iOS] Consider using Skia PathOps to compute path intersections for iOS platform views. | In https://github.com/flutter/engine/pull/54820#pullrequestreview-2278206731 , we need to fall back to using software rasterized clips when a platform view is impacted by multiple clip paths. The iOS clipping path treats the path values as a union, whereas we need an intersection.
Instead , we could use skia's path ops library to compute the intersected path ourselves. We'd need the original mutator SkPaths for this, but it should otherwise be possible. | engine,P2,team-ios,triaged-ios | low | Minor |
2,503,579,410 | stable-diffusion-webui | [Bug]: runwayml removed SD1.5 repo | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When no checkpoint/safetensors file has been downloaded, the webui pulls the default SD 1.5 safetensors file from the runwayml HF repo. Recently they removed it.
### Steps to reproduce the problem
run the webUI script as usual
### What should have happened?
If no safetensors/checkpoint file has been downloaded, it should pull the SD 1.5 safetensors file from HF.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-09-03-19-04.json](https://github.com/user-attachments/files/16853635/sysinfo-2024-09-03-19-04.json)
### Console logs
```Shell
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
Launching Web UI with arguments: --listen
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
load_model()
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/sd_models.py", line 788, in load_model
checkpoint_info = checkpoint_info or select_checkpoint()
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/sd_models.py", line 234, in select_checkpoint
raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
- file /home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/model.ckpt
- directory /home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/models/Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 11.8s (prepare environment: 4.6s, import torch: 2.6s, import gradio: 0.6s, setup paths: 2.6s, other imports: 0.6s, list SD models: 0.1s, load scripts: 0.2s, create ui: 0.3s).
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/user1/Downloads/play_ground/test/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/home/user1/Downloads/play_ground/test/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/ui.py", line 1165, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
load_model()
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/sd_models.py", line 788, in load_model
checkpoint_info = checkpoint_info or select_checkpoint()
File "/home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/modules/sd_models.py", line 234, in select_checkpoint
raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
- file /home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/model.ckpt
- directory /home/user1/Downloads/play_ground/drawing/stable-diffusion-webui/models/Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
Stable diffusion model failed to load
```
### Additional information
This repo has the original SD 1.5 archive saved
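Until the default download URL is fixed, here is a minimal sketch of fetching the checkpoint manually into the webui models directory. It assumes the mirror linked below keeps the same file layout as the original runwayml repo:

```python
import urllib.request
from pathlib import Path

# Mirror of the removed runwayml repo; the exact file path inside the mirror
# is an assumption based on the original repo's layout.
MIRROR_URL = (
    "https://huggingface.co/botp/stable-diffusion-v1-5/"
    "resolve/main/v1-5-pruned-emaonly.safetensors"
)

def fetch_sd15(dest_dir="models/Stable-diffusion"):
    """Download the SD 1.5 checkpoint into the webui models directory."""
    dest = Path(dest_dir) / "v1-5-pruned-emaonly.safetensors"
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(MIRROR_URL, dest)
    return dest
```

Dropping the file into `models/Stable-diffusion/` should let `select_checkpoint` find it on the next launch.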
https://huggingface.co/botp/stable-diffusion-v1-5 | bug-report | low | Critical |
2,503,583,485 | deno | [webgpu] backend not found when use onnxruntime-web | Version: Deno 1.46.2
```ts
import { InferenceSession } from "npm:onnxruntime-web";
const modelFile = await Deno.readFile("./model.onnx");
InferenceSession.create(modelFile, {
  executionProviders: ["webgpu"],
});
```
`DENO_FUTURE=1 deno run -A main.ts`
The error:
```
error: Uncaught (in promise) Error: no available backend found. ERR: [webgpu] backend not found.
at resolveBackendAndExecutionProviders (file:///home/jlucaso/.cache/deno/npm/registry.npmjs.org/onnxruntime-common/1.19.0/dist/esm/backend-impl.js:120:15)
at eventLoopTick (ext:core/01_core.js:175:7)
at async Function.create (file:///home/jlucaso/.cache/deno/npm/registry.npmjs.org/onnxruntime-common/1.19.0/dist/esm/inference-session-impl.js:180:52)
```
| bug,webgpu,node compat | low | Critical |
2,503,620,761 | go | x/crypto/ssh/test: TestRunCommandStdin failures | ```
#!watchflakes
default <- pkg == "golang.org/x/crypto/ssh/test" && test == "TestRunCommandStdin"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738963458115434353)):
```
=== RUN TestRunCommandStdin
    session_test.go:78: session failed: EOF
    test_unix_test.go:246: sshd:
        Error reading managed configuration (2: No such file or directory). Proceeding with default configuration.
        /Users/swarming/.swarming/w/ir/x/t/sshtest1408656954/sshd_config line 10: Deprecated option KeyRegenerationInterval
        /Users/swarming/.swarming/w/ir/x/t/sshtest1408656954/sshd_config line 11: Deprecated option ServerKeyBits
        /Users/swarming/.swarming/w/ir/x/t/sshtest1408656954/sshd_config line 17: Deprecated option RSAAuthentication
        /Users/swarming/.swarming/w/ir/x/t/sshtest1408656954/sshd_config line 22: Deprecated option RhostsRSAAuthentication
        debug1: inetd sockets after dupping: 4, 5
        BSM audit: getaddrinfo failed for UNKNOWN: nodename nor servname provided, or not known
        ...
        debug1: auth_activate_options: setting new authentication options
        Accepted publickey for swarming from UNKNOWN port 65535 ssh2: ECDSA SHA256:DbuSF5a8c3JMmpZ5WiK8oLAx97Uu8zIAFReb/NyTPuo
        debug1: monitor_child_preauth: user swarming authenticated by privileged process
        debug1: auth_activate_options: setting new authentication options [preauth]
        debug2: userauth_pubkey: authenticated 1 pkalg ecdsa-sha2-nistp256 [preauth]
        debug1: monitor_read_log: child log fd closed
        BSM audit: bsm_audit_session_setup: setaudit_addr failed: Invalid argument
        Could not create new audit session
        debug1: do_cleanup
--- FAIL: TestRunCommandStdin (0.13s)
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,503,656,044 | vscode | API to expose the shell's actual environment to extensions | null | feature-request,api,on-testplan,api-proposal,terminal-shell-integration | low | Minor |
2,503,663,649 | pytorch | torch.cond export succeeds with strict, fails in non-strict with "Could not extract specialized integer from data-dependent expression" | ### 🐛 Describe the bug
This is a synthetic example I generated while trying to make a reproducer for https://fb.workplace.com/groups/6829516587176185/posts/7705964779531357/ So I don't care about it per se but it might shed light on some preexisting problem you may want to fix.
This test
```python
def test_cond_contains_unbacked_no_escape(self):
    class M(torch.nn.Module):
        def forward(self, a, b):
            az = a.nonzero()

            def true_fn(x):
                b0 = b.item()
                return x * b0

            def false_fn(x):
                return x + 1

            r = torch.cond(az.size(0) > 3, true_fn, false_fn, (az,))
            return r * 2

    args = (torch.randn(7), torch.tensor([4]))
    M()(*args)
    torch.export.export(M(), args, strict=False)
```
passes when strict=True, but fails with strict=False:
```
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/lazy.py", line 29, in realize
self.vt = VariableBuilder(tx, self.source)(self.value)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 375, in __call__
vt = self._wrap(value)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 541, in _wrap
return type_dispatch(self, value)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 1586, in wrap_tensor
tensor_variable = wrap_fx_proxy(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 2030, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 2142, in wrap_fx_proxy_cls
example_value = wrap_to_fake_tensor_and_record(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 2697, in wrap_to_fake_tensor_and_record
fake_e = wrap_fake_exception(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/utils.py", line 1574, in wrap_fake_exception
return fn()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 2698, in <lambda>
lambda: tx.fake_mode.from_tensor(
File "/data/users/ezyang/a/pytorch/torch/_subclasses/fake_tensor.py", line 2250, in from_tensor
return self.fake_tensor_converter.from_real_tensor(
File "/data/users/ezyang/a/pytorch/torch/_subclasses/fake_tensor.py", line 374, in from_real_tensor
out = self.meta_converter(
File "/data/users/ezyang/a/pytorch/torch/_subclasses/meta_utils.py", line 1660, in __call__
r = self.meta_tensor(
File "/data/users/ezyang/a/pytorch/torch/_subclasses/meta_utils.py", line 1441, in meta_tensor
) = sym_sizes_strides_storage_offset(t, source, symbolic_context)
File "/data/users/ezyang/a/pytorch/torch/_subclasses/meta_utils.py", line 753, in sym_sizes_strides_storage_offset
t_size = tuple(
File "/data/users/ezyang/a/pytorch/torch/_subclasses/meta_utils.py", line 754, in <genexpr>
shape_env._maybe_specialize_sym_int_with_hint(sz)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 3073, in _maybe_specialize_sym_int_with_hint
return maybe_sym.node.require_hint()
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/sym_node.py", line 183, in require_hint
return self.shape_env.size_hint(self.expr)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 4668, in size_hint
raise self._make_data_dependent_error(result_expr, expr)
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not extract specialized integer from data-dependent expression u0 (unhinted: u0)
. (Size-like symbols: u0)
Potential framework code culprit (scroll up for full backtrace):
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builder.py", line 2698, in <lambda>
lambda: tx.fake_mode.from_tensor(
```
It looks related to how we try to get hints for the outer variables but fail to do so. @ydwu4 didn't you patch one of these for a different case? Here's another one.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @pianpwk @zou3519
### Versions
main | oncall: pt2,oncall: export | low | Critical |
2,503,663,816 | vscode | SQL syntax highlighting breaks with string inside f-string | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: Version: 1.92.2
- OS Version: macOS Sequoia 15.0 public beta
Steps to Reproduce:
1. When inside a cell of a jupyter notebook, create an f-string to store SQL query text.
2. As soon as a string is used inside of the f-string, the syntax highlighting breaks
Attached image:
1. Examples of broken syntax, non broken syntax without a string, and .sql file with correct syntax of this example
[syntax_highlighting_bug.pdf](https://github.com/user-attachments/files/16854452/syntax_highlighting_bug.pdf) | bug,notebook | low | Critical |
2,503,680,272 | TypeScript | Adjust relative imports in TypeScript when using copy-paste | When copying an import from a file, it would be nice if pasting it in another file would adjust the path
Copying the import from `src/folder1/a.ts`:
```
//@filename: src/folder1/a.ts
import { x } from './b';
```
It would be very useful if pasting it in `src/folder2/a.ts` would generate (note how the import path was correctly adjusted):
```
//@filename: src/folder2/a.ts
import { x } from '../folder1/b';
```
This will be very useful once we switch to relative imports starting from tomorrow. | Suggestion,Experience Enhancement | low | Minor |
2,503,690,510 | pytorch | Setting a `bool` tensor both to `min` and `max` argument of `clamp()` gets error while setting a `bool` tensor only to `min` or `max` argument of `clamp()` works | ### 🐛 Describe the bug
Setting `bool` tensors to both the `min` and `max` arguments of [clamp()](https://pytorch.org/docs/stable/generated/torch.clamp.html) raises the error shown below:
```python
import torch
my_tensor = torch.tensor([True, False, True, False])
torch.clamp(input=my_tensor,
min=torch.tensor([False, True, False, True]),
max=torch.tensor([False, True, False, True])) # Error
```
> RuntimeError: "clamp_cpu" not implemented for 'Bool'
But setting a `bool` tensor to only the `min` or the `max` argument of `clamp()` works, as shown below:
```python
import torch
my_tensor = torch.tensor([True, False, True, False])
torch.clamp(input=my_tensor,
min=torch.tensor([False, True, False, True]))
# tensor([True, True, True, True])
torch.clamp(input=my_tensor,
max=torch.tensor([False, True, False, True]))
# tensor([False, False, False, False])
```
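One possible workaround (a sketch, not from the original report): cast the `bool` tensors to an integer dtype, where the clamp kernel is implemented, then cast back:

```python
import torch

my_tensor = torch.tensor([True, False, True, False])
lo = torch.tensor([False, True, False, True])
hi = torch.tensor([False, True, False, True])

# Clamp in int64, then cast the result back to bool.
result = torch.clamp(my_tensor.long(), min=lo.long(), max=hi.long()).bool()
# tensor([False,  True, False,  True])
```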
### Versions
```python
import torch
torch.__version__ # 2.4.0+cu121
```
cc @albanD @malfet | triaged,actionable,module: python frontend,module: edge cases | low | Critical |
2,503,704,301 | PowerToys | Mouse Without Borders Issues + Question | ### Microsoft PowerToys version
0.84
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
I have not seen any errors produced in Event Viewer.
1. After waking from sleep it stops working.
2. Running it as a service does not work, because after a restart I have to open the program again before it works.
Question. I only have 2 PCs connected and I want to only control one via the other and not vice versa. Is there a way to accomplish this?
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
PC 1: Windows 10
PC 2: Windows 11 | Issue-Bug,Needs-Triage | low | Critical |
2,503,725,959 | pytorch | Setting a scalar and a 0D tensor or a 0D tensor and a scalar to `min` and `max` argument of `clamp()` respectively works | ### 🐛 Describe the bug
Setting a scalar and a 1D tensor, or a 1D tensor and a scalar, to the `min` and `max` arguments of [clamp()](https://pytorch.org/docs/stable/generated/torch.clamp.html) respectively raises the error messages shown below:
```python
import torch
my_tensor = torch.tensor([0., 1., 2., 3., 4., 5., 6., 7.])
torch.clamp(input=my_tensor, min=2., max=torch.tensor([5.])) # Error
torch.clamp(input=my_tensor, min=torch.tensor([2.]), max=5.) # Error
```
```
TypeError: clamp() received an invalid combination of arguments - got (max=Tensor, min=float, input=Tensor, ), but expected one of:
* (Tensor input, Tensor min = None, Tensor max = None, *, Tensor out = None)
* (Tensor input, Number min = None, Number max = None, *, Tensor out = None)
```
```
TypeError: clamp() received an invalid combination of arguments - got (max=float, min=Tensor, input=Tensor, ), but expected one of:
* (Tensor input, Tensor min = None, Tensor max = None, *, Tensor out = None)
* (Tensor input, Number min = None, Number max = None, *, Tensor out = None)
```
But setting a scalar and a 0D tensor, or a 0D tensor and a scalar, to the `min` and `max` arguments of `clamp()` respectively works, as shown below:
```python
import torch
my_tensor = torch.tensor([0., 1., 2., 3., 4., 5., 6., 7.])
torch.clamp(input=my_tensor, min=2., max=torch.tensor(5.))
torch.clamp(input=my_tensor, min=torch.tensor(2.), max=5.)
# tensor([2., 2., 2., 3., 4., 5., 5., 5.])
```
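As a workaround for the 1D case (a sketch, not an official recommendation), you can make both bounds the same kind, either both tensors or both plain numbers:

```python
import torch

my_tensor = torch.tensor([0., 1., 2., 3., 4., 5., 6., 7.])

# Promote the scalar bound to a tensor so the Tensor/Tensor overload applies.
r1 = torch.clamp(my_tensor, min=torch.tensor(2.), max=torch.tensor([5.]))

# Or extract the number from the 1D tensor so the Number/Number overload applies.
r2 = torch.clamp(my_tensor, min=2., max=torch.tensor([5.]).item())
# both: tensor([2., 2., 2., 3., 4., 5., 5., 5.])
```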
### Versions
```python
import torch
torch.__version__ # 2.4.0+cu121
```
cc @albanD | triaged,actionable,module: python frontend,module: edge cases | low | Critical |
2,503,779,768 | pytorch | Compile time regression from loop ordering after fusion | 

Git bisect points to #126254
We expected some regression, but we should look into ways to fix it. I was a bit surprised that the regression happened even though it was disabled.
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Major |
2,503,826,227 | pytorch | [export] non-strict export error on torch.distributions.Normal | ### 🐛 Describe the bug
The following issues showed up when I ran **non-strict** export on the `soft_actor_critic` and `drq` models in `torchbench`. A data-dependent expression is hit when calling `torch.distributions.Normal`. Running in strict mode, however, doesn't have these issues.
**Repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import distributions as pyd
class SquashedNormal(pyd.transformed_distribution.TransformedDistribution):
    def __init__(self, loc, scale, tanh_transform_clamp=(-0.99, 0.99)):
        self.loc = loc
        self.scale = scale
        self.tanh_transform_clamp = tanh_transform_clamp
        self.base_dist = pyd.Normal(loc, scale)
        super().__init__(self.base_dist, [])

    @property
    def mean(self):
        mu = self.loc
        for tr in self.transforms:
            mu = tr(mu)
        return mu

def _squashed_normal_flatten(t: SquashedNormal):
    return [t.loc, t.scale], t.tanh_transform_clamp

def _squashed_normal_unflatten(values, context):
    return SquashedNormal(*values, context)

torch.utils._pytree.register_pytree_node(
    SquashedNormal,
    _squashed_normal_flatten,
    _squashed_normal_unflatten,
    serialized_type_name=f"{SquashedNormal.__module__}.{SquashedNormal.__name__}",
)

class StochasticActor(nn.Module):
    def __init__(
        self,
        state_space_size=4,
        act_space_size=2,
    ):
        super().__init__()
        self.fc = nn.Linear(state_space_size, act_space_size)

    def forward(self, state):
        out = F.relu(self.fc(state))
        mu, log_std = out.chunk(2, dim=1)
        log_std = torch.tanh(log_std)
        std = (-10 + 6*(log_std+1)).exp()
        dist = SquashedNormal(mu, std)
        return dist
model = StochasticActor()
ep = torch.export.export(model, (torch.randn(1,4),), strict=False)
print (ep)
```
**Error Msg**
```
W0903 15:10:15.901000 2821440 torch/fx/experimental/symbolic_shapes.py:5128] failed during evaluate_expr(Eq(u0, 1), hint=None, expect_rational=True, size_oblivious=False, forcing_spec=False
E0903 15:10:15.901000 2821440 torch/fx/experimental/recording.py:298] failed while running evaluate_expr(*(Eq(u0, 1), None), **{'fx_node': False})
Traceback (most recent call last):
File "/home/yimingzhou/pytorch/torch/_export/non_strict_utils.py", line 513, in __torch_function__
return func(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/fx/experimental/sym_node.py", line 451, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/home/yimingzhou/pytorch/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
File "/home/yimingzhou/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5126, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
File "/home/yimingzhou/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5244, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
Potential framework code culprit (scroll up for full backtrace):
File "/home/yimingzhou/pytorch/torch/_export/non_strict_utils.py", line 513, in __torch_function__
return func(*args, **kwargs)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
The following call raised this error:
File "/home/yimingzhou/pytorch/pyd_normal_repro.py", line 11, in __init__
self.base_dist = pyd.Normal(loc, scale)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/yimingzhou/pytorch/pyd_normal_repro.py", line 57, in <module>
ep = torch.export.export(model, (torch.randn(1,4),), strict=False)
File "/home/yimingzhou/pytorch/torch/export/__init__.py", line 258, in export
return _export(
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 1007, in wrapper
raise e
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 980, in wrapper
ep = fn(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/export/exported_program.py", line 105, in wrapper
return fn(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 1915, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 1719, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 626, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 1648, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/home/yimingzhou/pytorch/torch/_functorch/aot_autograd.py", line 1246, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/home/yimingzhou/pytorch/torch/_functorch/aot_autograd.py", line 1480, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/home/yimingzhou/pytorch/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/yimingzhou/pytorch/torch/_functorch/aot_autograd.py", line 623, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/home/yimingzhou/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 168, in inner
flat_f_outs = f(*flat_f_args)
File "/home/yimingzhou/pytorch/torch/_functorch/_aot_autograd/utils.py", line 182, in flat_fn
tree_out = fn(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 863, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/home/yimingzhou/pytorch/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/export/_trace.py", line 1635, in forward
tree_out = self._export_root(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/yimingzhou/pytorch/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/yimingzhou/pytorch/pyd_normal_repro.py", line 52, in forward
dist = SquashedNormal(mu, std)
File "/home/yimingzhou/pytorch/pyd_normal_repro.py", line 11, in __init__
self.base_dist = pyd.Normal(loc, scale)
File "/home/yimingzhou/pytorch/torch/distributions/normal.py", line 59, in __init__
super().__init__(batch_shape, validate_args=validate_args)
File "/home/yimingzhou/pytorch/torch/distributions/distribution.py", line 70, in __init__
if not valid.all():
File "/home/yimingzhou/pytorch/torch/_export/non_strict_utils.py", line 515, in __torch_function__
_suggest_fixes_for_data_dependent_error_non_strict(e)
File "/home/yimingzhou/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5598, in _suggest_fixes_for_data_dependent_error_non_strict
for path, leaf in pytree.tree_leaves_with_path(val):
File "/home/yimingzhou/pytorch/torch/utils/_pytree.py", line 1544, in tree_leaves_with_path
return list(_generate_key_paths((), tree, is_leaf))
File "/home/yimingzhou/pytorch/torch/utils/_pytree.py", line 1570, in _generate_key_paths
raise ValueError(
ValueError: Did not find a flatten_with_keys_fn for type: <class '__main__.SquashedNormal'>. Please pass a flatten_with_keys_fn argument to register_pytree_node.
```
### Versions
main
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,503,830,748 | vscode | textsearchprovider - Fix API wording | Some nits from https://github.com/microsoft/vscode/issues/226775
>
> > ```
> > * If explicitly contains a newline character (`\n`), the default search behavior
> > ```
>
> ```
> * will automatically enable {@link isMultiline}.
> */
> ```
>
> For clarity, I'd propose:
>
> ```
> - * If explicitly contains a newline character
> + * If pattern contains a newline character
> ```
>
...
>
> > ```
> > * `pattern` contains a newline character (`\n`).
> > ```
>
> maybe could use `{@link pattern}`
>
These are from https://github.com/microsoft/vscode/blob/a554d9e7e1353b2fbf16696a08ad25cb873dc2ac/src/vscode-dts/vscode.proposed.textSearchProviderNew.d.ts | bug,search,search-api | low | Minor |
2,503,836,548 | vscode | textsearchprovider - clarify that `includes` are relative to `folder` | https://github.com/microsoft/vscode/blob/a554d9e7e1353b2fbf16696a08ad25cb873dc2ac/src/vscode-dts/vscode.proposed.textSearchProviderNew.d.ts#L67-L70
It is unclear why `includes` is a different shape than `excludes`. This is because `includes` entries are relative to the `folder` and cannot have a different `baseUri`.
From https://github.com/microsoft/vscode/issues/226775 | search,polish,search-api | low | Minor |
2,503,839,829 | pytorch | DISABLED test_streaming_backwards_multiple_streams (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_streaming_backwards_multiple_streams&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29626870350).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 30 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_streaming_backwards_multiple_streams`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1232, in not_close_error_metas
pair.compare()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 711, in compare
self._compare_values(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 841, in _compare_values
compare_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1023, in _compare_regular_values_close
if torch.all(matches):
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1251, in test_streaming_backwards_multiple_streams
self.assertEqual(x_grad, torch.ones_like(x) * 5 * iters)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3846, in assertEqual
error_metas = not_close_error_metas(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1239, in not_close_error_metas
f"Comparing\n\n"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 378, in __repr__
body = [
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 379, in <listcomp>
f" {name}={value!s},"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 514, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 708, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 625, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 357, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 145, in __init__
nonzero_finite_vals = torch.masked_select(
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_streaming_backwards_multiple_streams
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
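The error text itself points at a workaround environment variable. If reproducing locally, it must be set before CUDA is initialized (a sketch, using exactly the value the error message suggests):

```python
import os

# must be set before the first `import torch` / CUDA initialization,
# per the RuntimeError message above
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "use_cuda_host_register:False"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # use_cuda_host_register:False
```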
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,503,839,883 | vscode | search APIs- deep links to interface fields do not resolve | https://github.com/microsoft/vscode/blob/a554d9e7e1353b2fbf16696a08ad25cb873dc2ac/src/vscode-dts/vscode.proposed.textSearchProviderNew.d.ts#L96
`@link` references to things like `TextSearchProviderOptions.useIgnoreFiles.local` do not resolve to the correct object. I should check these over to make sure that they are correct.
From https://github.com/microsoft/vscode/issues/226775 | bug,debt,search,search-api | low | Minor |
2,503,906,024 | vscode | Custom controls need to provide proper textual name, role, and state information | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: 14.6.1 (23G93)
Steps to Reproduce:
The current Monaco editor has some accessibility issues that impact how some users can access the content they are viewing.
1. Controls to expand or collapse lines in the code editor lack accessible name, role, and state information, which prevents screen reader users from determining that these controls are meant to expand and collapse content or whether they are currently expanded or collapsed. To address this, ensure that custom controls provide proper textual name, role, and state information. These controls should either be `<button>` elements or have `role="button"`, include` aria-expanded="true/false"`, and have an accessible name set through aria-label or another means.
<img width="497" alt="codeeditor" src="https://github.com/user-attachments/assets/cc5d7f86-e4f6-46cb-ba5d-3704851a9677">
<img width="465" alt="codeeditor 2" src="https://github.com/user-attachments/assets/cb7a09b3-93ae-4c51-b37e-00798506ad63">
| bug,accessibility,editor-folding | low | Critical |
2,503,921,063 | deno | deno install'd script re-downloads jsr typechecking dependencies on every run | Version: Deno 1.46.2
### repro steps
install the script
`deno install --global --allow-run=ffprobe,ffmpeg --unstable-ffi --check --allow-read --allow-write --allow-ffi --allow-env=DENO_SQLITE_LOCAL,DENO_SQLITE_PATH,HOME,DENO_DIR,XDG_CACHE_HOME --allow-net --name forager-cli --force jsr:@forager/cli@0.4.2`
then run the following
`forager-cli 2>&1 | tee output.txt`
This will show
```
Download https://jsr.io/@forager/web/runtime/control.js
Download https://jsr.io/@forager/web/0.0.6/types
Download https://jsr.io/@forager/web/0.0.6/@sveltejs/kit
Download https://jsr.io/@forager/web/0.0.6/types.js
Download https://jsr.io/@forager/web/0.0.6/page/types.js
```
on every run. This output appears to go away when I remove the `--check` flag from the install command:
`deno install --global --allow-run=ffprobe,ffmpeg --unstable-ffi --allow-read --allow-write --allow-ffi --allow-env=DENO_SQLITE_LOCAL,DENO_SQLITE_PATH,HOME,DENO_DIR,XDG_CACHE_HOME --allow-net --name forager-cli --force jsr:@forager/cli@0.4.2`
I believe this is because the script compiles with typing references that do not exist: https://jsr.io/@forager/web/0.0.6/server.js#L40972
```ts
/** @type {import('../runtime/control.js').Redirect | HttpError | SvelteKitError | Error} */
```
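A build-step workaround could strip such dangling JSDoc type references from the generated bundle before publishing (a hypothetical sketch; `strip_import_type_comments` is not part of any existing tool):

```python
import re

def strip_import_type_comments(src: str) -> str:
    # remove JSDoc annotations like
    # /** @type {import('../runtime/control.js').Redirect | Error} */
    return re.sub(r"/\*\*\s*@type\s*\{import\([^)]*\)[^}]*\}\s*\*/", "", src)

sample = "/** @type {import('../runtime/control.js').Redirect | Error} */\nlet x;"
print(strip_import_type_comments(sample))
```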
I think this is possibly something I can fix in my build system by stripping out these bad typing references (this is an autogenerated file deep inside my library, so typing isn't really important). It does feel like there is a deno bug if it continually tries to download a non-existent dependency and fails to warn about that anywhere. Also, more generally, I don't like that I have control over when my script reaches out to the internet (`--allow-net`) but no control over when deno itself reaches out to the internet. There is `DENO_NO_UPDATE_CHECK`, and `deno cache ...`, but that is different from a kind of `DENO_NO_NET` env var or some such flag that I can turn on when I want to avoid any network traffic from deno. | bug,cli | low | Critical |
2,503,968,809 | neovim | apply extmark hl_group in breakindent "gap" space | ### Problem
To reproduce current behavior:
1. `echo " Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer ut eleifend metus. Proin velit dui, suscipit in viverra eu, scelerisque dictum elit." > test.txt`
2. `nvim -u NONE test.txt`
3. `:lua vim.api.nvim_buf_set_extmark(0, vim.api.nvim_create_namespace('test'), 0, 0, { end_row = 1, hl_group = 'DiffChange', hl_eol = true })`
4. `:set columns=50`
5. `:set breakindent`
This results in:
| Before setting `breakindent` | After setting `breakindent` |
| ----------------------------- | --------------------------- |
| <img width="449" alt="before-breakindent" src="https://github.com/user-attachments/assets/32f4348f-906c-4685-a0cc-5270928ce8dc"> | <img width="443" alt="after-breakindent" src="https://github.com/user-attachments/assets/3f7fdee1-3e1a-4698-9861-691dac1bf066"> |
### Expected behavior
Ideally the space created by `breakindent` would have the same highlight applied as any overlapping `extmark`s.
This is kind of similar in concept to: https://github.com/neovim/neovim/issues/23108, however rather than repeating virtual text on wrapped lines it repeats the highlight on any space created by wrapped lines, if that makes sense.
I'm unsure if this is technically feasible and the use case is rather small.
Related issue cut to a plugin I own: https://github.com/MeanderingProgrammer/render-markdown.nvim/issues/149.
- If there's a way I can highlight this space on the plugin side I'd love to know, I was unable to figure one out | enhancement,marks,highlight | low | Minor |
2,504,029,431 | PowerToys | ACCOUNT | ### Description of the new feature / enhancement
ACCOUNT LINKED TO A MICROSOFT ACCOUNT, CLIPBOARD BACKUP, AND KEYBOARD SHORTCUT BACKUP
### Scenario when this would be used?
ACCOUNT LINKED TO A MICROSOFT ACCOUNT, CLIPBOARD BACKUP, AND KEYBOARD SHORTCUT BACKUP
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,504,036,981 | transformers | Qwen2-VL Doesn't Execute on TPUs | ### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.10.14
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.0.dev20240830+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
#Following this Qwen2-VL guide => https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct#quickstart
1. Script
```
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import numpy as np
import torch
import torch_xla as xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.spmd as xs
from torch.distributed._tensor import DeviceMesh, distribute_module
from torch_xla.distributed.spmd import auto_policy
from torch_xla import runtime as xr
from torch_xla.experimental.spmd_fully_sharded_data_parallel import (
_prepare_spmd_partition_spec,
SpmdFullyShardedDataParallel as FSDPv2,
)
import time
start = time.time()
device = xm.xla_device()
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct",
torch_dtype=torch.bfloat16,
attn_implementation="eager",
).to(device)
print(model.device)
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4")
message = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "image1.jpg",
},
{"type": "text", "text": "Describe this image in detail."},
],
}
]
all_messages = [[message] for _ in range(1)]
for messages in all_messages:
# Preparation for inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :]
for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed,
skip_special_tokens=True,
clean_up_tokenization_spaces=False,
)
for i, text in enumerate(output_text):
print(f"Output {i}: {text}")
print(f"Time taken: {time.time() - start}")
```
2. Output Logs
```
kojoe@t1v-n-cb70f560-w-0:~/EasyAnimate/easyanimate/image_caption$ python caption.py
WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████| 2/2 [00:00<00:00, 4.39it/s]
xla:0
```
### Expected behavior
The model works fine when changing ```device``` to ```"cpu"```, but it gets stuck executing on TPUs. The model should run on TPUs. | Feature request,bug,TPU | low | Minor |
2,504,046,521 | terminal | Windows Terminal sporadically fails to trim trailing white spaces. | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.19043 Build 19043
### Other Software
This happens no matter where I am copying from.
### Steps to reproduce
Open terminal, copy and paste into a new terminal window. The behavior is sporadic, but happens often. It makes it very hard to work in the terminal.
### Expected Behavior
pasted text will have the excess white space at the end of the line removed
### Actual Behavior
pasted text is treated like it was a single line that wrapped around, leading to long white spaces and no line breaks.
Behavior is sporadic. The problem is on the copy window side.
Example: Below data was copied and pasted. Some of the white space is removed, some is not.
```
secondary={} r=requests.get(HTML).content.decode().split("\n") for x in r[1:]:
if x:
if "NONE" not in x:
x=re.split("[\t ]",x)
pos=x[0].replace("_"," ")
if pos not in secondary:
secondary[pos]=set()
for v in x[1:]:
if "(" not in v:
secondary[pos]=secondary[pos]|set([v])
```
If I add a few newlines above the text, it copies correctly, but this is not a consistent fix:
```
secondary={}
r=requests.get(HTML).content.decode().split("\n")
for x in r[1:]:
if x:
if "NONE" not in x:
x=re.split("[\t ]",x)
pos=x[0].replace("_"," ")
if pos not in secondary:
secondary[pos]=set()
for v in x[1:]:
if "(" not in v:
secondary[pos]=secondary[pos]|set([v])
```
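For reference, the expected trimming behavior amounts to stripping the padding at the end of each copied row while keeping the row breaks (a sketch, not Terminal's actual implementation):

```python
def trim_copied_text(text: str) -> str:
    # drop trailing whitespace that padded each terminal row,
    # but keep the line breaks themselves
    return "\n".join(line.rstrip() for line in text.split("\n"))

sample = "secondary={}        \nfor x in r[1:]:     "
print(trim_copied_text(sample))
```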
| Issue-Bug,Area-TerminalControl,Product-Terminal,Priority-2 | low | Major |
2,504,113,281 | godot | rcedit Fails to Modify Resources with Console Wrapper Enabled | ### Tested versions
- Reproducible in v4.3.stable.mono and v4.2.x.stable.mono
- rcedit v2.0.0
### System information
Windows 11
### Issue description
When exporting a Godot project for Windows with the console wrapper enabled, the first executable (which is the console wrapper) properly has its icon and other metadata set by `rcedit`. But the second executable (the one without the wrapper) fails to have any of its resources modified, such as the icon and metadata, and falls back to Godot defaults.
This is the error that it gives:
```
editor/export/editor_export_platform.h:179 - Resources Modification: rcedit failed to modify executable: Fatal
error: Unable to commit changes
```
`rcedit` appears to work as intended when the console wrapper is disabled such as when the release build is used since the default is to not include the console wrapper. It seems this may be an issue with having more than one executable to modify at one time since the console wrapper is the executable that's produced first.
### Steps to reproduce
- Create Godot project.
- Set up Windows export by specifying all metadata under the `Application` header.
- This includes things like specifying an `.ico` file, company name, etc.
- Export with console wrapper enabled.
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:export | low | Critical |
2,504,129,656 | transformers | oom when using adafactor optimizer in deepspeed | ### System Info
```python
- `transformers` version: 4.44.2
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.2
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800 80GB PCIe
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running train_xl.sh from [this repo](https://github.com/yisol/IDM-VTON), and I changed the 8-bit Adam optimizer to the Adafactor optimizer using `transformers.optimization.Adafactor`. I'm using two 40GB A100s, DeepSpeed stage 2, batch size 1, and the VTON-HD dataset.
The Adafactor optimizer should use less GPU memory, because it keeps less optimizer state than 8-bit Adam, but it OOMs at [this line](https://github.com/huggingface/transformers/blob/ecd61c62862f925a18b4f063dc17fcaf01826e25/src/transformers/optimization.py#L877).
The OOM happens after 10 steps, and I don't know what happens at the 10th step; I call `accelerate.backward()` and `optimizer.step()` every step.
At the 10th step, memory usage increases from 29GB to 39GB when using the 8-bit Adam optimizer, and it OOMs when using the Adafactor optimizer.
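For scale, a back-of-the-envelope comparison of optimizer state sizes for a single weight matrix (a sketch; it ignores Adafactor's optional first-moment accumulator and the 8-bit quantization of Adam's states):

```python
import math

def adam_state_elems(shape):
    # Adam keeps two full-size moment tensors (exp_avg, exp_avg_sq)
    return 2 * math.prod(shape)

def adafactor_state_elems(shape):
    # Adafactor factors the second moment of an (r, c) matrix into
    # a length-r row statistic and a length-c column statistic
    if len(shape) == 2:
        return shape[0] + shape[1]
    return math.prod(shape)  # vectors fall back to a full-size statistic

shape = (4096, 4096)
print(adam_state_elems(shape))       # 33554432
print(adafactor_state_elems(shape))  # 8192
```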
### Expected behavior
Could anybody explain this phenomenon? | Usage,Good First Issue,bug | low | Minor |
2,504,131,539 | tauri | [bug] Tauri 2.0: Building for Android in Linux: Execution failed for task ':app:rustBuildArmDebug'. | ### Describe the bug
Can't compile Tauri Project for Android.
This is a fresh, unmodified project created with pnpm and solidjs that works for the web on Linux.
However, after installing Android Studio and setting the proper environment variables, the project fails to compile for Android (see stack trace).
### Reproduction
Follow the steps in https://v2.tauri.app/start/prerequisites/ for Linux
Create a tauri app using pnpm and solidjs
Install Android Studio on Linux, set your environment variables, create a desktop entry for android studio
Run `pnpm tauri android dev --open`
Click on the green arrow in android studio
### Expected behavior
_No response_
### Full `tauri info` output
```text
❯ pnpm tauri info
> budget@0.1.0 tauri /home/simon/prog/tauri/budget
> tauri "info"
[✔] Environment
- OS: Pop!_OS 22.4.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.44.2
✔ rsvg2: 2.52.5
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 21.5.0
- pnpm: 9.9.0
- npm: 10.2.4
[-] Packages
- tauri 🦀: 2.0.0-rc.8
- tauri-build 🦀: 2.0.0-rc.7
- wry 🦀: 0.42.0
- tao 🦀: 0.29.1
- @tauri-apps/api : 2.0.0-rc.4
- @tauri-apps/cli : 2.0.0-rc.10
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-shell : 2.0.0-rc.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: SolidJS
- bundler: Vite
```
### Stack trace
```text
Execution failed for task ':app:rustBuildArmDebug'.
> Process 'command 'pnpm'' finished with non-zero exit value 1
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:rustBuildArmDebug'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:130)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:293)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:128)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:116)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:85)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:48)
Caused by: org.gradle.process.internal.ExecException: Process 'command 'pnpm'' finished with non-zero exit value 1
at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:442)
at org.gradle.process.internal.DefaultExecAction.execute(DefaultExecAction.java:38)
at org.gradle.process.internal.DefaultExecActionFactory.exec(DefaultExecActionFactory.java:202)
at org.gradle.api.internal.project.DefaultProject.exec(DefaultProject.java:1196)
at BuildTask.runTauriCli(BuildTask.kt:37)
at BuildTask.assemble(BuildTask.kt:21)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:58)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:244)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:229)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:212)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:195)
at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:162)
at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:42)
at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:75)
at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:67)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:37)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:69)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:46)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:40)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:29)
at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
at org.gradle.internal.Either$Right.fold(Either.java:175)
at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:75)
at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:35)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:65)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:36)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:105)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:54)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:64)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:43)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:125)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:56)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:36)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:289)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:48)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:35)
at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:61)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:127)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:116)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:85)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:48)
BUILD FAILED in 10s
79 actionable tasks: 5 executed, 74 up-to-date
```
### Additional context
At first, I wasn't even able to run `pnpm tauri android dev --open`, because `Android Studio` wasn't in my PATH (`studio.sh` was). I had to create a Desktop Entry for Android Studio to even be able to make `pnpm tauri android dev --open` open Android Studio. | type: bug,platform: Linux,status: needs triage,platform: Android | low | Critical |
2,504,143,031 | vscode | Tooltips Missing Name and Role Information | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: 14.6.1 (23G93)
**Description**: Certain tooltips, such as "Ln 13, Col 14," "Spaces: 2," and "UTF-8," lack proper name and role information, making them inaccessible to screen reader users.
**Steps to Reproduce:**
1. Hover over content in the status bar showing "Ln 13, Col 14," "Spaces: 2," or "UTF-8."
2. Observe that these tooltips do not have corresponding name and role information for screen reader users.
**User Impact**: Screen reader users are unable to determine the presence of these tooltips, limiting accessibility.
**Expected Behavior**: Tooltip-triggering controls should have `aria-describedby` set to the tooltip’s ID, and the tooltip element should have `role="tooltip"` to provide the necessary accessibility information.
**Actual Behavior**: Tooltips are displayed without proper accessibility attributes, making them invisible to assistive technologies.
**Recommendation**: Ensure all tooltips have proper name, role, and state information. The control that opens the tooltip should use `aria-describedby`, and the tooltip element must include `role="tooltip"`.
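A minimal sketch of that wiring (element IDs, classes, and tooltip text here are illustrative, not VS Code's actual DOM):

```html
<!-- status bar entry that triggers the tooltip -->
<div class="statusbar-item" tabindex="0" aria-describedby="tooltip-ln-col">
  Ln 13, Col 14
</div>

<!-- the tooltip itself, exposed to assistive technologies -->
<div id="tooltip-ln-col" role="tooltip">Go to Line/Column</div>
```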
<img width="555" alt="footer" src="https://github.com/user-attachments/assets/f721dff5-1406-4a82-8ed0-6ee1501df50b">
| bug,accessibility,workbench-hover | low | Critical |
2,504,178,908 | ollama | nvidia/NV-Embed-v2 support | Can you support the NVIDIA/NV-Embed-v2 model?
https://huggingface.co/nvidia/NV-Embed-v2 | model request | medium | Critical |
2,504,259,556 | pytorch | Inplace addmm within Inductor | ### 🚀 The feature, motivation and pitch
```python
import torch

a = torch.randn(8, 8)
b = torch.randn(8, 8)
c = torch.randn(8, 8)

print(torch.addmm(a, b, c))  # out-of-place: allocates a new output tensor
print(a.addmm_(b, c))        # in-place method form (there is no top-level torch.addmm_)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,actionable,oncall: pt2,module: inductor | low | Major |
2,504,262,554 | node | `--experimental-test-coverage` falsely reports missing coverage where TS source is `import type` | ### Version
v22.8.0
### Platform
```text
Darwin TRI-N93DLJDY6Y 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
With this TypeScript source code in `src/a.mts` (where the import is from doesn't matter):
```ts
import type {} from "node:assert";
console.log("Hi");
```
Compiling that to `dist/a.mjs`:
```js
console.log("Hi");
export {};
//# sourceMappingURL=a.mjs.map
```
With source map `dist/a.mjs.map`:
```
{"version":3,"file":"a.mjs","sourceRoot":"","sources":["../src/a.mts"],"names":[],"mappings":"AAEA,OAAO,CAAC,GAAG,CAAC,IAAI,CAAC,CAAC"}
```
Then in `test.mjs`:
```js
import "./dist/a.mjs";
```
Run:
```sh
node --experimental-test-coverage --test test.mjs
```
### How often does it reproduce? Is there a required condition?
Every time.
### What is the expected behavior? Why is that the expected behavior?
100% code coverage reported for the module `src/a.mts`.
### What do you see instead?
Notice the false missing line coverage reported in the terminal output:
```
Hi
✔ test.mjs (49.142ms)
ℹ tests 1
ℹ suites 0
ℹ pass 1
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 53.724042
ℹ start of coverage report
ℹ ----------------------------------------------------------
ℹ file | line % | branch % | funcs % | uncovered lines
ℹ ----------------------------------------------------------
ℹ src/a.mts | 66.67 | 100.00 | 100.00 | 1
ℹ test.mjs | 100.00 | 100.00 | 100.00 |
ℹ ----------------------------------------------------------
ℹ all files | 75.00 | 100.00 | 100.00 |
ℹ ----------------------------------------------------------
ℹ end of coverage report
```
### Additional information
If you comment out the `import type {} from "node:assert";` and rebuild, a second run for functionally the same runtime code now correctly reports no missing coverage:
```
Hi
✔ test.mjs (47.03ms)
ℹ tests 1
ℹ suites 0
ℹ pass 1
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 51.619166
ℹ start of coverage report
ℹ ----------------------------------------------------------
ℹ file | line % | branch % | funcs % | uncovered lines
ℹ ----------------------------------------------------------
ℹ src/a.mts | 100.00 | 100.00 | 100.00 |
ℹ test.mjs | 100.00 | 100.00 | 100.00 |
ℹ ----------------------------------------------------------
ℹ all files | 100.00 | 100.00 | 100.00 |
ℹ ----------------------------------------------------------
ℹ end of coverage report
```
If you don't create source maps when compiling TypeScript modules containing `import type`, the Node.js test runner correctly reports no missing coverage. So the problem lies in how Node.js interprets the source maps. Only runtime code should determine code coverage, not source-only constructs like TypeScript `import type` that are eliminated in the build.
Something else that is strange is that the Node.js CLI flag `--enable-source-maps` doesn't seem to have any effect on how the test runner reports coverage; even without the flag it always takes the source map information into account. Why is coverage exempt from respecting how `--enable-source-maps` works for other Node.js features? | coverage,source maps,test_runner | low | Critical |
2,504,296,783 | svelte | "default" is imported from external module "svelte-autosize" but never used | ### Describe the bug
"default" is imported from external module "svelte-autosize" but never used

### Reproduction
```svelte
<script lang="ts">
import autosize from 'svelte-autosize'
</script>
<textarea use:autosize></textarea>
```
### Logs
_No response_
### System Info
```shell
Version: 1.92.1 (Universal)
Commit: eaa41d57266683296de7d118f574d0c2652e1fc4
Date: 2024-08-07T20:16:39.455Z
Electron: 30.1.2
ElectronBuildId: 9870757
Chromium: 124.0.6367.243
Node.js: 20.14.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.4.0
```
### Severity
annoyance | needs discussion | low | Critical |
2,504,303,973 | neovim | delete mapping by LHS only | ### Problem
`unmap` and `nvim_del_keymap` will delete all mappings with a matching lhs or rhs; there should be a way to delete only mappings with a matching lhs.
For example:
```lua
vim.api.nvim_set_keymap("n", "ge", "q", {})
vim.api.nvim_del_keymap("n", "q")
vim.api.nvim_command("verbose map ge") -- echoes: No mapping found
```
`nvim_del_keymap` will delete the mapping `normal: ge -> q` even though its lhs doesn't match.
There is no way to keep other mappings that have `q` as their rhs after running `unmap` or `nvim_del_keymap` on `q`.
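Until such an API exists, one user-side workaround is to check the registered lhs values first and delete only on an exact lhs match (the helper name below is hypothetical):

```lua
-- Delete a mapping only when a mapping with this exact lhs exists,
-- so mappings that merely have the key as their rhs are left intact.
local function del_keymap_by_lhs(mode, lhs)
  for _, map in ipairs(vim.api.nvim_get_keymap(mode)) do
    if map.lhs == lhs then
      vim.api.nvim_del_keymap(mode, lhs)
      return true
    end
  end
  return false -- no exact-lhs match; nothing deleted
end
```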
### Expected behavior
An API to delete only mappings with a matching lhs. | enhancement,api,compatibility,mappings | low | Major |
2,504,307,172 | pytorch | Any support for sparse linear layer in the future? | ### 🚀 The feature, motivation and pitch
Hello, I am currently training a deep learning model that requires the first hidden layer to be a sparse layer, connecting to the input layer with some weights (connections) absent. I searched across the Internet and there are a few open-source implementations of this, e.g. [SparseLinear](https://github.com/hyeon95y/SparseLinear). I am wondering if there is any plan for an official PyTorch implementation. Thanks in advance!
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged,enhancement | low | Minor |
2,504,312,944 | PowerToys | Workspaces: Does Not Open PWA Windows Correctly | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Open Edge and multiple (PWA) windows (I normally open five spread across two monitors, but this is not necessary to demonstrate the problem). Create a workspace with the layout. Close all the windows and activate the workspace.
### ✔️ Expected Behavior
The workspace should open with the PWA windows in the same location they were when the workspace was created.
### ❌ Actual Behavior
The location of each PWA window in the workspace shows a generic instance of Edge with a tab opened to the new tab page.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Tracker,Product-Workspaces | medium | Major |
2,504,332,946 | vscode | Content Lacks Proper Heading Markup | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.92.2
- OS Version: 14.6.1 (23G93)
**Description**: Certain content, such as "Extensions" and "HTML", visually functions as headings but does not use proper heading markup, making navigation difficult for screen reader users.
**Steps to Reproduce:**
1. Navigate to content like "Extensions" in VS Code.
2. Use a screen reader to navigate and notice the lack of heading structure.
**User Impact**: Screen reader users will have difficulty efficiently navigating and gaining an accurate overview of the page due to missing heading markup.
**Expected Behavior**: Text that visually functions as a heading should use appropriate heading elements (`<h1>, <h2>, etc.`), with levels that reflect the visual hierarchy. For example:
"Extensions" should be `<h2>`,
"HTML" should be `<h3>`,
"HTML: Auto Closing Tags" should be `<h4>`.
**Actual Behavior**: Text visually appears as a heading but lacks the proper HTML heading tags, reducing accessibility.
**Recommendation**: Ensure that text functioning as headings uses appropriate heading tags (`<h1>, <h2>, etc.`) to improve navigation for screen reader users.
<img width="807" alt="headings" src="https://github.com/user-attachments/assets/5a319f16-072e-4a9a-ad97-360d3f0dcf2f">
| bug,accessibility,settings-editor | low | Critical |
2,504,334,664 | PowerToys | Mouse screen wraparound (not mouse without borders or mouse jump) | ### Description of the new feature / enhancement
This would be used on a single PC with multiple monitors: if you have two or three monitors, then when the cursor reaches the right edge of your far-right monitor, it wraps around to the leftmost one. Basically Pacman.
I know there's Mouse Jump, but it's not nearly the same.
### Scenario when this would be used?
For anyone with multiple monitors who makes many mouse miles daily.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,504,400,919 | vscode | [Accessibility] Remove unnecessary role="tree", role="treeitem" and related aria-aatributes of tree structure in Settings. | - VS Code Version: 1.88.0
- OS Version: MacOS Sonoma (v14.6.1)
**Issue**: The main section of extension settings contributed in VS Code OSS through `package.json` has unnecessary `role="tree"`, `role="treeitem"`, and other `aria-` attributes of a tree structure. Please remove these tree-structure roles and related attributes from VS Code's main extension settings section so that assistive technologies can parse the content properly.
Screenshot of issue:

| bug,accessibility,settings-editor,confirmation-pending | low | Minor |
2,504,414,669 | ui | [feat]: Custom Implementation of Nested Data Table Using Shadcn Components | ### Feature description
Hello Shadcn team,
I wanted to share that I’ve built a custom implementation of a nested data table using Shadcn components. Currently, there isn’t an official or widely available nested data table built with Shadcn, so I took the initiative to develop one.
The nested data table supports:
- Expandable and collapsible rows to display hierarchical data.
- Customizable columns to fit various data structures.
- Responsive design that adapts to different screen sizes.
- Lightweight performance, making it suitable for handling large datasets.
I believe this implementation could be beneficial for the Shadcn community. If you’re interested, you can check out the repository here: https://github.com/Shubham996633/nested_table. I’m open to feedback or suggestions on how this could be integrated or improved.
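The core of the expandable-row behaviour can be sketched framework-agnostically (types and names here are illustrative, not the repo's actual code):

```typescript
type Row = { id: string; children?: Row[] };

// Flatten a tree of rows into the list that should be rendered,
// descending into children only when the parent row is expanded.
function visibleRows(
  rows: Row[],
  expanded: Set<string>,
  depth = 0
): Array<{ id: string; depth: number }> {
  const out: Array<{ id: string; depth: number }> = [];
  for (const row of rows) {
    out.push({ id: row.id, depth });
    if (row.children && expanded.has(row.id)) {
      out.push(...visibleRows(row.children, expanded, depth + 1));
    }
  }
  return out;
}
```

The `depth` value can drive indentation in the rendered table cells, and toggling a row just adds or removes its id from the `expanded` set.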
Thank you for the incredible work on Shadcn, and I hope this contribution adds value to the project!
Best regards,
Shubham Maurya
### Affected component/components
Data Table
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,504,424,137 | pytorch | Add MSCCL++ as a communication backend for PyTorch | ### 🚀 The feature, motivation and pitch
MSCCL++ redefines inter-GPU communication interfaces, offering a highly efficient and customizable communication stack tailored for distributed GPU applications in state-of-the-art AI systems. By integrating MSCCL++ as a PyTorch backend, users can leverage its lightweight, multi-layer abstractions to optimize performance in AI models requiring advanced inter-GPU communication.
Key features of MSCCL++ include:
- **Light-weight and multi-layer abstractions**: efficient data movement and collective operations implemented in GPU kernels.
- **1-sided 0-copy synchronous and asynchronous communication**: direct data transfer without intermediate buffers, reducing GPU bandwidth and memory usage.
- **Unified abstractions for different interconnection hardware**: consistent and simplified communication code, handling both local and remote GPU communication.
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,enhancement,module: nccl | low | Major |
2,504,424,718 | kubernetes | csidriver register failed and kubelet will not retry | ### What happened?
https://github.com/kubernetes/kubernetes/blob/95956671d8da7783a726133709b8085f56dda052/pkg/kubelet/pluginmanager/operationexecutor/operation_generator.go#L124-L126
When the kubelet registers a CSI plug-in, if the registration fails for some reason, the kubelet notifies the plug-in of the registration failure but does not retry. Is this reasonable?
Should the retry operation be performed by the CSI plug-in, or by the kubelet?
### What did you expect to happen?
The csi plug-in registration mechanism must have a retry mechanism to ensure reliable operation.
### How can we reproduce it (as minimally and precisely as possible)?
Create some network errors when the csi plug-in is registered.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.28
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,lifecycle/rotten,triage/needs-information,needs-triage | low | Critical |
2,504,426,477 | opencv | Linking CXX executable ../../bin/opencv_test_videostab, error: 'GInferOutputs' in namespace 'cv' does not name a type | ### System Information
OpenCV version: 4.8.0
Operating System / Platform: Ubuntu 22.04.3 LTS
Compiler & compiler version: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
### Detailed description
[100%] Linking CXX executable ../../bin/opencv_test_videostab
[100%] Built target opencv_test_videostab
In file included from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/build/modules/python_bindings_generator/pyopencv_custom_headers.h:21,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:88:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/python_bridge.hpp:85:27: error: 'GInferOutputs' in namespace 'cv' does not name a type
85 | GAPI_EXPORTS_W inline cv::GInferOutputs infer(const String& name, const cv::GInferInputs& inputs)
| ^~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/python_bridge.hpp:90:23: error: 'GInferOutputs' does not name a type
90 | GAPI_EXPORTS_W inline GInferOutputs infer(const std::string& name,
| ^~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/python_bridge.hpp:97:23: error: 'GInferListOutputs' does not name a type
97 | GAPI_EXPORTS_W inline GInferListOutputs infer(const std::string& name,
| ^~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/python_bridge.hpp:104:23: error: 'GInferListOutputs' does not name a type
104 | GAPI_EXPORTS_W inline GInferListOutputs infer2(const std::string& name,
| ^~~~~~~~~~~~~~~~~
In file included from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/build/modules/python_bindings_generator/pyopencv_custom_headers.h:22,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:88:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:15:49: error: 'GNetPackage' in namespace 'cv::gapi' does not name a type; did you mean 'GKernelPackage'?
15 | using gapi_GNetPackage = cv::gapi::GNetPackage;
| ^~~~~~~~~~~
| GKernelPackage
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:16:49: error: 'ie' in namespace 'cv::gapi' does not name a type
16 | using gapi_ie_PyParams = cv::gapi::ie::PyParams;
| ^~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:17:49: error: 'onnx' in namespace 'cv::gapi' does not name a type
17 | using gapi_onnx_PyParams = cv::gapi::onnx::PyParams;
| ^~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:18:49: error: 'ov' in namespace 'cv::gapi' does not name a type
18 | using gapi_ov_PyParams = cv::gapi::ov::PyParams;
| ^~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:22:61: error: 'GNetParam' is not a member of 'cv::gapi'
22 | using vector_GNetParam = std::vector<cv::gapi::GNetParam>;
| ^~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:22:61: error: 'GNetParam' is not a member of 'cv::gapi'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:22:70: error: template argument 1 is invalid
22 | using vector_GNetParam = std::vector<cv::gapi::GNetParam>;
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:22:70: error: template argument 2 is invalid
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:25:54: error: 'GStreamerSource' in namespace 'cv::gapi::wip' does not name a type; did you mean 'IStreamSource'?
25 | using GStreamerSource_OutputType = cv::gapi::wip::GStreamerSource::OutputType;
| ^~~~~~~~~~~~~~~
| IStreamSource
In file included from /usr/include/python3.10/Python.h:74,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.hpp:20,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:5:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp: In function 'bool pyopencv_to(PyObject*, T&, const ArgInfo&) [with T = cv::util::variant<cv::gapi::wip::draw::Text, cv::gapi::wip::draw::FText, cv::gapi::wip::draw::Rect, cv::gapi::wip::draw::Circle, cv::gapi::wip::draw::Line, cv::gapi::wip::draw::Mosaic, cv::gapi::wip::draw::Image, cv::gapi::wip::draw::Poly>; PyObject = _object]':
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:177:65: error: 'pyopencv_gapi_wip_draw_Rect_TypePtr' was not declared in this scope; did you mean 'pyopencv_rapid_Tracker_TypePtr'?
177 | if (PyObject_TypeCheck(obj, reinterpret_cast<PyTypeObject*>(pyopencv_gapi_wip_draw_##Prim##_TypePtr))) \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:183:5: note: in expansion of macro 'TRY_EXTRACT'
183 | TRY_EXTRACT(Rect)
| ^~~~~~~~~~~
In file included from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/build/modules/python_bindings_generator/pyopencv_custom_headers.h:22,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:88:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: error: 'pyopencv_gapi_wip_draw_Rect_t' does not name a type
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: note: in definition of macro 'TRY_EXTRACT'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '>' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:183:5: note: in expansion of macro 'TRY_EXTRACT'
183 | TRY_EXTRACT(Rect)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '(' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:183:5: note: in expansion of macro 'TRY_EXTRACT'
183 | TRY_EXTRACT(Rect)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:68: error: expected primary-expression before '>' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:183:5: note: in expansion of macro 'TRY_EXTRACT'
183 | TRY_EXTRACT(Rect)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:76: error: 'PyObject' {aka 'struct _object'} has no member named 'v'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:183:5: note: in expansion of macro 'TRY_EXTRACT'
183 | TRY_EXTRACT(Rect)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:77: error: expected ')' before ';' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:183:5: note: in expansion of macro 'TRY_EXTRACT'
183 | TRY_EXTRACT(Rect)
| ^~~~~~~~~~~
In file included from /usr/include/python3.10/Python.h:74,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.hpp:20,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:5:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:177:65: error: 'pyopencv_gapi_wip_draw_Text_TypePtr' was not declared in this scope; did you mean 'pyopencv_rapid_Tracker_TypePtr'?
177 | if (PyObject_TypeCheck(obj, reinterpret_cast<PyTypeObject*>(pyopencv_gapi_wip_draw_##Prim##_TypePtr))) \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:184:5: note: in expansion of macro 'TRY_EXTRACT'
184 | TRY_EXTRACT(Text)
| ^~~~~~~~~~~
In file included from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/build/modules/python_bindings_generator/pyopencv_custom_headers.h:22,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:88:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: error: 'pyopencv_gapi_wip_draw_Text_t' does not name a type
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: note: in definition of macro 'TRY_EXTRACT'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '>' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:184:5: note: in expansion of macro 'TRY_EXTRACT'
184 | TRY_EXTRACT(Text)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '(' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:184:5: note: in expansion of macro 'TRY_EXTRACT'
184 | TRY_EXTRACT(Text)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:68: error: expected primary-expression before '>' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:184:5: note: in expansion of macro 'TRY_EXTRACT'
184 | TRY_EXTRACT(Text)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:76: error: 'PyObject' {aka 'struct _object'} has no member named 'v'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:184:5: note: in expansion of macro 'TRY_EXTRACT'
184 | TRY_EXTRACT(Text)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:77: error: expected ')' before ';' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:184:5: note: in expansion of macro 'TRY_EXTRACT'
184 | TRY_EXTRACT(Text)
| ^~~~~~~~~~~
In file included from /usr/include/python3.10/Python.h:74,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.hpp:20,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:5:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:177:65: error: 'pyopencv_gapi_wip_draw_Circle_TypePtr' was not declared in this scope; did you mean 'pyopencv_rapid_Tracker_TypePtr'?
177 | if (PyObject_TypeCheck(obj, reinterpret_cast<PyTypeObject*>(pyopencv_gapi_wip_draw_##Prim##_TypePtr))) \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:185:5: note: in expansion of macro 'TRY_EXTRACT'
185 | TRY_EXTRACT(Circle)
| ^~~~~~~~~~~
In file included from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/build/modules/python_bindings_generator/pyopencv_custom_headers.h:22,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:88:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: error: 'pyopencv_gapi_wip_draw_Circle_t' does not name a type
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: note: in definition of macro 'TRY_EXTRACT'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '>' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:185:5: note: in expansion of macro 'TRY_EXTRACT'
185 | TRY_EXTRACT(Circle)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '(' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:185:5: note: in expansion of macro 'TRY_EXTRACT'
185 | TRY_EXTRACT(Circle)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:68: error: expected primary-expression before '>' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:185:5: note: in expansion of macro 'TRY_EXTRACT'
185 | TRY_EXTRACT(Circle)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:76: error: 'PyObject' {aka 'struct _object'} has no member named 'v'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:185:5: note: in expansion of macro 'TRY_EXTRACT'
185 | TRY_EXTRACT(Circle)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:77: error: expected ')' before ';' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:185:5: note: in expansion of macro 'TRY_EXTRACT'
185 | TRY_EXTRACT(Circle)
| ^~~~~~~~~~~
In file included from /usr/include/python3.10/Python.h:74,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.hpp:20,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:5:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:177:65: error: 'pyopencv_gapi_wip_draw_Line_TypePtr' was not declared in this scope; did you mean 'pyopencv_rapid_Rapid_TypePtr'?
177 | if (PyObject_TypeCheck(obj, reinterpret_cast<PyTypeObject*>(pyopencv_gapi_wip_draw_##Prim##_TypePtr))) \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:186:5: note: in expansion of macro 'TRY_EXTRACT'
186 | TRY_EXTRACT(Line)
| ^~~~~~~~~~~
In file included from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/build/modules/python_bindings_generator/pyopencv_custom_headers.h:22,
from /workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2.cpp:88:
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: error: 'pyopencv_gapi_wip_draw_Line_t' does not name a type
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:34: note: in definition of macro 'TRY_EXTRACT'
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '>' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:186:5: note: in expansion of macro 'TRY_EXTRACT'
186 | TRY_EXTRACT(Line)
| ^~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:179:67: error: expected '(' before '*' token
179 | value = reinterpret_cast<pyopencv_gapi_wip_draw_##Prim##_t*>(obj)->v; \
| ^
...........
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:60:75: error: 'from' is not a member of 'PyOpenCV_Converter<cv::GArrayDesc, void>'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp: In instantiation of 'PyObject* pyopencv_from(const T&) [with T = cv::GOpaqueDesc; PyObject = _object]':
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:781:38: required from here
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:60:75: error: 'from' is not a member of 'PyOpenCV_Converter<cv::GOpaqueDesc, void>'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp: In instantiation of 'PyObject* pyopencv_from(const T&) [with T = cv::GKernelPackage; PyObject = _object]':
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:964:25: required from here
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:60:75: error: 'from' is not a member of 'PyOpenCV_Converter<cv::GKernelPackage, void>'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp: In instantiation of 'PyObject* pyopencv_from(const T&) [with T = cv::gapi::wip::GOutputs; PyObject = _object]':
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:1025:25: required from here
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:60:75: error: 'from' is not a member of 'PyOpenCV_Converter<cv::gapi::wip::GOutputs, void>'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp: In instantiation of 'bool pyopencv_to(PyObject*, T&, const ArgInfo&) [with T = cv::GCompileArg; PyObject = _object]':
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:322:25: required from 'bool pyopencv_to_generic_vec(PyObject*, std::vector<_Tp>&, const ArgInfo&) [with Tp = cv::GCompileArg; PyObject = _object]'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:286:35: required from here
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:57:94: error: 'to' is not a member of 'PyOpenCV_Converter<cv::GCompileArg, void>'
57 | bool pyopencv_to(PyObject* obj, T& p, const ArgInfo& info) { return PyOpenCV_Converter<T>::to(obj, p, info); }
| ~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp: In instantiation of 'PyObject* pyopencv_from(const T&) [with T = cv::GCompileArg; PyObject = _object]':
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:365:39: required from 'PyObject* pyopencv_from_generic_vec(const std::vector<_Tp>&) [with Tp = cv::GCompileArg; PyObject = _object]'
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/gapi/misc/python/pyopencv_gapi.hpp:292:37: required from here
/workspace/tensorrt-cpp-api-main/scripts/opencv-4.8.0/modules/python/src2/cv2_convert.hpp:60:75: error: 'from' is not a member of 'PyOpenCV_Converter<cv::GCompileArg, void>'
60 | PyObject* pyopencv_from(const T& src) { return PyOpenCV_Converter<T>::from(src); }
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
make[2]: *** [modules/python3/CMakeFiles/opencv_python3.dir/build.make:76: modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:10072: modules/python3/CMakeFiles/opencv_python3.dir/all] Error 2
make: *** [Makefile:166: all] Error 2
### Steps to reproduce
```bash
VERSION=4.8.0
test -e ${VERSION}.zip || wget https://github.com/opencv/opencv/archive/refs/tags/${VERSION}.zip
test -e opencv-${VERSION} || unzip ${VERSION}.zip
test -e opencv_contrib_${VERSION}.zip || wget -O opencv_contrib_${VERSION}.zip https://github.com/opencv/opencv_contrib/archive/refs/tags/${VERSION}.zip
test -e opencv_contrib-${VERSION} || unzip opencv_contrib_${VERSION}.zip
cd opencv-${VERSION}
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_TBB=ON \
      -D ENABLE_FAST_MATH=1 \
      -D CUDA_FAST_MATH=1 \
      -D WITH_CUBLAS=1 \
      -D WITH_CUDA=ON \
      -D BUILD_opencv_cudacodec=ON \
      -D WITH_CUDNN=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D WITH_QT=OFF \
      -D WITH_OPENGL=ON \
      -D BUILD_opencv_apps=OFF \
      -D BUILD_opencv_python2=OFF \
      -D OPENCV_GENERATE_PKGCONFIG=ON \
      -D OPENCV_PC_FILE_NAME=opencv.pc \
      -D OPENCV_ENABLE_NONFREE=ON \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-${VERSION}/modules \
      -D INSTALL_PYTHON_EXAMPLES=OFF \
      -D INSTALL_C_EXAMPLES=OFF \
      -D BUILD_EXAMPLES=OFF \
      -D WITH_FFMPEG=ON \
      -D CUDNN_INCLUDE_DIR=/usr/include \
      -D CUDNN_LIBRARY=/usr/lib/x86_64-linux-gnu/libcudnn.so \
      ..
make -j 8
```
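A commonly reported workaround for this class of G-API Python-binding compile failure (an untested suggestion, not verified on this machine) is to exclude the `gapi` module by adding one flag to the cmake invocation above:

```bash
# Untested workaround: disable the gapi module when configuring
-D BUILD_opencv_gapi=OFF \
```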
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc)

Labels: bug | Priority: low | Severity: Critical
2,504,427,710 | ollama | Support for HuatuoGPT-Vision-7B

Can you support the HuatuoGPT-Vision-7B model, or do you have any advice on how I can deploy it on GPU?
Model: [FreedomIntelligence/HuatuoGPT-Vision-7B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-Vision-7B)
This model is built with Llava and Qwen2, and their CLI code is here: https://github.com/FreedomIntelligence/HuatuoGPT-Vision/blob/main/cli.py
Labels: model request | Priority: low | Severity: Minor
2,504,440,850 | PowerToys | [PowerToys Run] Increase User-Friendliness of PTRun Unit Converter Plugin

### Description of the new feature / enhancement
The Unit Converter is a useful plugin, however, it lacks a help message when the keyword is activated.

Thanks to issue #33490, the Value Generator now offers excellent usage suggestions; I hope the Unit Converter can receive similar improvements.
I think it would be beneficial to list the usage of all supported transformations.
A possible form of the list:
- usage: {keyword} \<value> \<from> in \<to>
- Length: cm, m, km, ft...
- Duration: second, month, day, year...
- ...
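To make the proposed listing concrete, here is a rough, purely illustrative sketch (not PowerToys code; the table and function names are made up) of how such a help message could be generated from the supported unit categories:

```python
# Illustrative only: a made-up table of unit categories and a helper that
# renders a help message shaped like the list proposed above.
SUPPORTED_UNITS = {
    "Length": ["cm", "m", "km", "ft"],
    "Duration": ["second", "day", "month", "year"],
}

def render_help(keyword: str) -> str:
    """Build a usage hint listing every supported conversion category."""
    lines = [f"usage: {keyword} <value> <from> in <to>"]
    for category, units in SUPPORTED_UNITS.items():
        lines.append(f"  {category}: {', '.join(units)}, ...")
    return "\n".join(lines)

print(render_help("%%"))
```

Showing this whenever the activation keyword is typed with no arguments would remove the need to look up the documentation page.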
### Scenario when this would be used?
This will be useful for quickly understanding all the features of this plugin without having to search for the documentation page.
### Supporting information
_No response_

Labels: Idea-Enhancement, Needs-Triage, Run-Plugin | Priority: low | Severity: Minor
2,504,443,402 | PowerToys | Dvorak - QWERTY ⌘ for windows

### Description of the new feature / enhancement
On macOS, the "Dvorak-QWERTY ⌘" keyboard layout offers a convenient feature that temporarily switches to QWERTY when the Command (⌘) key is held. This allows users to seamlessly use the same shortcuts as they would with QWERTY without changing their primary layout.
### Scenario when this would be used?
As a Dvorak user, I often find it annoying to use shortcuts. While there are some existing solutions for Windows, they do not work universally or seamlessly. A Dvorak-QWERTY toggle feature would greatly enhance my productivity and workflow by providing a convenient way to access QWERTY shortcuts when needed.
### Supporting information
_No response_
Labels: Needs-Triage | Priority: low | Severity: Minor
2,504,453,548 | ui | [bug]: Validation failed: - resolvedPaths: Required,Required,Required,Required,Required

### Describe the bug
When I run `init`, I get:

```
✔ Preflight checks.
✔ Verifying framework. Found Vite.
✔ Validating Tailwind CSS.
✔ Validating import alias.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Validation failed:
- resolvedPaths: Required,Required,Required,Required,Required
```

### Affected component/components
init
### How to reproduce
1. yarn 4.4
2. vite
3. js
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```json
{
"packageManager": "yarn@4.4.1",
"name": "boilerplate_fe",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"lint": "eslint .",
"preview": "vite preview"
},
"dependencies": {
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"devDependencies": {
"@eslint/js": "^9.9.0",
"@types/node": "^22.5.3",
"@types/react": "^18.3.3",
"@types/react-dom": "^18.3.0",
"@vitejs/plugin-react-swc": "^3.5.0",
"autoprefixer": "^10.4.20",
"eslint": "^9.9.0",
"eslint-plugin-react": "^7.35.0",
"eslint-plugin-react-hooks": "^5.1.0-rc.0",
"eslint-plugin-react-refresh": "^0.4.9",
"globals": "^15.9.0",
"postcss": "^8.4.44",
"tailwindcss": "^3.4.10",
"vite": "^5.4.1"
}
}
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues

Labels: bug | Priority: low | Severity: Critical
2,504,464,512 | PowerToys | Workspaces - FTH not detected

### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Open Firefox in left half of screen, Family Tree Heritage in right half, capture.
[PowerToysReport_2024-09-04-07-50-17.zip](https://github.com/user-attachments/files/16862006/PowerToysReport_2024-09-04-07-50-17.zip)
### ✔️ Expected Behavior
Both should be captured
### ❌ Actual Behavior
Only Firefox (and PowerToys, minimised) were captured
### Other Software
- Firefox 129.0.2
- Family Tree Heritage Gold, Version 16 (Build 16.0.12), powered by Ancestral Quest (Copyright (c) 1994-2023 Incline Software, LC; published by Individual Software, Inc.)

Labels: Issue-Bug, Needs-Triage, Needs-Team-Response, Product-Workspaces | Priority: low | Severity: Minor
2,504,472,379 | PowerToys | [Workspaces] Take into account virtual desktops

### Description of the new feature / enhancement
Only the active virtual desktop is included in workspaces. Please consider taking into account other virtual desktops as well.
### Scenario when this would be used?
I usually have one virtual desktop for (deep) work and another for personal stuff like music, mail, etc. This enables a clear separation (e.g. work vs. other things). It would be great if PowerToys Workspaces could support virtual desktops as well.
### Supporting information
N/A

Labels: Needs-Spec, Needs-Triage, Tracker, Product-Workspaces | Priority: medium | Severity: Critical
2,504,497,578 | vscode | SCM - multi-line commit message not shown in the graph hover

Steps to Reproduce:
1. Create a commit with a multi-line commit message
2. Open the "Source Control Graph" view and hover over the commit
3. Only the first line of the commit message is shown
Labels: bug, verification-found, scm | Priority: low | Severity: Minor
2,504,539,498 | PowerToys | Paste with Hyperlink

### Description of the new feature / enhancement
In MS Office products like Powerpoint, Word etc there is a super useful option to include a hyperlink to objects like text and pics.
It would be great if PowerToys could enhance this functionality by including a Paste with Hyperlink option.
### Scenario when this would be used?
Copy and paste is used a lot to extract info (text, pics) from other documents. Even for uncomplicated tasks, there is often a need to compile information before generating more, and the need to return to the original document to find further info arises almost immediately.
### Supporting information
_No response_

Labels: Needs-Triage | Priority: low | Severity: Minor
2,504,559,985 | PowerToys | FancyZones window type and identity recognition

### Description of the new feature / enhancement
FancyZones seems unable to differentiate between main windows and pop-ups
### Scenario when this would be used?
I love using FancyZones; however, it seems the Child Snap feature is unable to recognise the difference between pop-ups and main windows. For example, when using MS Excel, I'll snap the main window to zone 1, pop-up window 1 to zone 2, and pop-up window 2 to zone 3. However, after closing Excel and opening it again, the main window will launch in zone 3 in a tiny window, and all subsequent pop-ups will also open in zone 3, as if it had forgotten where those pop-ups were previously.
### Supporting information
_No response_

Labels: Needs-Triage | Priority: low | Severity: Minor
2,504,591,418 | PowerToys | Most modules disabled after updating

### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
Unknown. I ran the update within the application and waited for it to finish. I tried restarting, but that didn't change the modules' enabled state.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
Most modules are disabled, although almost all were enabled before the update

### Other Software
_No response_

Labels: Issue-Bug, Needs-Triage | Priority: low | Severity: Major
2,504,594,527 | rust | taint on type error in `UniversalRegionsBuilder::compute_inputs_and_output`

As suggested by @oli-obk here https://github.com/rust-lang/rust/pull/129472#discussion_r1741246234, just to keep track that there is perhaps a better way to solve this than an ad-hoc check for errors.

Labels: C-enhancement, A-diagnostics, T-types | Priority: low | Severity: Critical
2,504,625,061 | PowerToys | Workspaces: Select a current working directory for starting applications

### Description of the new feature / enhancement
In the settings for each application configuration in the workspace, we can specify the CLI arguments.
It would be also useful to be able to configure the current working directory as multiple applications depend on the current directory selection.
### Scenario when this would be used?
Whenever we need to select a specific working directory for an application.
Just as the CLI parameter, add the ability to change the current working directory in the workspace settings.
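For reference, this is trivial to expose at the process-launch level; here is a minimal illustrative sketch (plain Python, not PowerToys code) of starting a process with a chosen working directory:

```python
import subprocess
import sys
import tempfile

def launch(cmd, workdir):
    """Start `cmd` with `workdir` as its current working directory."""
    return subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)

# Example: ask a child Python process to print its working directory.
result = launch([sys.executable, "-c", "import os; print(os.path.realpath(os.getcwd()))"],
                tempfile.gettempdir())
print(result.stdout.strip())
```

A workspace entry would only need one extra string field, analogous to the existing CLI-arguments field.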
### Supporting information
_No response_

Labels: Needs-Triage, Product-Workspaces | Priority: low | Severity: Minor
2,504,665,747 | stable-diffusion-webui | [Bug]: Running out of VRAM AMD 7900 xt

### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Trying to run txt2img on a 7900 XT at a resolution of 540x960 with a 2x hires fix, I keep getting "RuntimeError: Could not allocate tensor with 18144080 bytes. There is not enough GPU video memory available!"

Below are my current command-line args:
COMMANDLINE_ARGS= --use-directml --port 80 --listen --enable-insecure-extension-access --no-half-vae
Any ideas on how to get this running smoothly?
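A commonly suggested mitigation for this class of out-of-memory error (untested here on DirectML) is the webui's stock `--medvram` flag, which in `webui-user.bat` would look like:

```bat
rem Untested suggestion: append the built-in memory-saving flag to the existing args
set COMMANDLINE_ARGS=--use-directml --port 80 --listen --enable-insecure-extension-access --no-half-vae --medvram
```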
### Steps to reproduce the problem
Run any image generation at a high-ish resolution.
### What should have happened?
Generate the image without using more than the total VRAM
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
{
"Platform": "Windows-10-10.0.22631-SP0",
"Python": "3.10.6",
"Version": "v1.10.1-amd-5-gd8b7380b",
"Commit": "d8b7380b18d044d2ee38695c58bae3a786689cf3",
"Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n\tmodified: webui-user.bat\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\tvenv.old/\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")",
"Script path": "E:\\SD\\stable-diffusion-webui-directml",
"Data path": "E:\\SD\\stable-diffusion-webui-directml",
"Extensions dir": "E:\\SD\\stable-diffusion-webui-directml\\extensions",
"Checksum": "73533d0a0366e6ef83e2deeef5c879a5771e36bd91c85e0abe94fe10ca333a99",
"Commandline": [
"launch.py",
"--use-directml",
"--port",
"80",
"--listen",
"--enable-insecure-extension-access",
"--no-half-vae"
],
"Torch env info": {
"torch_version": "2.3.1+cpu",
"is_debug_build": "False",
"cuda_compiled_version": null,
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 11 Pro",
"libc_version": "N/A",
"python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.22631-SP0",
"is_cuda_available": "False",
"cuda_runtime_version": null,
"cuda_module_loading": "N/A",
"nvidia_driver_version": null,
"nvidia_gpu_models": null,
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"onnx==1.16.2",
"onnxruntime==1.19.0",
"onnxruntime-directml==1.19.0",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.3.1",
"torch-directml==0.2.4.dev240815",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.1",
"torchsde==0.2.6",
"torchvision==0.18.1"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3394",
"DeviceID=CPU0",
"Family=107",
"L2CacheSize=4096",
"L2CacheSpeed=",
"Manufacturer=AuthenticAMD",
"MaxClockSpeed=3394",
"Name=AMD Ryzen 7 5800X3D 8-Core Processor ",
"ProcessorType=3",
"Revision=8450"
]
},
"Exceptions": [
{
"exception": "Could not allocate tensor with 18144080 bytes. There is not enough GPU video memory available!",
"traceback": [
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\call_queue.py, line 74, f",
"res = list(func(*args, **kwargs))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\call_queue.py, line 53, f",
"res = func(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\call_queue.py, line 37, f",
"res = func(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\processing.py, line 849, process_images",
"res = process_images_inner(p)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\processing.py, line 1083, process_images_inner",
"samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\processing.py, line 1457, sample",
"return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\processing.py, line 1549, sample_hr_pass",
"samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_samplers_kdiffusion.py, line 187, sample_img2img",
"samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_samplers_common.py, line 272, launch_sampling",
"return func()"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_samplers_kdiffusion.py, line 187, <lambda>",
"samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py, line 115, decorate_context",
"return func(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\k-diffusion\\k_diffusion\\sampling.py, line 594, sample_dpmpp_2m",
"denoised = model(x, sigmas[i] * s_in, **extra_args)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_samplers_cfg_denoiser.py, line 268, forward",
"x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\k-diffusion\\k_diffusion\\external.py, line 112, forward",
"eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\k-diffusion\\k_diffusion\\external.py, line 138, get_eps",
"return self.inner_model.apply_model(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_utils.py, line 34, __call__",
"return self.__sub_func(self.__orig_func, *args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_unet.py, line 50, apply_model",
"result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_utils.py, line 36, __call__",
"return self.__orig_func(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py, line 858, apply_model",
"x_recon = self.model(x_noisy, t, **cond)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py, line 1335, forward",
"out = self.diffusion_model(x, t, context=cc)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_unet.py, line 91, UNetModel_forward",
"return original_forward(self, x, timesteps, context, *args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\diffusionmodules\\openaimodel.py, line 802, forward",
"h = module(h, emb, context)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\diffusionmodules\\openaimodel.py, line 84, forward",
"x = layer(x, context)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_utils.py, line 34, __call__",
"return self.__sub_func(self.__orig_func, *args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_unet.py, line 96, spatial_transformer_forward",
"x = block(x, context=context[i])"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\attention.py, line 269, forward",
"return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\diffusionmodules\\util.py, line 123, checkpoint",
"return func(*inputs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\attention.py, line 272, _forward",
"x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_optimizations.py, line 393, split_cross_attention_forward_invokeAI",
"r = einsum_op(q, k, v)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_optimizations.py, line 367, einsum_op",
"return einsum_op_dml(q, k, v)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_optimizations.py, line 354, einsum_op_dml",
"return einsum_op_tensor_mem(q, k, v, (mem_reserved - mem_active) if mem_reserved > mem_active else 1)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_optimizations.py, line 336, einsum_op_tensor_mem",
"return einsum_op_slice_1(q, k, v, max(q.shape[1] // div, 1))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_hijack_optimizations.py, line 308, einsum_op_slice_1",
"r[:, i:end] = einsum_op_compvis(q[:, i:end], k, v)"
]
]
},
{
"exception": "None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`",
"traceback": [
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_models.py, line 831, load_model",
"sd_model = instantiate_from_config(sd_config.model, state_dict)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_models.py, line 775, instantiate_from_config",
"return constructor(**params)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py, line 563, __init__",
"self.instantiate_cond_stage(cond_stage_config)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py, line 630, instantiate_cond_stage",
"model = instantiate_from_config(config)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\util.py, line 89, instantiate_from_config",
"return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\encoders\\modules.py, line 104, __init__",
"self.transformer = CLIPTextModel.from_pretrained(version)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\modules\\sd_disable_initialization.py, line 68, CLIPTextModel_from_pretrained",
"res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)"
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\transformers\\modeling_utils.py, line 3213, from_pretrained",
"resolved_config_file = cached_file("
],
[
"E:\\SD\\stable-diffusion-webui-directml\\venv\\lib\\site-packages\\transformers\\utils\\hub.py, line 425, cached_file",
"raise EnvironmentError("
]
]
}
],
"CPU": {
"model": "AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD",
"count logical": 16,
"count physical": 8
},
"RAM": {
"total": "16GB",
"used": "11GB",
"free": "5GB"
},
"Extensions": [
{
"name": "multidiffusion-upscaler-for-automatic1111",
"path": "E:\\SD\\stable-diffusion-webui-directml\\extensions\\multidiffusion-upscaler-for-automatic1111",
"commit": "22798f6822bc9c8a905b51da8954ee313b973331",
"branch": "main",
"remote": "https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git"
}
],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": " --use-directml --port 80 --listen --enable-insecure-extension-access --no-half-vae",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"sd_model_checkpoint": "chilloutmix_NiPrunedFp32Fix.safetensors [fc2511737a]",
"sd_checkpoint_hash": "fc2511737a54c5e80b89ab03e0ab4b98d051ab187f92860f3cd664dc9d08b271"
},
"Startup": {
"total": 46.024452209472656,
"records": {
"initial startup": 0.12500858306884766,
"prepare environment/checks": 0.0,
"prepare environment/git version info": 0.6563024520874023,
"prepare environment/clone repositores": 0.28126049041748047,
"prepare environment/run extensions installers/multidiffusion-upscaler-for-automatic1111": 0.015625715255737305,
"prepare environment/run extensions installers": 0.015625715255737305,
"prepare environment": 75.71254301071167,
"launcher": 0.0020012855529785156,
"import torch": 0.0,
"import gradio": 0.0,
"setup paths": 0.0010001659393310547,
"import ldm": 0.0030002593994140625,
"import sgm": 0.0,
"initialize shared": 2.3184762001037598,
"other imports": 0.03450608253479004,
"opts onchange": 0.0,
"setup SD model": 0.0004999637603759766,
"setup codeformer": 0.0010004043579101562,
"setup gfpgan": 0.01700282096862793,
"set samplers": 0.0,
"list extensions": 0.0015003681182861328,
"restore config state file": 0.0,
"list SD models": 0.040509700775146484,
"list localizations": 0.0005002021789550781,
"load scripts/custom_code.py": 0.0055010318756103516,
"load scripts/img2imgalt.py": 0.0010004043579101562,
"load scripts/loopback.py": 0.0004999637603759766,
"load scripts/outpainting_mk_2.py": 0.0004999637603759766,
"load scripts/poor_mans_outpainting.py": 0.0005002021789550781,
"load scripts/postprocessing_codeformer.py": 0.0004999637603759766,
"load scripts/postprocessing_gfpgan.py": 0.0005002021789550781,
"load scripts/postprocessing_upscale.py": 0.0004999637603759766,
"load scripts/prompt_matrix.py": 0.0010001659393310547,
"load scripts/prompts_from_file.py": 0.0005002021789550781,
"load scripts/sd_upscale.py": 0.0004999637603759766,
"load scripts/xyz_grid.py": 0.0020003318786621094,
"load scripts/ldsr_model.py": 0.3000602722167969,
"load scripts/lora_script.py": 0.11002206802368164,
"load scripts/scunet_model.py": 0.02150416374206543,
"load scripts/swinir_model.py": 0.020003318786621094,
"load scripts/hotkey_config.py": 0.0,
"load scripts/extra_options_section.py": 0.0010001659393310547,
"load scripts/hypertile_script.py": 0.035008907318115234,
"load scripts/postprocessing_autosized_crop.py": 0.0010001659393310547,
"load scripts/postprocessing_caption.py": 0.0004999637603759766,
"load scripts/postprocessing_create_flipped_copies.py": 0.0005002021789550781,
"load scripts/postprocessing_focal_crop.py": 0.0020003318786621094,
"load scripts/postprocessing_split_oversized.py": 0.0005002021789550781,
"load scripts/soft_inpainting.py": 0.0010001659393310547,
"load scripts/tilediffusion.py": 0.044507503509521484,
"load scripts/tileglobal.py": 0.016003131866455078,
"load scripts/tilevae.py": 0.014503002166748047,
"load scripts/comments.py": 0.020005464553833008,
"load scripts/refiner.py": 0.0010006427764892578,
"load scripts/sampler.py": 0.0004999637603759766,
"load scripts/seed.py": 0.0004999637603759766,
"load scripts": 0.6036219596862793,
"load upscalers": 0.003500699996948242,
"refresh VAE": 0.0009999275207519531,
"refresh textual inversion templates": 0.0005002021789550781,
"scripts list_optimizers": 0.0010001659393310547,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0005002021789550781,
"initialize extra networks": 0.054009437561035156,
"scripts before_ui_callback": 0.003500699996948242,
"create ui": 0.4175848960876465,
"gradio launch": 4.347925186157227,
"add APIs": 0.008502006530761719,
"app_started_callback/lora_script.py": 0.0004999637603759766,
"app_started_callback": 0.0004999637603759766
}
},
"Packages": [
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohappyeyeballs==2.4.0",
"aiohttp==3.10.5",
"aiosignal==1.3.1",
"alembic==1.13.2",
"altair==5.4.1",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==24.2.0",
"blendmodes==2022",
"certifi==2024.8.30",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
"colorama==0.4.6",
"coloredlogs==15.0.1",
"colorlog==6.8.2",
"contourpy==1.3.0",
"cycler==0.12.1",
"datasets==2.21.0",
"deprecation==2.1.0",
"diffusers==0.30.2",
"dill==0.3.8",
"diskcache==5.6.3",
"einops==0.4.1",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.4.0",
"filelock==3.15.4",
"filterpy==1.4.5",
"flatbuffers==24.3.25",
"fonttools==4.53.1",
"frozenlist==1.4.1",
"fsspec==2024.6.1",
"ftfy==6.2.3",
"gitdb==4.0.11",
"GitPython==3.1.32",
"gradio==3.41.2",
"gradio_client==0.5.0",
"greenlet==3.0.3",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.24.6",
"humanfriendly==10.0",
"idna==3.8",
"imageio==2.35.1",
"importlib_metadata==8.4.0",
"importlib_resources==6.4.4",
"inflection==0.5.1",
"intel-openmp==2021.4.0",
"Jinja2==3.1.4",
"jsonmerge==1.8.0",
"jsonschema==4.23.0",
"jsonschema-specifications==2023.12.1",
"kiwisolver==1.4.6",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"lightning-utilities==0.11.7",
"llvmlite==0.43.0",
"Mako==1.3.5",
"MarkupSafe==2.1.5",
"matplotlib==3.9.2",
"mkl==2021.4.0",
"mpmath==1.3.0",
"multidict==6.0.5",
"multiprocess==0.70.16",
"narwhals==1.6.2",
"networkx==3.3",
"numba==0.60.0",
"numpy==1.26.2",
"olive-ai==0.6.2",
"omegaconf==2.2.3",
"onnx==1.16.2",
"onnxruntime==1.19.0",
"onnxruntime-directml==1.19.0",
"open-clip-torch==2.20.0",
"opencv-python==4.10.0.84",
"optimum==1.21.4",
"optuna==4.0.0",
"orjson==3.10.7",
"packaging==24.1",
"pandas==2.2.2",
"piexif==1.1.3",
"Pillow==9.5.0",
"pillow-avif-plugin==1.4.3",
"pip==24.2",
"protobuf==3.20.2",
"psutil==5.9.5",
"pyarrow==17.0.0",
"pydantic==1.10.18",
"pydub==0.25.1",
"pyparsing==3.1.4",
"pyreadline3==3.4.1",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytz==2024.1",
"PyWavelets==1.7.0",
"PyYAML==6.0.2",
"referencing==0.35.1",
"regex==2024.7.24",
"requests==2.32.3",
"resize-right==0.0.2",
"rpds-py==0.20.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.14.1",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"SQLAlchemy==2.0.33",
"starlette==0.26.1",
"sympy==1.13.2",
"tbb==2021.13.1",
"tifffile==2024.8.30",
"timm==1.0.9",
"tokenizers==0.19.1",
"tomesd==0.1.3",
"torch==2.3.1",
"torch-directml==0.2.4.dev240815",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.1",
"torchsde==0.2.6",
"torchvision==0.18.1",
"tqdm==4.66.5",
"trampoline==0.1.2",
"transformers==4.43.4",
"typing_extensions==4.12.2",
"tzdata==2024.1",
"urllib3==2.2.2",
"uvicorn==0.30.6",
"wcwidth==0.2.13",
"websockets==11.0.3",
"xxhash==3.5.0",
"yarl==1.9.8",
"zipp==3.20.1"
]
}
### Console logs
```Shell
File "E:\SD\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "E:\SD\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "E:\SD\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "E:\SD\stable-diffusion-webui-directml\modules\processing.py", line 1083, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\SD\stable-diffusion-webui-directml\modules\processing.py", line 1457, in sample
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
File "E:\SD\stable-diffusion-webui-directml\modules\processing.py", line 1549, in sample_hr_pass
samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 187, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\SD\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "E:\SD\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 187, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 268, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 96, in spatial_transformer_forward
x = block(x, context=context[i])
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
return func(*inputs)
File "E:\SD\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 393, in split_cross_attention_forward_invokeAI
r = einsum_op(q, k, v)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 367, in einsum_op
return einsum_op_dml(q, k, v)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 354, in einsum_op_dml
return einsum_op_tensor_mem(q, k, v, (mem_reserved - mem_active) if mem_reserved > mem_active else 1)
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 336, in einsum_op_tensor_mem
return einsum_op_slice_1(q, k, v, max(q.shape[1] // div, 1))
File "E:\SD\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 308, in einsum_op_slice_1
r[:, i:end] = einsum_op_compvis(q[:, i:end], k, v)
RuntimeError: Could not allocate tensor with 18144080 bytes. There is not enough GPU video memory available!
```
### Additional information
_No response_ | bug-report | low | Critical |
2,504,684,459 | svelte | Identifier has already been declared error when naming a type the same as a variable in Svelte 5 script | ### Describe the bug
A type can't have the same name as a variable
after upgrading to [@sveltejs/vite-plugin-svelte@4.0.0-next.6](https://github.com/sveltejs/vite-plugin-svelte/releases/tag/%40sveltejs%2Fvite-plugin-svelte%404.0.0-next.6)
```
src/routes/+page.svelte:4:12 Identifier 'data' has already been declared
src/routes/+page.svelte:4:12
2 | import Counter from '$lib/Counter.svelte';
3 | type data = {}
4 | export let data;
^
5 | </script>
```
### Reproduction URL
```
<script lang="ts">
import Counter from '$lib/Counter.svelte';
type data = {}
export let data;
</script>
```
<h1>Hello {data.name}!</h1>
<Counter count={5} />
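For context, plain TypeScript allows a type alias and a value to share a name, because types and values live in separate declaration spaces. A minimal sketch (not part of the report) illustrating why the "already declared" error appears to come from Svelte's script transform rather than from TypeScript itself:

```typescript
// In standalone TypeScript this compiles and runs without complaint:
// the type alias `data` and the const `data` occupy separate namespaces,
// so no "Identifier 'data' has already been declared" error occurs here.
type data = { name: string };

const data: data = { name: "world" };

console.log(data.name); // "world"
```

Under that assumption, renaming either the type or the variable would be a workaround until the compiler handles the shared name correctly.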
### Reproduction
[link](https://www.sveltelab.dev/?t=next&files=.%2Fsrc%2Froutes%2F%2Bpage.svelte#code=N4IgdA5glgLlEDsD2AnApiAXKAZlANhtiAMZIIxoUDOWoA1lAgCZYjUBua%2Bl%2BAhgCMAtAFcoADj4gANCAEicONCiwBtACwA2aZvHTxAZmkBOAKz6jARkvbrl6dfUOADPcvOH7z2%2Bcf39sxdjbxc-SwB2Fz1-T3MvdUjjaIiXOOdo3xdpBOzbSzjLDJ9k21dstMj3AtLsyssAJhNE4PdEyucrMvjSty9rDNyXXrC9LWzGrwMjMZjrEcHZ%2Bo9jdpaGqKGs-tTPHsHjNeGXCec06dK88xmbHfcWrvzdpqDPRvUrk88M6s2xg5C5sdbh0FjdrFc8qU0vdej8Vi9rG8Pi4ALoAXwxsjAAAd0DA4MoUCQ6CA8IQSWQKFQYLRiIwWGxONxeIJRBIpLJ5IplGoGp0TNIDE5rHU0vZxE54UlYoKnKY9AYJmDLMKUjF1MLlrKZVVPH4Qe4ivpenVenshdJ5YKlXlVe17BqssELdZbc9dYbbn0QZpTQjnMF4RarYrtYC7HrNhbHV5naqlZqCmFtOJ2mlky4nMGFW9gupRqNNQKXetPSKXJ0kzLs9aTPYSwnNq6nh6Hk53q8NuX3JXIx3wV8m%2BkW2ULcZpkW4wDCk2VZ5E6FZ%2BYa6HjG56kZQy7nCc0kaByvGtG3pdB7Nh82YtGs0fVb0bV6jge5TmHBvaw33XOB7M54ew482zniUUY3m%2By4TPU5jjlsUGohiaKyNifAkPQfAQGgYAAFbUOQJJkkQoCUpQNAkvSrCYOwXA8NwrJiJIMhyAoSgqJgqh8k6AHLKsoGWq%2BY56PCP6nCYU4xlqJb7t%2BgEiRGgL-tGozaPmgzqGJk4AY2HozkJYKji%2BtaXtJ9rZBpJbQWsC6Ii2jz-hxsbFpq%2Bq9AqBlbqq3wjvWJzOYO16cVO0rdrJF6ZApHleuajRBWqIUZB4-kOVO1mOLOnm-lOIa3mesl6d5napb%2B6UPOpAVhqFVbNuFBXfg8kLAr0YEdh2-zuJqAT3JEoZjIlWrJTu06aoGdTaNVKW1WaI5ivOtbNQUlmRVGJ7lQKDRXGZTlpToUknE%2BmS-ss9zTc%2BfGGe%2BSXapoqoFPuF55S4p45EJmS6Q%2Bv7ttBdQjSE1XGHKDrKUp2SuaZZUWldOozsBXkPTKOStItv1yh4YwqajIPiQKU4Qz%2BUNdHdezuI9dTpfd-alnd7S9oC5M3UOIFXm52WmMtaOA%2BUkaLBmYymFm7YaclpYTTZUKznzp3uSY-2DKYAOWgl-Ng-GM2-vVraNRzKWUxWXoowUdPQ5e%2BkS9lf2WnLpgo9opiborF3mTlxQw2N3rrhmvVYwBpPmkz2pmzbqlAyp26rXBmOC7Y9T3rxWXlYci7BUaHv9b5Wt7ansZHUu2qx-ZfUAQb%2BMM2ULX3O1okuF1vtjtLPPLbzK0tHBfTN846KYuwRIkswUDoCQMCoAAniSfDYtiYDMGANL4QQhGkOQJHT3STAUVRzK0cI9Eckx3KsRokQ5KGhhDvlbUizV5hWjk8POGmTzXIXTtG3L7gZ6M7RhB1cPfQ00e-sun0aqfGvl1E4SoNahm0vqcBKsAxflFnJSY0VEjJBNE8SBely7CVPrlDM70gRShhDqfUvlWrDl0uXSBBo7oBEEu0V8edVqxh4v-TshClo6AKEqI8kF%2BSBlqAIqhlV7qFQGuwq8jRNALUKtYGakDzqt09iAwyuCYaiOQRXCR0hwh5C-s9ZIEE3xWEUfw5RQjOYiNqhosh%2BVxDcR1jEXQ8DnjyOMWkJRB9QG6zUVY54Njax2I2LpUoMDRHOl4eBTia1YLuNdm%2BMIoTpKjQidE0w9x26IRAKPceAALGAABbfAM9yTEGItSWkDAV6MmoiyTe7JG
JchYmoTQCUjBOPCMEX0%2BgnDiGCIEqRKjVSlHuHoTQnwWmZhHCMlRdDOY%2BnrFgyYTgxmcQmWfGIz0PArIumsuBqsXHRTLl%2Ba6s5tCaAWZ4H%2BA0OymFGk1SI2z867NJvqLxoinbnPKhZacCtBmwJ8JIi5BhRSOy7CI2%2Bi0-gguwTZZcniCEkx2K1WRNM8hKn1LbWaDzPj8N2UQ5sQZ0Xuj2WUT5kkGqeDWIkoZ2pVzx3DJYjMZLVRZ1fiOZZPg35IqIeTVl%2B1MGg3MeGXKot9Ga0eP8MVjiHRTm6tisG0K8ZP3BXfZsMxnDl02YKdoAoJk301R-HQOKdDLFQQnKOKj7EEoPvrUFZYVWQq4f8sFARDX9het9eEzLJmIKkuLR5ntdn6jphgn4v8uykouWy7STrPRCQtbHZFT5GWwsukeRVt1i4%2BFVTcP4Zro0WuBSagR0bcqjONXqlBAJ9oWoDZWn1elpmPIybIfAUABDd17mgfuQ8SQAGEkAiCpCgMATIaLFLnmU0iy8GSUTHbUtkDFOTMR5GxNZErpG5QfFQ2Zwq2lRuVMuZZxqyx1TOpuQ5046ju0goYslR40UqxRULcwaz3x0qvRY62DC%2BixUDH6s6r63go3sGkrYElCYDXcLtTRuaUYgdrJ8lZriyqth4dkQ6n6GXJKNTofKalgbZFPudfOw0I0azVcsbQSbLlfpUbaqhVrvSdAo8%2B6FF5C0nCstQrMRhjDKVA-cT2gZ6Vql-Th%2B9lpIhgYutEhy0T1CCfA6tKDYnMNGwKEcKhkpr0J1CvYLpp7vkI3UAhlDF0aPdnE7Nfds1ggya1HJrZKMQXIqskLM5nwvATJij-ZUWnEnqYqpoiFny846bmfqI0hngtkMqHBANnT6HcsrkOBKETLNqZbCYoxsG0HYcQwZordnEMXsFGsBBJL6zhJNuVqa-pYR1ac8Wp6yQ-O6MjM2hCHcUCDsoBUkAPc%2B4DxQMPYgABqfgg8%2BujpqXPAiFIF7lLIlUudc3%2BB1KXTvJpa6wjGdqrGm4iXgRUsGugo8eZlJUxdSYXdw4-hrAY-ZoT9aBxua9A%2BAN3m9s3aqm8eV7gW0gHG8hDCo7lBcBHUvXAs9FtUmnZU2da8aIbcXdvRpq72JdCWDVVjOHdJxAtYCf4VClS6eRQg596WbtatM0R89UThZGTqKqanjD%2BHE5YWUWOBhHt2oJpsHGcSGjuyE03S%2BQmnNA5B%2BhTC875uw9KUthHIByLVPXqjreDSV173Xft9z26Ti7uY3hgCp5wYnpCqdxBNLQyGbY3R-BpOn0%2BBq7zwUpRPLShyPb-L1n8EP351murDmmGRz-qOHyzjCWm-MzlrmuOn04dPZNRj0fpMvZczKd72lPteec5MgjKzwgwdPWa8zHqnjwmuFaqrkSpgtfaO2Tp%2Bfi2%2B9owVp3l7EG6ebGF98Adolytw4DhCsgID4CQAIPg%2BAwAkGoANhbSv4fQ9V6t5HC6tfLt3moK0Mjeip5SVqOaxCbW3b8Z1TWGrauW0Iw3UPeYri2pM9BWZcWOayyvycz0scH8c1pqli1G-pfs1PZgwtfsJKMCjIWLNAqAfE1HmAWNqvTn-ifsJGftKOIhCjch4LnBATJFAYRo6N1HAYKrmEQaQYppaGLv-s-tga-hfoAZfKBuAd-vdrgdAZaIrKGPfummQVJpaKNAfEpg5GgTJBgcAUwZaI0HgWwYQbfipA3MQTISocgcoQIZ-kWlQagU-qfhzJgc0FIQHLIcQvIZwUoWBAoZQZYezFoSIcfroegeKoTDjmqlcGAX8pARhkQU1NYSVpocIdQZxGIRUM4U2K4bmpfLgawaYd4WjL4WoZYbgXCigTQSEVIc-PEvfMwbVusgQd4bfizD4bWLcokSUaoXYUEaIY4eIWEesvjDyu4bkfgXuHEfZhOCVmjH4TwUYAHJUToeKqEeTKUFgnXjgc0XIQU
bgW8DATwUIfwUUUUf0WkTUUMWGhAQ0b8DkSYV4XTlYdIfwQRl0WUTwdLMscEasRkapsmt%2Bh%2BMEB0natJBorjMuHmE1IAuNEblsTqEToBOCm8Yhp7t4gjOEG0BXMfOGrjJBvcZ-HRuoe7hCfjFYCipHvoOYOEJuACUfDAjFJNBGOoQ8SLm%2BD4Cxn2GBOIAwVZngikOHLqmCQVhQvTBsPCTCV6BFkfHxr3rZFiRdttC7sCNTsfLpD9u7iydoqXndFJKKmRqGOEFJCZKKWBG9lyZcfgmhlUQ4YMVcYFtkQiFTiEDzqydpOyY0KCT8j9DyXqcmo1tZFgfYZEt6OfnJDWsRnwuflgddvvh3t6HrHlr9nOLnDpCznEaqfQe6YYS-ubKkQFPSjQjNIGcbu2L6QAZGZkUSW4dkHmPafcGRqItTF4LHGzJaJfKSXcBqSpjqRmdBsSnytEAmj%2BomSGVqamdcXCdmYwf4gVqXNWvmgGQ2cck2SmWGUbKMY0ZmdGS3LElskNGlkil4nTuNKMaXtJLoZZnGalAAWQqOZsHgUxpqIERXgKHvtJAfudrlhdOkZGQYSFmfgRjsfkXTvEbASceQcHIEX-o-s2WGdee-uMfea0Y%2BbMTMS%2BeOaUecReaGRIYwZGZ-v%2BcyZweoW%2BYcfZnwdoTQZ%2BUOVBR6R-jITEbsQhQkVoUkfLGoe2RhXQVhRGWGcYXhQ%2BQRc%2BUReUYRuBfnJecOSIhEcpE0XBZ6IBYcd0TIQfJQWRbQXocMeEdzFERMbEXxYhQJYIcxe%2BehaJU4eJQnhmbBbRQBfRSQSBffpuEpVEooklPmi6QzhZrGb2R9NnkcueEQruQOQjH-s1klG8iRsJpZWdgmQOY%2BZBXURsa7t8Z-mhUZVOVjNJskKTGkOsZyuZaRp5TbtJfhSRffutJ0UgYxTwSWaReWeLjEvBN1h3NQDAHwHAMSMQENt2iNmNrgHwBwFAJSDiAgBABOnDovANmrmthrnRPUtvjtqoBuC0GuA4L0voB4HKdom4PyPUDUB4HNQ4EYBieKXyeII0PNR4INQ4MNYUMEOtdIJtZYNtaNaMNIHtbqKYOYAdUdZ0k4GdTKKtZNToNoBte7odW4KNVde9cEPUHoJ9SNbtadZaPta9dddom0qMuYBKFkX9TtcDUNV9SYEYDDaNWWcjfZrYCmNBIMGjcgTjXjSDQjTjTpJItoOEMrJdoKAZXDVtQjU4PjfDf9dTW9YzTjd7oKEzaDUfPtZTftcKMpJwgpfTTTSzQTSLQzbDfUK4RdR3kFBDDjQYL9aLbDQYCSRzYTUrSjd9RrUljNDjRMkLczbDWBgbaDSXiYNoCbereLSjcODjQ3JbYzeqicNJnTdrWrY7RbW7azUjV7W7aej7dbVrYHXERbYdQ4AHA7bDcfE4jjRNXrQLZHaNRaETfUHTWCA3lHJUAYJ7cHWMInUHcLbDbIpoMie7RLW7WBdokDOINRmXR9b7cHazRbUqJ0I0KzW3W3Q3YXTbdEK3XXVrf3YPcYJ8FHH4ARuGp-oEvnR7m%2BDnd3Z0gXYbaNZ-ksNEHnRXcGdWTjSpA0CcHPUvS0IrcHV4NPe%2BHjedDjC9Y3V3QfYPZLW4JfbPQ4JELHS-W7fxuMIPR9VffPXfW7eoL0dNf-QYoPeEK%2Bn4PMdPdZETZYPyHxj-bfSpPLXKPcb0FYIJK7cfZg7-UgzfaDWsgHb-XjV0CvXg1bb-TIvvaDYdfFgMBuH4OINEIw3-bnZ4Og1LIPSkPCCndg7fXbbw5zUClQwjcYqacI2Lb-TFkfb-aGNPY6DNfoF-ZdWQ4zdYIvabcYrgToJ2KnWwywxQ53cfeowjQMkTco8HbI27eEPYEtUSUWuEBtRAQ3gYB4DXfoOI0bdEYPWBpgQIwjZ-sPQ4B4-XcffYLHQZn44zaTRkG-cfeY7-fbSo0XQZNPWA-o3w0k6NYSaNH4Allwek6Daz
aAzY1cNIxk9fbnZdWU6DR-XLV7aMvtXwfLXgtfpE%2BXRUzg20yE0Q5k-cUJXo5LZw9zqA7Ez08HWbUSafYkIQ7famPtWqKQx03wwg4U99cYyzZEOIGE37bow3h2NnftdU%2BQ7fesLHfk6fW7RnX7fgbcb8RtaWAAzoItU3CswjdY4PYZmbDYK86ozM9QxChc0s4U37Uc4zQ8dPQYPEyc84HcoPccdvX88c8C8HWBjogUwjbA3C9s7naM%2BU2M0Qz8%2B0-i9C4Y8S8C4RoC2S2823cspddi7-YEKk6C1HV04GKS7MymJUECI86kzIc9kPcy8vbXUWjjYS6NfMJS8s1NVCzU7i8izI5BG4FkJK9Q0FAc-7I4QNCq0ixi11HKzqyzXoFPb7X4JAhNT2B8zK4TYGDSktYEuoEqIK30qYvqxI4g-S3i56-K16wa7DTkHrZDVYJ-RxNq4zUWj5oi266Dca0CwjVsk7FAxCs2Lo%2BC77ei1G3G0M94c46y1mz9abtPYk43aDFDRanrcstbE3JG7DR-esFDZoME%2Bs0XY2-M2NXffWI0xMA4%2Bm36%2By9G2K99bmzHV7QOz26NXTnI5ImO9IjOARsfEWh-Q654EM6OxDN21A-UK60XVaxm4zVs8qx2y49O4DQ0HKJqHKC29PcYL8v6702%2BMtdEvqH0qO4y37YEPkBbRFqYAfNW90yc%2B%2B5oDu0S9C068e1u3%2B969Q6HQw%2BuGcn0uzQ0M3Zw2Ha-donS3fWcr%2B02yjY8KG8B5B768vR6wR3u%2BB99SiuPTrGR4PZC8q6O9Pbspe9R3h8dZ%2BRxB2Hjcx37Ze0ByjZuwtceyx-cdhyJ%2BOVx7GyzQaBDLx3mKA1kV0qfdokEd%2B1-aB8bX7TOKfbdXC03rDEJ5SoJ7JPUH0iBKB-pzjW4xCzJ%2BHftRAYs0Q6B7k5e3e7HeNaJxZy%2B0BQZwOKzXWa48sq25feKLm1xjp8e8OxY9Zwx3lYW6Te59c0qzAy3nDAw8h3TaB7IkTQC2%2B8h1hwW2m3jYBKW9%2BNoYJzs6BqO%2BWHraBxOxeOTJw2fe-WE%2BLCrToD0uhlA0JPZyc3OPfQ4HTjG1S47aO6QWk0TGVyi8KLo-u-LQcUewpy5xXbZnjaJzyzl%2B7hC6J%2B87HcMjR-FxJ0XQGS2yh4t6E858fXlyFxbfMxjcR4TSF37XfTPfpxHXe7h5czzftwy6HRNxQy-au3xn2wjRSYJ%2BKKO4SSCEsAw6J3Gnl61zA3l7HVF9c3l1k-oKB10hkA9yi3d7u36%2BWoDXbTD7117dR5l1Y7j-hyY-R27SD-pz-p0LmypGiwc7N3bdEJszhkTQqJV7oynSYDT8HQ2x84Lzg8DVnaa2Z%2BKAK0-Uj9oi29t37Zd9bJe%2B56J58rNyXebb97ffDxXbmyG27YB1tTryRxLb9XskeyjH04J9j0N1T3j%2BO0D2C5dfmxEEwxzzR7kwZtDZTx9SinU43ReBB4R19z64799QAqA4TjY-KjBEsHEOp0Ly%2B3b4g3fUTqUaY--dE0PWH2b8dbm1c0Ly0Cbxt6dzg4j%2BX7fa1G6Gu5w79UeBg6bxi%2BNZXwd3x4X-B%2BZy58j0H5UAt6E9Z4CLN1aN38HTV4kL35Iy5M3xH8Dce5Y8fZkPpyyrP7Dcx3Z2Z1X-n50lP%2BHw7wf3x5fIJKAy9ce6NYExa5x8h1v%2B386GZ9RxHQaNLdvedwY7k3v-g874f6J-H9vy32-z4ZrBieSNP3msBCD8Mb%2B8zYUJ-hSB7VW%2BOge4qMgx7UZG2GSIqnNjnzkA8AEAbCIvkVxERlcq%2BTqhvg3ho5tcO%2BNiK2GtzPpbczxSvONEJh-gwE-JMsluBuqQxM0aiGFt-GJRMY3obYfQl9BsjVZyy1AxPJQwZwYJvYmwQJO8hqh3gQsNpWyLehURaQUSbuD
gbjC4FpkeBA4eGHjBirioviwUfMr6W0FMlMi7uUPPEAPhvB0MQucPAVFRJ5xoQEgpJNmy0F-EdB4KYmKlm0ijQsygwdvMKl3AOI4I3Uf7E6FEw-Y-sfpUIcCDKySZ2cAiQ%2BJIj9AMoCgyWOIXkTVgvpToyiBGAENBCPwwUewdVFyhviwlkUVwHNC01VqpRgkj4CxFH0XKzh1BX8VhLjFYzuI8w8KZ3IlSoSiZKEYQhIYlTLJNDtIAWPxL2W%2BgYItIDwDoUoMhiaYn0vQhQe0P5Kdk-ix-QyIYJhhyCmcQsRQdsNtSPkHQRYeTEmxFS5F3KLrQRPQP4E%2BJrKukPzKfH1JlhOMkMOoNwjOxGRDE3QhrComTSkxFBwqfFP7keDqEYUWiXITZCVC7DCh1wn4IHiAhFwYYFQraFUJCA1CBEWqV6M8KEHekOsAw2gbWBjxGZIYykGoPcLSFg0gQ8iU8lZiVIPB5gxpc0mWQOhnkY0ew2EjkL%2BI-D3BNKc8GzjJGMjP0oaU5COFuiijphFIv4jBjIRpAwhJOZcmyNyhgjvQEI7mEqT%2BKsobgW5PYX7mTwCotE%2BIwUWmTnDqoPEDwiwCfE8LKhH0A4K%2BPCiKE3DURSqMob8FKCVDDUEYXEU9ENQEjLRJyUkRAhODDCOEUCIEOMLgSTDcocoqtD3jPLmiaoTAk5NaP4SpjGBYsWrNmI8y5jzhoVfKjYNTgRFqU3I-aPjGeKxQQIH6RBNAkSEakgcNISkDgLwF4RiAS%2BQgSvg6rr55cmuXqttkxwpDtwbsHxCQmrDVx2wV2EoUHgxGhFihBg08JlkbGthzgsMQEfJDtgSRJQsZa3HFFzGEkCxUwloTnAYR3DHIxKDIL2XuKhjfYRkO0L8AFh%2BxbKZQd-GTSnG5FlQT49UC%2BLHEygzSzpMINkLKC2N-cggxBMzB-RuhgoFwpWDGLJH%2BIQhv6aKlSNnBMp6SMkSqCCHAnjVg05GKujWWzh2QfxHeEuP%2BNVCLJVh9qI0J%2BK3FeAgJKJECYRNImwTYo8E%2B2GwRzRiMkUoyXThskvxsSn0JkWkoLBoliY9xmwM0gSOEkyJRJlE9CaIn2ywtAy7EkyPJhyxmJGggSeMWBClQiQCcJEmsSMMmTCk9MMo8kXWRwTCjlQCIgoa6ORH3xTwaI5VOUJ9FYi-RsiAMRGQ%2BFQYQoyJYMvGIkkJQPisWdjAKCRGRT%2BhppPuuKOTEYJcy1mDBCSMSQAY1GsCIwUKkzhgo-RDwWoRJRCxtY9MjWaEXGPaAKl7JzIqMf5nJGiZFWRox4rqJGI1TRiUkY3Dem8RkJippaUqRpJxzdTyJGcdYdpMQlgi4EKUn1C4KZz%2BSKMrI5YWykmlvYYhMYgKQsD0F-Fph%2BzHSVNwjzig%2BMm6Q7Mnn5Q%2BBGsGCPkTcCGG5iLWsI00ZsWyn6gHSbcMfCAHqqUAsBCAdsavm7HzxexK2JHAOJ6pbYMce8cQeNDxxSCqpno%2B1JBhzTJ4IhLAr0mwN5JC5ShMMocH4LdEojEUxQq8Py29BuDwZSeSGSVNWk4SDMGQ70KMJBDyIARqmcaZoJyhZT8Zj09SkLAhlp5zwNY7IZdKjxczgSpKSmWuLWkLlLxWsHqdQmES7DVwygnwbDMhQIYYIk5Ccs9IQhAA)
### Logs
```shell
-
```
### System Info
```shell
-
```
| bug | low | Critical |
2,504,695,641 | awesome-mac | 🎉 Add DevonThink | ### 🪩 Provide a link to the proposed addition
https://www.devontechnologies.com/apps/devonthink
### 😳 Explain why it should be added
DevonThink has been around for many years. It is very mature, fully featured, and has features that are not available in any other software package on macOS. I was surprised it wasn't on the list already.
Thanks for a great Awesome-list!
### 📖 Additional context

### 🧨 Issue Checklist
- [X] I have checked for other similar issues
- [X] I have explained why this change is important
- [X] I have added necessary documentation (if appropriate) | addition | low | Minor |
2,504,702,350 | yt-dlp | [facebook:reel] Extract full description | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
How can I download a video from a Facebook Reel with its full description by using --extractor-descriptions?
Other social media sites are not a problem, but for Facebook the description comes out truncated.
Ex. this link https://www.facebook.com/reel/1184723189500416
- Full: Can't say it enough...YOU GUYS are doing amazing things for Peter. Therapy starts September 7th and goes until September 27th.The fundraiser ends tonight at 12am EST and all future purchases/donations will be set aside for his Spring session.We love you and are incredibly grateful for your support and love for Peter and our family.
- When i downloaded: Can't say it enough...YOU GUYS are doing amazing things for Peter. Therapy starts September 7th and goes until September 27th.The fundraiser ends tonight...
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU --write-description https://www.facebook.com/reel/1184723189500416
[debug] Command-line config: ['-vU', '--write-description', 'https://www.facebook.com/reel/1184723189500416']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds [f0bb28504] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3.2-2021-02-02-full_build-www.gyan.dev, ffprobe 4.2.1, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.09.02.232855/SHA2-256SUMS
Current version: nightly@2024.08.19.232821 from yt-dlp/yt-dlp-nightly-builds
Latest version: nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds
Current Build Hash: 6848bc8023593441d18ac46e1d638ad87b1bb80c14622bbaa66fd8282549243b
Updating to nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds ...
[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.09.02.232855/yt-dlp.exe
Updated yt-dlp to nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds
[debug] Restarting: C:\bin\yt-dlp.exe -vU --write-description https://www.facebook.com/reel/1184723189500416
[debug] Command-line config: ['-vU', '--write-description', 'https://www.facebook.com/reel/1184723189500416']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds [e8e6a982a] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3.2-2021-02-02-full_build-www.gyan.dev, ffprobe 4.2.1, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1832 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds)
[facebook:reel] Extracting URL: https://www.facebook.com/reel/1184723189500416
[facebook] Extracting URL: https://m.facebook.com/watch/?v=1184723189500416&_rdr
[facebook] 1184723189500416: Downloading webpage
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 1184723189500416: Downloading 1 format(s): 1675931759859977v+527464266312477a
[info] Writing video description to: 15K views · 995 reactions | Can't say it enough...YOU GUYS are doing amazing things for Peter. Therapy starts September 7th and goes until September 27th.The fundraiser ends tonight at 12am EST and all future purchases⧸donations will be set aside for his Spring session.We love you and are incredibly grateful for your support and love for Peter and our family. | Perfect Peter | Perfect Peter · Original audio [1184723189500416].description
ERROR: Cannot write video description file 15K views · 995 reactions | Can't say it enough...YOU GUYS are doing amazing things for Peter. Therapy starts September 7th and goes until September 27th.The fundraiser ends tonight at 12am EST and all future purchases⧸donations will be set aside for his Spring session.We love you and are incredibly grateful for your support and love for Peter and our family. | Perfect Peter | Perfect Peter · Original audio [1184723189500416].description
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 4303, in _write_description
FileNotFoundError: [Errno 2] No such file or directory: "15K views · 995 reactions | Can't say it enough...YOU GUYS are doing amazing things for Peter. Therapy starts September 7th and goes until September 27th.The fundraiser ends tonight at 12am EST and all future purchases⧸donations will be set aside for his Spring session.We love you and are incredibly grateful for your support and love for Peter and our family. | Perfect Peter | Perfect Peter · Original audio [1184723189500416].description"
```
| site-enhancement,triage | low | Critical |
2,504,716,500 | node | Expose `setMaxListeners` property on `AbortSignal` | ### What is the problem this feature will solve?
Right now, there is no stable way to change the `maxEventTargetListeners` of an `AbortSignal` without importing from `node:events`.
This makes it impossible to write isomorphic library code that adds more than the default 10 event listeners, without having to resort to workarounds such as
```ts
/**
* A workaround to set the `maxListeners` property of a node EventEmitter without having to import
* the `node:events` module, which would make the code non-portable.
*/
function setMaxListeners(maxListeners: number, emitter: any) {
const key = Object.getOwnPropertySymbols(new AbortController().signal).find(
(key) => key.description === "events.maxEventTargetListeners"
);
if (key) emitter[key] = maxListeners;
}
```
This relies on the `events.maxEventTargetListeners` symbol description, which, to my knowledge is not part of any public API and could change at any time.
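For reference, the workaround above compiles to the following plain JavaScript; the symbol description it matches on (`events.maxEventTargetListeners`) is an internal detail and could change in any Node release, and on non-Node runtimes the lookup finds nothing and the call is a silent no-op:

```javascript
// Plain-JS version of the workaround: locate Node's internal
// kMaxEventTargetListeners symbol by its description on a throwaway
// AbortSignal, then assign the new limit directly to the target.
function setMaxListenersWorkaround(maxListeners, emitter) {
  const key = Object.getOwnPropertySymbols(new AbortController().signal).find(
    (k) => k.description === "events.maxEventTargetListeners"
  );
  if (key) emitter[key] = maxListeners;
}

const signal = new AbortController().signal;
setMaxListenersWorkaround(20, signal);
```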
### What is the feature you are proposing to solve the problem?
Add a `AbortSignal.setMaxListeners` instance method so isomorphic code could do something like
```js
if ('setMaxListeners' in signal) signal.setMaxListeners(20)
```
### What alternatives have you considered?
Alternatively, if this is not an option as it could pollute a spec-defined object with additional properties, use `Symbol.for` instead of a unique `Symbol()` for `kMaxEventTargetListeners` and make that symbol part of the official API.
---
I would be willing to implement this feature. | feature request,web-standards | medium | Major |
2,504,751,971 | deno | Bug: npm `postinstall` script of workspace member never run | If a workspace member has a `package.json` with a `postinstall` script, it will never be executed.
## Steps to reproduce
1. Run `mkdir foo`
2. Add a `foo/package.json` with these contents:
```json
{
"scripts": {
"postinstall": "echo 'postinstall'"
},
"dependencies": {
"preact": "^10.23.2"
}
}
```
3. Add a `deno.json` at the root with these contents:
```json
{
"workspace": ["./foo"]
}
```
4. Run `deno install --allow-scripts`
The `postinstall` script is never executed.
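The steps above can be scripted as follows; the final `deno install --allow-scripts` step requires Deno 2.x on PATH, so it is only echoed here to keep the script itself runnable anywhere:

```shell
#!/bin/sh
set -eu

# Recreate the reproduction layout from the steps above.
mkdir -p repro/foo
cat > repro/foo/package.json <<'EOF'
{
  "scripts": {
    "postinstall": "echo 'postinstall'"
  },
  "dependencies": {
    "preact": "^10.23.2"
  }
}
EOF
cat > repro/deno.json <<'EOF'
{
  "workspace": ["./foo"]
}
EOF

# Run this inside repro/ (expected to print 'postinstall', but does not):
echo "deno install --allow-scripts"
```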
Version: Deno 2.0.0-rc.0+ce6b675 | bug,node compat,workspaces | low | Critical |
2,504,755,519 | material-ui | Constant refreshing of website in firefox firefox version 129.0.2 (64-bit) | ### Steps to reproduce
Windows 11, Firefox 129.0.2 (64-bit)
- Occurs with and without extensions installed (adblock, ublock etc)
Go to any page on the MUI website; the site constantly refreshes every second and the following logs repeat in the console:

https://github.com/user-attachments/assets/f42ec726-91e3-4449-93df-f464e0d5d075
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: refresh flicker website | out of scope,docs | low | Critical |
2,504,761,145 | deno | Bug: npm `postinstall` script of dependency in workspace member not executed | The `postinstall` script of a third party dependency of a workspace member that has a `package.json` is never executed.
## Steps to reproduce
1. Clone https://github.com/zemili-group/moonrepoV3
2. Run `deno install --allow-scripts`
3. Run `cd apps/zemili/frontend`
4. Run `deno task dev`
Output:
```sh
▲ [WARNING] Cannot find base config file "./.svelte-kit/tsconfig.json" [tsconfig.json]
tsconfig.json:2:13:
2 │ "extends": "./.svelte-kit/tsconfig.json",
╵ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
Related https://github.com/denoland/deno/issues/25416
Version: Deno 2.0.0-rc.0+ce6b675
| bug,node compat,workspaces | low | Critical |
2,504,766,351 | pytorch | "TypeError: unhashable type: non-nested SymInt" with `torch.compile` | ### 🐛 Describe the bug
MWE:
```python
import torch
@torch.compile
def topk(
A: torch.Tensor,
B: torch.Tensor,
A_idx: torch.Tensor,
B_idx: torch.Tensor,
):
k_ = min(50, B.size(0))
dist = torch.cdist(A, B)
dist.diagonal(int(A_idx - B_idx)).fill_(torch.inf)
return dist.topk(k_, 1, False)
torch.set_default_device("cuda:0")
A = torch.randn(500, 100)
B = torch.randn(501, 100)
A_idx = torch.tensor(0)
B_idx = torch.tensor(1)
res = topk(A, B, A_idx, B_idx)
B = torch.randn(5, 100)
res = topk(A, B, A_idx, B_idx)
```
`torch.compile` will fail at the last line, i.e., the second `topk` call:
```python
W0904 17:07:39.823000 140480204642112 torch/_dynamo/variables/tensor.py:715] [0/0] Graph break from `Tensor.item()`, consider setting:
W0904 17:07:39.823000 140480204642112 torch/_dynamo/variables/tensor.py:715] [0/0] torch._dynamo.config.capture_scalar_outputs = True
W0904 17:07:39.823000 140480204642112 torch/_dynamo/variables/tensor.py:715] [0/0] or:
W0904 17:07:39.823000 140480204642112 torch/_dynamo/variables/tensor.py:715] [0/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W0904 17:07:39.823000 140480204642112 torch/_dynamo/variables/tensor.py:715] [0/0] to include these operations in the captured graph.
W0904 17:07:39.823000 140480204642112 torch/_dynamo/variables/tensor.py:715] [0/0]
/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:150: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py:663: UserWarning: Graph break due to unsupported builtin None.Tensor.diagonal. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
Traceback (most recent call last):
File "<private_work_dir>/_topk_compile_test.py", line 25, in <module>
res = topk(A, B, A_idx, B_idx)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "<private_work_dir>/_topk_compile_test.py", line 5, in topk
@torch.compile
File "<private_work_dir>/_topk_compile_test.py", line 14, in torch_dynamo_resume_in_topk_at_14
dist.diagonal(int(A_idx - B_idx)).fill_(torch.inf)
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 948, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
return _compile(
^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1185, in transform_code_object
transformations(instructions, code_options)
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 582, in transform
tracer.run()
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2451, in run
super().run()
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 893, in run
while self.step():
^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 805, in step
self.dispatch_table[inst.opcode](self, inst)
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2642, in RETURN_VALUE
self._return(inst)
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2627, in _return
self.output.compile_subgraph(
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1123, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/root/micromamba/envs/py311/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1318, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1409, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1390, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/__init__.py", line 1951, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1505, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 69, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 954, in aot_module_simplified
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 687, in create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 125, in aot_dispatch_base
flat_fn, flat_args, fw_metadata = pre_compile(
^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1939, in pre_compile
flat_fn, flat_args, fw_metadata = wrapper.pre_compile(
^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 948, in pre_compile
flat_args_with_synthetic_bases, synthetic_base_info = merge_view_inputs(
^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1336, in merge_view_inputs
arg_to_old_idx_map = {arg: i for (i, arg) in enumerate(fwd_inputs)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1336, in <dictcomp>
arg_to_old_idx_map = {arg: i for (i, arg) in enumerate(fwd_inputs)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/micromamba/envs/py311/lib/python3.11/site-packages/torch/__init__.py", line 448, in __hash__
raise TypeError("unhashable type: non-nested SymInt")
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
TypeError: unhashable type: non-nested SymInt
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
If I remove the line `@torch.compile` or `dist.diagonal(...)`, it works well as usual.
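For reference, `dist.diagonal(offset)` in the MWE selects the index set below; this pure-Python sketch of the semantics (a hypothetical helper, not part of the repro) shows what the data-dependent `int(A_idx - B_idx)` offset controls:

```python
def diagonal_indices(rows, cols, offset):
    """Indices (i, j) selected by the `offset` diagonal of a rows x cols matrix.

    offset > 0 selects a diagonal above the main one, offset < 0 below,
    matching the `offset` argument of torch.Tensor.diagonal for 2-D tensors.
    """
    if offset >= 0:
        return [(i, i + offset) for i in range(min(rows, cols - offset))]
    return [(i - offset, i) for i in range(min(rows + offset, cols))]

print(diagonal_indices(3, 4, 1))   # [(0, 1), (1, 2), (2, 3)]
print(diagonal_indices(3, 4, -1))  # [(1, 0), (2, 1)]
```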
### Versions
```python
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: TencentOS Server 4.2 (x86_64)
GCC version: (Tencent Compiler 12.3.1.2) 12.3.1 20230912 (TencentOS 12.3.1.2-3)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.38
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.241-1-tlinux4-0017.4-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.239.06
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7K62 48-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 126%
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5190.64
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh | triaged,oncall: pt2,module: dynamic shapes,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,504,780,876 | vscode | Merge editor code lens not aligned | The code lenses should be aligned:

```json
{
"languageId": "typescript",
"base": "export class MultiDiffEditorViewModel extends Disposable {\n\tprivate readonly _documents = observableFromValueWithChangeEvent(this.model, this.model.documents);\n\n\tpublic readonly items = mapObservableArrayCached(this, this._documents, (d, store) => store.add(this._instantiationService.createInstance(DocumentDiffItemViewModel, d, this)))\n\t\t.recomputeInitiallyAndOnChange(this._store);\n\n\tpublic readonly focusedDiffItem = derived(this, reader => this.items.read(reader).find(i => i.isFocused.read(reader)));\n\tpublic readonly activeDiffItem = derivedObservableWithWritableCache<DocumentDiffItemViewModel | undefined>(this, (reader, lastValue) => this.focusedDiffItem.read(reader) ?? lastValue);\n",
"input1": "export class MultiDiffEditorViewModel extends Disposable {\n\tprivate readonly _documents = observableFromValueWithChangeEvent(this.model, this.model.documents);\n\n\tprivate readonly _documentsArr = derived(this, reader => {\n\t\tconst result = this._documents.read(reader);\n\t\tif (result === 'loading') {\n\t\t\treturn [];\n\t\t}\n\t\treturn result;\n\t});\n\n\tpublic readonly isLoading = derived(this, reader => this._documents.read(reader) === 'loading');\n\n\tpublic readonly items = mapObservableArrayCached(this, this._documentsArr, (d, store) => store.add(this._instantiationService.createInstance(DocumentDiffItemViewModel, d, this)))\n\t\t.recomputeInitiallyAndOnChange(this._store);\n\n\tpublic readonly focusedDiffItem = derived(this, reader => this.items.read(reader).find(i => i.isFocused.read(reader)));\n\tpublic readonly activeDiffItem = derivedObservableWithWritableCache<DocumentDiffItemViewModel | undefined>(this, (reader, lastValue) => this.focusedDiffItem.read(reader) ?? lastValue);\n",
"input2": "export class MultiDiffEditorViewModel extends Disposable {\n\tprivate readonly _documents: IObservable<readonly RefCounted<IDocumentDiffItem>[]> = observableFromValueWithChangeEvent(this.model, this.model.documents);\n\n\tpublic readonly items: IObservable<readonly DocumentDiffItemViewModel[]> = mapObservableArrayCached(\n\t\tthis,\n\t\tthis._documents,\n\t\t(d, store) => store.add(this._instantiationService.createInstance(DocumentDiffItemViewModel, d, this))\n\t)\n\t\t.recomputeInitiallyAndOnChange(this._store);\n\n\tpublic readonly focusedDiffItem = derived(this, reader => this.items.read(reader).find(i => i.isFocused.read(reader)));\n\tpublic readonly activeDiffItem = derivedObservableWithWritableCache<DocumentDiffItemViewModel | undefined>(this, (reader, lastValue) => this.focusedDiffItem.read(reader) ?? lastValue);\n",
"result": "export class MultiDiffEditorViewModel extends Disposable {\r\n\tprivate readonly _documents: IObservable<readonly RefCounted<IDocumentDiffItem>[]> = observableFromValueWithChangeEvent(this.model, this.model.documents);\r\n\r\n\tpublic readonly items = mapObservableArrayCached(this, this._documents, (d, store) => store.add(this._instantiationService.createInstance(DocumentDiffItemViewModel, d, this)))\r\n\t\t.recomputeInitiallyAndOnChange(this._store);\r\n\r\n\tpublic readonly focusedDiffItem = derived(this, reader => this.items.read(reader).find(i => i.isFocused.read(reader)));\r\n\tpublic readonly activeDiffItem = derivedObservableWithWritableCache<DocumentDiffItemViewModel | undefined>(this, (reader, lastValue) => this.focusedDiffItem.read(reader) ?? lastValue);\r\n",
"initialResult": "export class MultiDiffEditorViewModel extends Disposable {\r\n\tprivate readonly _documents: IObservable<readonly RefCounted<IDocumentDiffItem>[]> = observableFromValueWithChangeEvent(this.model, this.model.documents);\r\n\r\n\tpublic readonly items = mapObservableArrayCached(this, this._documents, (d, store) => store.add(this._instantiationService.createInstance(DocumentDiffItemViewModel, d, this)))\r\n\t\t.recomputeInitiallyAndOnChange(this._store);\r\n\r\n\tpublic readonly focusedDiffItem = derived(this, reader => this.items.read(reader).find(i => i.isFocused.read(reader)));\r\n\tpublic readonly activeDiffItem = derivedObservableWithWritableCache<DocumentDiffItemViewModel | undefined>(this, (reader, lastValue) => this.focusedDiffItem.read(reader) ?? lastValue);\r\n"
}
``` | bug,merge-editor | low | Minor |
2,504,822,259 | langchain | Python 3.13 needs Numpy > 2.0 | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
If you attempt to install `numpy<2.0.0,>=1.26.0`, you will get build failures on Python 3.13.
`numpy>=2.0.0` works fine.
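A minimal sketch of the constraint logic (the helper name is hypothetical, not a LangChain API): NumPy only started shipping Python 3.13 support in its 2.x line, so any pin that caps NumPy below 2.0 cannot resolve on that interpreter.

```python
def numpy_pin_works_on_py313(specifier: str) -> bool:
    """Rough check: a pin that caps NumPy below 2.0 cannot work on
    Python 3.13, since only the 2.x line supports that interpreter.

    `specifier` is a pip-style spec such as "numpy<2.0.0,>=1.26.0".
    """
    return "<2" not in specifier.replace(" ", "")

print(numpy_pin_works_on_py313("numpy<2.0.0,>=1.26.0"))  # False
print(numpy_pin_works_on_py313("numpy>=2.0.0"))          # True
```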
### Error Message and Stack Trace (if applicable)
https://github.com/numpy/numpy/issues/24318
### Description
The numpy dependency needs bumping up to >=2.0.0
### System Info
Python 3.13-dev on Mac OS M1 | investigate,todo | medium | Critical |
2,504,837,635 | go | log/slog: examples can't be run on playground | ### What is the URL of the page with the issue?
https://pkg.go.dev/log/slog@go1.23.0#example-Handler-LevelHandler
### What is your user agent?
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36
### Screenshot
_No response_
### What did you do?
When I run the example on the playground, it fails with the following error:
### What did you see happen?
```
package play
prog.go:6:2: use of internal package log/slog/internal/slogtest not allowed
```
### What did you expect to see?
the example should output the right result | Documentation,NeedsFix,FixPending | low | Critical |
2,504,852,337 | PowerToys | PowerRename font is adjusted. | ### Description of the new feature / enhancement
Can the font size of the PowerRename UI be customized, without changing the system settings?
### Scenario when this would be used?
As soon as possible. The font feels smaller after the update than before, and the display is inconsistent across different screens.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,504,906,949 | godot | Clicking on a button in Select Screen popup prints an error | ### Tested versions
4.4.dev1
4.1.2.stable
### System information
Fedora Linux 40 (KDE Plasma) - Wayland - Vulkan (Forward+)
### Issue description
When clicking on a button in this menu popup

Prints this error:
```
platform/linuxbsd/x11/display_server_x11.cpp:2084 - Condition "prev_parent == p_parent" is true.
```
Similar error is printed on Windows too
### Steps to reproduce
Right click on Make editor floating button
Click on one of the screen buttons
Error is printed in Output
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:editor,needs testing | low | Critical |
2,504,944,381 | next.js | Nested layout inheriting classes from outer layout on page load | ### Link to the code that reproduces this issue
https://github.com/Zain-ul-din/mdx-templates
### To Reproduce
I have no idea whether this is a feature or a bug, but it's not working the way I want it to. I'm using the Next.js app router and my directory structure is the following:
```file
- app
  - docs
    - page.tsx
    - layout.tsx
  - layout.tsx
  - page.tsx
```
I'm trying to add classes to the body inside `docs/layout.tsx`. It works at runtime, but when I refresh the page they are replaced by the root layout's (`app/layout.tsx`) body classes.
```tsx
// app/layout.tsx
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang="en">
<body className={`${inter.className} root`}>{children}</body>
</html>
);
}
```
```tsx
// app/docs/layout.tsx
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang="en">
<body className={`${inter.className} prose`}>
<h1 className="text-2xl my-4">
👑 This is root {`'/(docs)/docs/layout.tsx'`}
</h1>
{children}
</body>
</html>
);
}
```
https://github.com/user-attachments/assets/42ac0b7d-1327-4d05-9452-8258b039265c
### Current vs. Expected behavior
Nested layout classes should not be overridden by root layout.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 15742
Available CPU cores: 16
Binaries:
Node: 20.10.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: 14.2.7
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Repo Link: https://github.com/Zain-ul-din/mdx-templates
Thank you in advance | bug,Navigation | low | Critical |
2,504,977,275 | deno | Some permissions don't work on Android | Version: Deno 1.46 and lower
There are people running Lume on Android devices (Android 14, Termux, running in a PRoot) and they got the following error:

Deno is run with the `--allow-all` flag, but it seems the permission is not applied to the `Deno.networkInterfaces()` method.
| bug | low | Critical |
2,505,005,116 | flutter | Calling VirtualDisplayController::resize() just before VirtualDisplayController::dispose() causes a crash | ### Steps to reproduce
In PlatformViewsController.java, there is a channelHandler object that implements the PlatformViewsChannel.PlatformViewsHandler interface. This interface includes two methods: resize() and dispose(). If resize() is called immediately before dispose(), it may lead to a crash.
1) When resize() is called with viewId = 0, it triggers a call to vdController.resize(). This call includes a Runnable object (lambda) as its third parameter, which is added to the message queue using View::postDelayed().



2) Then we call the dispose() function, which invokes vdController.dispose() and removes vdController from the HashMap.

3) After calling dispose(), the lambda previously added to the message queue using View::postDelayed() begins execution. Here, we attempt to retrieve the dimensions of the render target: vdController.getRenderTargetWidth(), vdController.getRenderTargetHeight().

4) In VirtualDisplayController::getRenderTargetWidth() we call renderTarget.getWidth()

5) Inside the SurfaceProducerPlatformViewRenderTarget, the producer object was released when vdController.dispose() was called, resulting in producer being null.

### Expected results
Calling VirtualDisplayController::resize() just before VirtualDisplayController::dispose() shouldn't cause a crash
### Actual results
The app crashes.
### Code sample
<details open><summary>Code sample</summary>
No code. This issue is obvious from the attached screenshots of Flutter code from the steps to reproduce section.
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details>
<summary>Crash log</summary>
```
08-02 15:25:22.167 18044 18044 E AndroidRuntime: FATAL EXCEPTION: main
08-02 15:25:22.167 18044 18044 E AndroidRuntime: java.lang.NullPointerException: Attempt to invoke interface method 'int io.flutter.view.TextureRegistry$SurfaceProducer.getWidth()' on a null object reference
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at io.flutter.plugin.platform.SurfaceProducerPlatformViewRenderTarget.getWidth(SurfaceProducerPlatformViewRenderTarget.java:25)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at io.flutter.plugin.platform.VirtualDisplayController.getRenderTargetWidth(VirtualDisplayController.java:132)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at io.flutter.plugin.platform.PlatformViewsController$1.lambda$resize$0$io-flutter-plugin-platform-PlatformViewsController$1(PlatformViewsController.java:352)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at io.flutter.plugin.platform.PlatformViewsController$1$$ExternalSyntheticLambda0.run(Unknown Source:8)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at android.os.Handler.handleCallback(Handler.java:958)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at android.os.Handler.dispatchMessage(Handler.java:99)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at android.os.Looper.loopOnce(Looper.java:230)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at android.os.Looper.loop(Looper.java:319)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at android.app.ActivityThread.main(ActivityThread.java:8919)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at java.lang.reflect.Method.invoke(Native Method)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:578)
08-02 15:25:22.167 18044 18044 E AndroidRuntime: at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1103)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| platform-android,a: platform-views,P2,needs repro info,team-android,triaged-android | low | Critical |
2,505,016,781 | vscode | Error while fetching extensions. Failed to fetch | Type: <b>Feature Request</b>
Hi, I am trying to install an extension (Live Server), but when I search, this text appears: "Error while fetching extensions. Failed to fetch". What can I do?
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<!-- generated by issue reporter --> | info-needed | medium | Critical |
2,505,047,775 | godot | Windows 11 Crashes Just opening a project/create node | ### Tested versions
4.3 stable
### System information
Windows 11
### Issue description
Constant errors in console showing the following:
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
OpenGL API 3.3.0 NVIDIA 560.94 - Compatibility - Using Device: NVIDIA - NVIDIA GeForce RTX 3080
Editing project: C:/Users/Joshua/Documents/tutorial
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.3.280 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3080
ERROR: Condition "err != VK_SUCCESS && err != VK_SUBOPTIMAL_KHR" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2344)
ERROR: Condition "err != VK_SUCCESS && err != VK_SUBOPTIMAL_KHR" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2344)
ERROR: Condition "err != VK_SUCCESS && err != VK_SUBOPTIMAL_KHR" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2344)
ERROR: Condition "err != VK_SUCCESS && err != VK_SUBOPTIMAL_KHR" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2344)
```
### Steps to reproduce
Open a project, click + Add Node
### Minimal reproduction project (MRP)
A brand new empty project | bug,needs testing,crash | low | Critical |
2,505,048,842 | flutter | `GoRouterState.of` behaves differently between `builder` and `errorBuilder` built widgets | ### What package does this bug report belong to?
go_router
### What target platforms are you seeing this bug on?
Web
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
go_router:
dependency: "direct main"
description:
name: go_router
sha256: "2ddb88e9ad56ae15ee144ed10e33886777eb5ca2509a914850a5faa7b52ff459"
url: "https://pub.dev"
source: hosted
version: "14.2.7"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
logging:
dependency: transitive
description:
name: logging
sha256: "623a88c9594aa774443aa3eb2d41807a48486b5613e67599fb4c41c0ad47c340"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
sdks:
dart: ">=3.5.1 <4.0.0"
flutter: ">=3.19.0"
```
</details>
### Steps to reproduce
See [dartpad example](https://dartpad.dev/?id=ee1702fd06d189fe31c71bae355d736e) and [gist](https://gist.github.com/jackd/ee1702fd06d189fe31c71bae355d736e)
Summary: `GoRoute`'s `builder` and `GoRouter`'s `errorBuilder` have the same signature, `Widget Function(BuildContext context, GoRouterState state)`. Inside a `builder`, widget descendants can call `GoRouterState.of` to get the relevant state. This is not the case for descendants of widgets built with `errorBuilder`.
I'm not entirely certain if this is a bug, but it's certainly unexpected/surprising behaviour in my opinion.
### Expected results
`GoRouterState.of(context)` to return the `GoRouterState` that resulted in an error when called from within widgets created in `errorBuilder`.
### Actual results
Thrown `GoError` for the `errorBuilder` case.
### Code sample
- [dartpad](https://dartpad.dev/?id=ee1702fd06d189fe31c71bae355d736e)
- [gist](https://gist.github.com/jackd/ee1702fd06d189fe31c71bae355d736e)
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
Future<void> main() async {
WidgetsFlutterBinding.ensureInitialized();
runApp(const SimpleApp());
}
class SimpleApp extends StatelessWidget {
const SimpleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: GoRouter(routes: [
GoRoute(path: '/', builder: (context, state) => RouteExplorer(state))
], errorBuilder: (context, state) => RouteExplorer(state)));
}
}
class RouteExplorer extends StatelessWidget {
final GoRouterState state;
const RouteExplorer(this.state, {super.key});
@override
Widget build(BuildContext context) {
GoRouterState? fetchedState;
try {
fetchedState = GoRouterState.of(context);
} catch (_) {}
return Scaffold(
body: Column(mainAxisAlignment: MainAxisAlignment.center, children: [
...[
state,
fetchedState
].map((s) => Text(s == null ? 'Null state' : 'uri.path = ${s.uri.path}')),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: ['/', '/foo']
.map((path) => ElevatedButton(
onPressed: () => context.go(path), child: Text(path)))
.toList())
]));
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.22631.4112], locale en-AU)
• Flutter version 3.24.1 on channel stable at C:\Users\thedo\dev\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (2 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\thedo\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.9+0--11185874)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.3)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.8.34330.188
• Windows 10 SDK version 10.0.22621.0
[✓] Android Studio (version 2023.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0--11185874)
[✓] VS Code (version 1.92.2)
• VS Code at C:\Users\thedo\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4112]
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.113
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.54
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| package,P2,p: go_router,team-go_router,triaged-go_router | low | Critical |
2,505,076,333 | langchain | safe_mode Parameter in ChatMistralAI Class Should Not Be Set to False by default or safe_prompt body parameter not sent to mistral api | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llm = ChatMistralAI(
client=httpx.Client(
base_url=llm_creds.api_base,
headers={
"Content-Type": "application/json",
"Accept": "application/json",
},
timeout=120,
),
temperature=temperature,
max_tokens=max_tokens,
model_name="mistral-large",
safe_mode=False,
mistral_api_key=llm_creds.api_key,
max_retries=5,
), None
```
### Error Message and Stack Trace (if applicable)
error 422 from Mistral API called through langchain
### Description
I am encountering an issue with the ChatMistralAI class where the `safe_mode` parameter, which corresponds to the `safe_prompt` parameter in the [Mistral documentation](https://docs.mistral.ai/capabilities/guardrailing/), is sent as False by default. According to the Mistral API, the `safe_prompt` parameter cannot be set to False; if it is, the API returns a 422 error. To disable the `safe_prompt` feature, the parameter should not be sent at all.
## Steps to Reproduce:
1. Initialize an instance of the ChatMistralAI class with the default settings.
2. Call a method that triggers an API request with the `safe_prompt` parameter set to False.
3. Observe the 422 error returned by the Mistral API.
## Expected Behavior:
The safe_mode parameter should not be set to False by default or sent by default. Instead, it should either be set to True by default or omitted from the request altogether, unless explicitly set by the user.
## Actual Behavior:
When the safe_mode parameter is set to False (the current default) or sent by default, the Mistral API returns a 422 error, indicating that the safe_prompt parameter should not be False.
## Suggested Fix:
Update the ChatMistralAI class to ensure that the safe_prompt body parameter is omitted from the request unless explicitly specified by the user with `safe_mode` enabled.
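A minimal sketch of the suggested behavior (the function and parameter names below are illustrative, not the actual ChatMistralAI internals): the request body only carries `safe_prompt` when the user explicitly opts in.

```python
def build_payload(model: str, messages: list, safe_mode: bool = False) -> dict:
    """Build a chat request body, omitting safe_prompt unless explicitly enabled."""
    payload = {"model": model, "messages": messages}
    if safe_mode:
        # Only send the flag when it is True; sending safe_prompt=False triggers a 422.
        payload["safe_prompt"] = True
    return payload
```

With this shape, the default request is identical to one that never mentions `safe_prompt`, so the API's validation is never tripped.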
### System Info
mac and linux, and last version of langchain | 🤖:bug,investigate,🔌: mistralai | low | Critical |
2,505,097,787 | PowerToys | Screen Ruler Black Screen | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Screen ruler
### Steps to reproduce
Using the shortcut to launch the screen ruler and then selecting a ruler type. As soon as the measurements start showing up the screen goes completely black, apart from the line and values the Screen Ruler provides.
### ✔️ Expected Behavior
No black screen.
### ❌ Actual Behavior
The ruler is engaging with applications in the background and my wallpaper, but it's all hidden behind a black screen. I can scroll to change the pixel sensitivity, and I can see the ruler find contrast in different parts of the screen where an application's window might have ended.
### Other Software
Software running in the background:
- Netflix (installed via Edge)
- Thorium
- Whatsapp
- Windows Phone Link | Issue-Bug,Needs-Triage,Product-Screen Ruler | low | Minor |
2,505,199,286 | deno | Feat: simplify lint configuration | From time to time a user mixes up `lint.exclude` with `lint.rules.exclude`. The most recent case of that is here https://discord.com/channels/684898665143206084/1280845408758927401/1280845408758927401
```json
{
"lint": {
"exclude": ["no-window"]
}
}
```
instead of:
```json
{
"lint": {
"rules": {
"exclude": ["no-window"]
}
}
}
```
It's an easy mistake to make given that we use the same schema for files and lint rules. I wonder if we can avoid that confusion. One idea could be to make the config more like other linters:
```json
{
"lint": {
"rules": {
"no-window": false
}
}
}
``` | config,lint | low | Minor |
2,505,237,469 | go | proposal: image: add (Rectangle).PointsBy for iterating over points as iter.Seq[Point] | ### Proposal Details
## Summary
I propose adding a new method `PointsBy` to `image.Rectangle`
that returns an `iter.Seq[image.Point]` for points within the rectangle.
This function would allow developers to directly iterate over points in a rectangle,
making the code more straightforward and reducing potential errors.
The `PointsBy` can be implemented as follows.
```go
func (r Rectangle) PointsBy(delta Point) iter.Seq[Point] {
return func(yield func(Point) bool) {
for y := r.Min.Y; y < r.Max.Y; y += delta.Y {
for x := r.Min.X; x < r.Max.X; x += delta.X {
if !yield(image.Pt(x, y)) {
return
}
}
}
}
}
```
By this function, the common pattern with a `image.Rectangle` `r`
```go
for y := r.Min.Y; y < r.Max.Y; y++ {
for x := r.Min.X; x < r.Max.X; x++ {
p := image.Pt(x, y)
// Do something with p.
}
}
```
could be simplified to
```go
for p := range r.PointsBy(image.Pt(1, 1)) {
// Do something with p.
}
```
## Background
- Iterating over points within a rectangle is a common operation in image processing tasks.
- This operation requires manually writing nested loops every time.
While not complex, it can indeed be cumbersome and repetitive.
- Since `image.Rectangle` does not necessarily start at `(0, 0)`, it's crucial to loop from `Min` to `Max`, as documented.
However, there are cases where developers mistakenly start from `0`.
Given this background, it may be beneficial to provide a dedicated API from the `image` package
to better address this demand for iterating over points.
## Benefits
- Provides a direct API for iterating **over `image.Point`s** not over Xs and Ys within an `image.Rectangle`.
- Eliminates the need for additional labels when breaking out of nested loops under specific conditions, such as
```go
pointsLoop:
for y := r.Min.Y; y < r.Max.Y; y++ {
for x := r.Min.X; x < r.Max.X; x++ {
// if (x, y) meets a certain condition
break pointsLoop
}
}
// then
```
- Supports non-unit steps (e.g., `y += 2`, `x += 3`) by taking a `delta` parameter, keeping flexibility.
- Aligns well with the `iter` package, enhancing integration with the iterator-based approach.
## Considerations
Beyond this proposal, a few additional considerations might be relevant.
- Whether to use `image.Point` to express delta (as `image.Rectangle.Size` does with `func() image.Point`) or `x, y int` (as `image.Image.At` does with `func(x, y int) color.Color`).
- The `iter` package was added recently, and its adoption and consensus within the community might still be evolving. | Proposal | low | Critical |
2,505,298,296 | vscode | [Accessibility] The dragging movement in split panel functionality needs a single pointer trigger. | - VS Code Version: 1.88.0
- OS Version: MacOS Sonoma (v14.6.1)
**[Issue]**
The split panel functionality uses a dragging movement with no other single-pointer trigger.
**[User Impact]**
Users with dexterity and mobility disabilities may be unable to perform dragging movements to use this functionality.
**[Code Reference]**
```
<div class="monaco-sash vertical" style="left: 288px;"></div>
```
**[Recommendation]**
Ensure all functionality that uses a dragging movement can be operated by a single pointer without dragging. One way to meet this requirement is to require users to perform a series of single-pointer, non-path-based interactions instead of dragging. Keyboard alternatives are not sufficient to meet this requirement. Exceptions include freeform drawing or games that require dragging.
Screenshot:


| bug,accessibility,sash-widget | low | Minor |
2,505,330,392 | vscode | [Accessibility] The Explorer section view needs role and name information. | - VS Code Version: 1.88.0
- OS Version: MacOS Sonoma (v14.6.1)
**[Issue]**
The tab panel for "Explorer (Ctrl+Shift+E) - You have 1 unsaved changes" tab lacks name and role information.
**[User Impact]**
Screen reader users will be unable to determine that these controls reveal panels of content and which panel is currently revealed.
**[Code Reference]**
```
<div class="part sidebar left pane-composite-part" id="workbench.parts.sidebar" **role="none"**
(...)>
</div>
```
**[Recommendation]**
Ensure page tabs provide state and role.
For tabs, the following information is expected:
- Each panel container must have role="tabpanel".
- If the tablist has a visible label, the tablist element must have aria-labelledby set to the ID of the labelling element. Otherwise, the tablist element must have aria-label set to the accessible name.
- Each tab must have aria-controls set to the ID of its corresponding tabpanel.
- Tabpanel elements must have aria-labelledby set to the ID of their corresponding tab.
- If the tablist is vertically oriented, it must have aria-orientation="vertical".
**Note**: Also when contributes any new extension webviews/tree views, ensure to have aria attributes corresponding to the Explorer tabpanel section.
**Steps to Reproduce:**
1. Open VSCode.deva and under Explorer panel section, the accessible `role="tabpanel"` and `aria-labelledby` is missing.
Screenshot:

| bug,accessibility | low | Minor |
2,505,361,829 | flutter | Incorrect highlight color for hovered and focused InkResponse | ### Steps to reproduce
1. Create `InkResponse`, `InkWell`, or a material button with an `overlayColor` that resolves to colors with different opacities for hovered and focused states.
2. a. If the `overlayColor` resolves hovered and focused at the same time as hovered (1st `InkResponse` in the code sample), hover while unfocused, focus while hovered, and unhover while focused.
b. If the `overlayColor` resolves hovered and focused at the same time as focused (2nd `InkResponse` in the code sample), focus while unhovered, hover while focused, and unfocus while hovered.
3. Compare the highlight color with the usual focused/hovered color and the color from the `overlayColor`.
### Expected results
The highlight color is correct and matches the color from the `overlayColor`.
### Actual results
The highlight color has the opacity of the other color from the `overlayColor`.
This is caused by 2 issues:
1. `_InkResponseState.updateHighlight` resolves the `overlayColor` using `statesController.value` that can have multiple states. As a result, the highlight for the focused state can get the color for the hovered state and vice versa.
2. `_InkResponseState.build` sets the correct color for the highlight, but `InkHighlight._alpha` isn't updated and the highlight is painted with the old opacity.
A possible workaround is to set the widget's `statesController` to a controller that stores only the last state. For example:
```dart
class SingleWidgetStateController extends WidgetStatesController {
@override
void update(WidgetState state, bool add) {
if (add) value.retainAll([state]);
super.update(state, add);
}
}
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: [
SizedBox.square(
dimension: 70,
child: InkResponse(
onTap: () {},
overlayColor: WidgetStateProperty.resolveWith(
(Set<WidgetState> states) {
if (states.contains(WidgetState.hovered)) {
return Colors.amber.withOpacity(0.2);
}
if (states.contains(WidgetState.focused)) {
return Colors.blue.withOpacity(0.8);
}
return Colors.transparent;
},
),
child: const Center(child: Text('1')),
),
),
SizedBox.square(
dimension: 70,
child: InkResponse(
onTap: () {},
overlayColor: WidgetStateProperty.resolveWith(
(Set<WidgetState> states) {
if (states.contains(WidgetState.focused)) {
return Colors.blue.withOpacity(0.8);
}
if (states.contains(WidgetState.hovered)) {
return Colors.amber.withOpacity(0.2);
}
return Colors.transparent;
},
),
child: const Center(child: Text('2')),
),
),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Expected:
https://github.com/user-attachments/assets/28afcfdb-03f7-4264-8fd2-4778e3c161f0
Actual:
https://github.com/user-attachments/assets/933a3129-a4ce-4c9b-855e-0a07ed3cd004
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-UA)
• Flutter version 3.24.1 on channel stable at /Users/andrew/Applications/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (2 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/andrew/Library/Android/sdk
• Platform android-34, build-tools 35.0.0
• Java binary at: /Users/andrew/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.0
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Users/andrew/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.114
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
2,505,368,548 | PowerToys | PowerToys Workspaces: a shortcut per workspace.

### Description of the new feature / enhancement
It would be useful to be able to assign a dedicated shortcut to each Workspace.
For example, pressing Windows+Ctrl+1 would launch my 1st workspace directly, pressing Windows+Ctrl+2 would launch my 2nd workspace directly (perhaps first closing all apps opened by any previous workspace?), and so on.
### Scenario when this would be used?
This is very practical because I can go directly from powering on my system to launching my workspace with a single shortcut. Relatedly, why not also offer the option of launching a specific Workspace on startup?
### Supporting information
_No response_