id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,517,505,257 | godot | Materials on instances using viewport textures do not properly interpret nodepaths | ### Tested versions
Reproducible in 4.3stable.
### System information
MacBook Air M1
### Issue description
When adding a ViewportTexture to a material on an instanced mesh scene, the ViewportTexture tries to interpret the NodePath to the SubViewport as a local path:
(in the scope of itself as the root, i.e. if the SubViewport is a direct child of the mesh, the path would simply be "SubViewport"),
when the path is instead generated as a global/absolute path:
<img width="516" alt="Screenshot 2024-09-10 at 3 14 53 PM" src="https://github.com/user-attachments/assets/def3b998-2156-4a82-8fb1-3a91a1e7a206">
(in the scope of the scene owner, i.e. if the SubViewport is a direct child of the mesh, the path would be "MeshNode/SubViewport").
This can be temporarily fixed by manually setting the path, but any changes are reset upon scene reload.
When the same process is done with a mesh originally created in the scene, it functions correctly, so my best guess is that an instanced scene treats NodePaths as local by default.
Apologies if this is a duplicate or difficult to understand, this is my first issue. :D
### Steps to reproduce
1. Create a separate scene with a MeshInstance3D
2. Instance that to another scene
3. Add a SubViewport child to that instance
4. Add a new StandardMaterial3D, make local to scene, add Viewport texture
5. Set subviewport as viewport in viewport texture
6. Reload scene
### Minimal reproduction project (MRP)
[MRP2.zip](https://github.com/user-attachments/files/16949879/MRP2.zip)
| bug,topic:editor,confirmed | low | Minor |
2,517,513,058 | godot | [3.x] Empty `resource_path` when loading with `ResourceLoader.load()` and `no_cache=true` | ### Tested versions
Tested with 3.5.3.stable, 3.6.stable
### System information
Fedora 40, Godot 3.6, GLES3
### Issue description
When loading a saved resource with `ResourceLoader.load()` and `no_cache=true` when there is already a cached version of the resource available, the returned loaded resource does not have a `resource_path` (it is an empty string). The `resource_path` is correct when using `ResourceLoader.load()` with `no_cache=false`.
In my game I am printing some loaded resource paths for logging purposes. In Godot 3.5.3 this worked fine, but after upgrading to Godot 3.6 it stopped working. In the process of making an MRP, I found that this issue seems to happen in Godot 3.5.3 as well, but somehow is not affecting my project (not sure why).
This might be intended functionality, since a new resource is being created with a different ID than the cached version. However it seems counterintuitive to me that a freshly loaded unmodified resource would not have a `resource_path` associated with it.
### Steps to reproduce
1. Load a resource from code using `load()` or `ResourceLoader.load(no_cache=true)` and print its `resource_path`, it should be correct.
2. Directly after this, load that resource from code using `ResourceLoader.load(no_cache=true)`, its `resource_path` will be an empty string.
### Minimal reproduction project (MRP)
[ResourcePathTestMRP.zip](https://github.com/user-attachments/files/16949790/ResourcePathTestMRP.zip)
| documentation | low | Minor |
2,517,536,198 | kubernetes | Idea: Pod-level probes or exclude some containers from pod readiness | I swear we have discussed this before but I cannot find it.
Today, probes are per-container. This makes sense in a lot of ways - if the specific container is failing liveness, you usually want to restart that specific container. Also, we know that a large majority of pods run with a single container, so this has rarely been a major issue.
That said, I think there are cases where it's imperfect. This came up in a user issue a few weeks ago and I meant to ping the old issue (which I cannot find), so I am opening this to discuss.
The specific case in question was a pod with multiple containers - one main app container and a small number of background-helper containers (think logs/metrics). The user had configured a readiness probe for the "main" app and it was stable, but one of the background helpers was crashy. It had triggered crashloop-backoff and was therefore not-ready. This makes the whole pod not-ready, which surprised the user. When I looked at it from their POV, I kind of agreed. Why isn't there a way to express "It's OK for the helper to crash, as long as my main app is still serving"?
So the idea here is pod-level probes. At least ReadinessProbe makes sense, and I think one could argue about Liveness. Startup is somewhere between. Obviously (to me, anyway?) exec probes present a problem, but network probes seem OK.
An alternate approach could be a way to express "this container should not be considered for pod readiness" or something.
I open this for discussion.
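The alternate approach above could be sketched as a simple aggregation rule. This is only an illustration of the idea, not an existing Kubernetes API; the `exclude_from_readiness` flag is a made-up field name:

```python
def pod_ready(containers):
    """Sketch: a pod is Ready when every container that is NOT
    excluded from readiness reports ready. 'exclude_from_readiness'
    is a hypothetical field, not part of the real Pod spec."""
    return all(
        c["ready"]
        for c in containers
        if not c.get("exclude_from_readiness", False)
    )

containers = [
    {"name": "app", "ready": True},
    {"name": "log-helper", "ready": False, "exclude_from_readiness": True},
]
print(pod_ready(containers))  # True: the crashy helper no longer gates readiness
```

Under this rule the crashlooping helper from the user report would stop flipping the whole pod to not-ready.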
/cc @tallclair @SergeyKanzhelev @tssurya @danwinship @aojea @lauralorenz | sig/network,sig/node,kind/feature,triage/accepted | medium | Critical |
2,517,602,266 | pytorch | Indexing Numpy array with Tensor Gives Unexpected Results | ### 🐛 Describe the bug
Indexing a numpy array by a tensor creates a result with unexpected shape.
``` python
import torch as t
import numpy as np
# works
a = t.ones((100, 512, 512))[t.tensor([1])]
print(a.shape)
# torch.Size([1, 512, 512])
# works
b = np.ones((100, 512, 512))[np.array([1])]
print(b.shape)
# (1, 512, 512)
# different shape
c = np.ones((100, 512, 512))[t.tensor([1])]
print(c.shape)
# (512, 512)
```
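A plausible mechanism (my guess, demonstrated with a stand-in class rather than a real Tensor): a one-element `torch.Tensor` implements `__index__`, and NumPy treats any object exposing `__index__` as a scalar integer index, which drops the axis:

```python
import numpy as np

class OneElement:
    """Stand-in for a 1-element torch.Tensor, which also defines __index__."""
    def __init__(self, value):
        self.value = value
    def __index__(self):
        return self.value

a = np.ones((100, 512, 512))
print(a[np.array([1])].shape)   # (1, 512, 512): an array index keeps the axis
print(a[OneElement(1)].shape)   # (512, 512): __index__ makes it a scalar index
```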
I also tested with Numpy 2.1.1 with the same result.
Let me know if this belongs in a Numpy issue instead.
### Versions
```
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torch-cubic-spline-grids==0.0.9.dev4+gbf81755
[pip3] torchaudio==2.4.0
[pip3] torchinterp1d==1.1
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @mruberry @rgommers | triaged,module: numpy | low | Critical |
2,517,673,513 | deno | Add lint error when spec test traverses up ancestor directories | It seems common that people are doing this, so let's lint for it and report an error.
For example:
```
{
"args": "run --config ../../testdata/example/deno.json test.js"
}
``` | tests | low | Critical |
2,517,686,796 | PowerToys | file names to clipboard | ### Description of the new feature / enhancement
In Windows File Explorer, select one or more files, right-click, and send them to the clipboard as text. This could have options to include the full path or not.
### Scenario when this would be used?
web developers wanting to add links to multiple files to css, etc.
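The kind of formatting being asked for could be sketched like this (the option names are made up for illustration, not a real PowerToys API):

```python
from pathlib import PureWindowsPath

def files_to_clipboard_text(paths, mode="path"):
    """Hypothetical sketch of the requested options:
    'path' -> full path per line, 'name' -> file name only."""
    if mode == "path":
        return "\n".join(str(PureWindowsPath(p)) for p in paths)
    if mode == "name":
        return "\n".join(PureWindowsPath(p).name for p in paths)
    raise ValueError(f"unknown mode: {mode}")

files = [r"C:\site\img\logo.png", r"C:\site\css\main.css"]
print(files_to_clipboard_text(files, mode="name"))  # logo.png / main.css, one per line
```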
### Supporting information
Shift + right click offers "Copy as path", but a lot more options could be added, such as copy as name or copy as table, with a choice of which columns to include (file size, modified date, etc.). | Needs-Triage | low | Minor |
2,517,716,963 | TypeScript | Suggest @ts-expect-error as Quick Fix | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
When detecting TypeScript errors, the context menu currently provides `Ignore this error message` as "Quick Fix" option, inserting `// @ts-ignore` above the affected line. Would it be possible to suggest something like `Expect this error message` as an additional option, inserting `// @ts-expect-error` instead? | Suggestion,Awaiting More Feedback | low | Critical |
2,517,719,566 | godot | GDExtension: Classes without default constructor break reloading | ### Tested versions
4.3-stable and before
### System information
All systems
### Issue description
GDExtension allows classes to be registered without a default constructor. They can either have custom constructors (static methods) that are invokable from GDScript, or non-exposed constructors only available in the binding language. This can be done by not providing `create` and `recreate` function pointers during class registration, and optionally marking a class as `abstract`.
However, without a `recreate` callback, Godot emits the error message:
> Extension marked as reloadable, but attempted to register class 'MyClass' which doesn't support reloading. Perhaps your language binding don't support it? Reloading disabled for this extension.
This might be a bit too much because:
- Not having a `recreate` callback shouldn't be a problem until there are actual instances of such a class to be reloaded.
- Even then, Godot shouldn't disable reloading for the entire extension just because of one class.
The only workaround seems to be providing a dummy `recreate` function, although I haven't tested this, and I don't know if it has any other implications.
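The behavior argued for above could be sketched as follows (plain Python pseudocode, not Godot's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class ExtClass:
    name: str
    has_recreate: bool
    live_instances: int

def reload_blockers(classes):
    """Sketch of the proposed check: reloading is only a problem for classes
    that lack a `recreate` callback AND currently have live instances."""
    return [c.name for c in classes
            if not c.has_recreate and c.live_instances > 0]

classes = [
    ExtClass("MyClass", has_recreate=False, live_instances=0),    # fine: nothing to re-create
    ExtClass("OtherClass", has_recreate=True, live_instances=3),  # fine: can be re-created
]
print(reload_blockers(classes))  # []: nothing should block reloading here
```

Only a non-empty blocker list would need to disable reloading, and even then only for the affected classes rather than the whole extension.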
### Steps to reproduce
Set `reloadable = true` in the `.gdextension` file.
Register a class in GDExtension with `create_instance_func` and `recreate_instance_func` function pointers set to null.
### Minimal reproduction project (MRP)
N/A | bug,topic:gdextension | low | Critical |
2,517,743,091 | TypeScript | @import suggestions for auto-imports in VS Code | ### 🔍 Search Terms
`@import`, auto-import, JSDoc
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Taken from a chat with @andrewbranch:
When working in a JS codebase, it's common to want to import types. As of [TS 5.5 we have the JSDoc `@import` tag](https://devblogs.microsoft.com/typescript/announcing-typescript-5-5/#the-jsdoc-import-tag) which is wonderful!
However, VS Code does not offer it as an option for auto imports. Consider the following help menu:

I like the `Change 'Customer' to 'import...` suggestion; what I'd like even more is a suggestion to `Add /** @import {Customer} from ...`
### 📃 Motivating Example
It doesn't improve the language. It improves the developer experience of JSDoc.
### 💻 Use Cases
1. What do you want to use this for?
Working in VS Code more effectively.
2. What shortcomings exist with current approaches?
Current suggestions are more verbose.
3. What workarounds are you using in the meantime?
Writing imports by hand.
| Suggestion,Awaiting More Feedback | low | Minor |
2,517,756,858 | PowerToys | Incomplete uninstall of the old version when updating Powertoys | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
For the past few months, every update performed by PowerToys itself has left behind countless registry entries of the previous version (nearly 1000!) under the key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components". In my case the current version is 0.84.0 with the product number "40B68059F2C87FE419340ACCB9C61E59". When I update to version 0.84.1, I will most likely find that number 40B68059F2C87FE419340ACCB9C61E59 from the previous version in the "Components" key again, as has happened several times before.
### ✔️ Expected Behavior
The registry entries of the previous version in the key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products" should disappear as it happened earlier approximately until version 0.80.0.
### ❌ Actual Behavior
The outdated entries are left behind; I have had to delete them manually each time.
### Other Software
_No response_
| Issue-Bug,Needs-Triage | low | Minor |
2,517,776,118 | flutter | Incorrect use of parent widget error persists even when [Expanded] flex is 0 | ### Steps to reproduce
Wrap any widget (e.g. Text) with `Expanded` and provide a flex of 0. Make sure the parent widget is a non-flex widget that does not support `Expanded` widgets as children.
I encounter this a lot in my Flutter web project, as I reuse widgets across different viewports and use `Wrap` or `Row` depending on whether I need the child widget to expand to the whole screen width.
I know I can avoid the error with a condition like `if (width < 480)` that skips the `Expanded` wrapper entirely, but I write it as `flex: width < 480 ? 0 : 1` to keep the code readable and not repetitive.
### Expected results
Since the flex is 0 and the `Expanded` widget is technically doing nothing (I assume), I would expect no error even when the parent is not a flex widget (`Column`/`Row`).
### Actual results
The logged error is:
```
Incorrect use of ParentDataWidget.
The ParentDataWidget Expanded(flex: 0) wants to apply ParentData of type FlexParentData to a
RenderObject, which has been set up to accept ParentData of incompatible type WrapParentData.
Usually, this means that the Expanded widget has the wrong ancestor RenderObjectWidget. Typically,
Expanded widgets are placed directly inside Flex widgets.
The offending Expanded is currently placed inside a Wrap widget.
```
### Code sample
<details open><summary>Code sample</summary>
```dart
Wrap(
children: [
Expanded(
flex: 0,
child: Text('The expanded widget in this case is doing nothing but flutter still expects a flex parent widget'),
),
],
),
```
</details>
### Logs
<details open><summary>Logs</summary>
```console
Incorrect use of ParentDataWidget.
The ParentDataWidget Expanded(flex: 0) wants to apply ParentData of type FlexParentData to a
RenderObject, which has been set up to accept ParentData of incompatible type WrapParentData.
Usually, this means that the Expanded widget has the wrong ancestor RenderObjectWidget. Typically,
Expanded widgets are placed directly inside Flex widgets.
The offending Expanded is currently placed inside a Wrap widget.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 15.0 24A5331b darwin-arm64, locale en-LB)
• Flutter version 3.24.0 on channel stable at /Users/hassico/Desktop/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (6 weeks ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
• Android SDK at /Users/hassico/Library/Android/sdk
• Platform android-34, build-tools 33.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.93.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A5331b darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A5331b darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.120
! Error: Browsing on the local area network for In English Please Royal Academy . Ensure the device is unlocked and attached with a cable or
associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,c: proposal,a: error message,P3,team-framework,triaged-framework | low | Critical |
2,517,833,484 | PowerToys | Workspaces: Unreal Editor not detected | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
1. Open Unreal Editor
2. Open Workspace Editor
3. Create New Worksapce
### ✔️ Expected Behavior
Workspace creation detects any/all Unreal Editor windows/instances
### ❌ Actual Behavior
No windows/instances detected
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,517,864,242 | godot | `Node2D.rotation == NAN` causes issues with 2D editor | ### Tested versions
- Reproducible in v4.3.stable
### System information
Windows 7, Godot v4.3.stable
### Issue description
When `Node2D.rotation == NAN` in the editor, the node is not displayed anywhere, and if you select the node and press the "Center view" button, the 2D editor starts working incorrectly: it won't display any UI elements (like the origin or the viewport outline), and the rulers have no numbers on them (as in the image below).

### Steps to reproduce
1. In the scene tree dock, select the node2d(or any other 2d node) with `rotation == NAN`.
2. Press **"Center view"** button.
### Minimal reproduction project (MRP)
The MRP includes a Readme.txt to help you reproduce the bug.
[github-nan-rotation-reproduction-project.zip](https://github.com/user-attachments/files/16954949/github-nan-rotation-reproduction-project.zip) | bug,topic:editor,topic:2d | low | Critical |
2,517,957,285 | vscode | Build failed: `Extension host with pid #### exited with code: 0, signal: unknown.` | Build: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=292441
Changes: https://github.com/Microsoft/vscode/compare/3442819...ff86643
Numerous build failures, all with matching failure logs.
| vscode-build | low | Critical |
2,518,081,149 | TypeScript | React's ComponentProps type issues in TypeScript 5.6.2 | ### 🔎 Search Terms
react, forwardRef, component props, ComponentProps
### 🕗 Version & Regression Information
- This changed between versions 5.5.4 and 5.6.2 - but in version 5.5.4 there are other related bugs 🤷♂️
### ⏯ Playground Link
https://www.typescriptlang.org/play/?jsx=4&ts=5.6.2#code/JYWwDg9gTgLgBAbwLACg5xgTzAUzgUQBscQcA7GAFWxwBpV0tc4BhCcCM8mABSgjABnemgw1W7SFwp8BggOrAYACwgBXGACUcAMxGNxbDtKo19cHdADuAQygATbTrg3BcAEaucAMWt3HuuZMeL5Qtg5O2mT2OFDeamQAxjDAnEHiTiIAvhb8IHAA5FA4NskFANyoqDoJyalkFn4RugA8lLRwPAB8ABQMcImSnNwAXHA9YPxCYzwdxTpjTm1dAJRwALxdcNqlMAB0O8kAchAxIitjE1OCM3AAZIjzAPyLrZRdWWub2yXJB78wE4xRD9YowNRQBqeQQ+JoBHR9UToQbGbguNwJADWZAgVgarjgoXC8KiMTitRSnBaWJxeI6NNxZC65hWlRQWSqKGCElRFEUKiJ-icsiEbTgOAAHjByPY3EQSNxqLgtut+kYpNwRQolKoNEt3vcQUiBoRXIIjjZSC84IIYFBgGQAOZs9DoVzWyhwAA+cASMR0Dpw9hdcCybNQgzItp5Gr5OsFzWc60aYSFukR6DFkul0TlxFIFCVeGTBUEYBsZAKvX66EmcjG6uGcYFcOF12W5nQ81eOhaAAlKABZAAy8oLMGZ-S+W2QxrBEIaZDUhEIIay53DXPElBwtuWGzgnuzMrcCBRsZg1odOliMabMA5oied5M4qlJ5fipoLQrmC6NbgZ9GxMLUWmA7h-2NMYuAAN1ifpoJwOCoE3bltEEZcYAARgPcCZHbYIIGcPCYH5ZQE3hLpyjgAB6GiMGUPBigwwh4GANwrBwYAHGoqxlEwOAAAMAHkQCUH8yEwDoil0KtBKeVA0N3TCACYDx3PczyGEwxkI4jtO4MiKKcLIqNo+iVCY5TWLgdiBmgYpkjGc971LOBoGAR0HRsQg4DrIQ4AAajgeY-KmPZUCAA
### 💻 Code
I noticed that when using a custom `forwardRef` function (I use it so my generic components are typed correctly), I get different results in these two situations:
```ts
import {
type ElementType,
type ComponentProps,
type ComponentPropsWithoutRef,
type ComponentType,
forwardRef as baseForwardRef,
type ForwardRefRenderFunction,
type Ref,
} from 'react';
// setup code
function forwardRef<T, P>(
component: (props: P, ref: Ref<T>) => React.ReactNode,
): (props: P & {ref?: Ref<T>}) => React.ReactNode {
return baseForwardRef(
component as unknown as ForwardRefRenderFunction<unknown, unknown>,
);
}
type FooProps<T extends ElementType> =
ComponentPropsWithoutRef<T> & {
className?: string;
as?: T | undefined;
};
const Foo = forwardRef(
<T extends ElementType = 'span'>(
props: FooProps<T>,
ref: Ref<HTMLElement>,
) => {
return null;
},
);
type Test<T> = T extends infer Component
? Component extends ComponentType<any>
? ComponentProps<Component>
: never
: never;
// ⚠️ different results
type Result1 = ComponentProps<typeof Foo>; // the result is weird: why `Omit<any, 'ref'>`?
type Result2 = Test<typeof Foo>; // the result is correct: Foo's original props + ref prop.
```
### 🙁 Actual behavior
Type `Result1` is wrong:
```ts
type Result1 = Omit<any, "ref"> & {
className?: string;
as?: ElementType | undefined;
} & {
ref?: Ref<HTMLElement> | undefined;
}
```
and `Result2` is correct:
```ts
type Result2 = PropsWithoutRef<ComponentProps<T>> & {
className?: string;
as?: T | undefined;
} & {
ref?: Ref<HTMLElement> | undefined;
}
```
### 🙂 Expected behavior
Types `Result1` and `Result2` are same:
```ts
type Result = PropsWithoutRef<ComponentProps<T>> & {
className?: string;
as?: T | undefined;
} & {
ref?: Ref<HTMLElement> | undefined;
}
```
### Additional information about the issue
_No response_ | Bug | low | Critical |
2,518,122,312 | pytorch | Real tensor prop for bool cast on x.eq().any() call fails export | ### 🐛 Describe the bug
Minified repro for AOTInductor torchbench test (with real tensor prop wrapper on export call):
```
python benchmarks/dynamo/torchbench.py --accuracy --inference --bfloat16 --device cuda --export-aot-inductor --only cm3leon_generate
```
This toy example fails with real-tensor tracing mode. Surprisingly(?) it exports fine when the bool() cast is removed, even without real tensor prop:
```
class Foo(torch.nn.Module):
    def forward(self, x):
        return bool(x.eq(0.1).any())

model = Foo()
inputs = (torch.randn(64),)
with torch._functorch.config.patch(fake_tensor_propagate_real_tensors=True):
    ep = export(model, inputs, strict=False)
```
Output:
```
I0910 15:58:55.387000 2040540 torch/fx/experimental/symbolic_shapes.py:3317] create_unbacked_symbool u0 [0, 1] (_subclasses/fake_impls.py:392 in local_scalar_dense)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] Data dependent variable 'u0' allocated at:
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/test/export/test_export.py", line 8136, in <module>
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] run_tests()
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/testing/_internal/common_utils.py", line 1273, in run_tests
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] unittest.main(argv=argv)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/main.py", line 101, in __init__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] self.runTests()
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/main.py", line 271, in runTests
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] self.result = testRunner.run(self.test)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/runner.py", line 184, in run
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] test(result)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 84, in __call__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self.run(*args, **kwds)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 122, in run
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] test(result)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 84, in __call__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self.run(*args, **kwds)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 122, in run
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] test(result)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 650, in __call__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self.run(*args, **kwds)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/testing/_internal/common_utils.py", line 3112, in run
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] self._run_custom(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/testing/_internal/common_utils.py", line 3084, in _run_custom
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] super_run(result=result)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 591, in run
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] self._callTestMethod(testMethod)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/home/pianpwk/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] method()
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/testing/_internal/common_utils.py", line 2979, in wrapper
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] method(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/test/export/test_export.py", line 1203, in test_crash_real_tensor_eq
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] ep = export(model, inputs, strict=False)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/__init__.py", line 273, in export
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return _export(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 990, in wrapper
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] ep = fn(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/exported_program.py", line 114, in wrapper
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return fn(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1880, in _export
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] export_artifact = export_func( # type: ignore[operator]
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1683, in _non_strict_export
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] aten_export_artifact = _to_aten_func( # type: ignore[operator]
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 637, in _export_to_aten_ir
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] gm, graph_signature = transform(aot_export_module)(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1611, in _aot_export_non_strict
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 1246, in aot_export_module
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] fx_g, metadata, in_spec, out_spec = _aot_export_function(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 1480, in _aot_export_function
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] fx_g, meta = create_aot_dispatcher_function(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return _create_aot_dispatcher_function(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 623, in _create_aot_dispatcher_function
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] fw_metadata = run_functionalized_fw_and_collect_metadata(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 173, in inner
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] flat_f_outs = f(*flat_f_args)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/utils.py", line 182, in flat_fn
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] tree_out = fn(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 863, in functional_call
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] out = mod(*args[params_len:], **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self._call_impl(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1747, in _call_impl
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return forward_call(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1598, in forward
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] tree_out = self._export_root(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self._call_impl(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1747, in _call_impl
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return forward_call(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/test/export/test_export.py", line 1198, in forward
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return bool(x.eq(0.1).any())
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_export/non_strict_utils.py", line 520, in __torch_function__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return func(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/functional_tensor.py", line 534, in __torch_dispatch__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] outs_unwrapped = func._op_dk(
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/utils/_stats.py", line 21, in wrapper
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return fn(*args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self.dispatch(func, types, args, kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return self._cached_dispatch_impl(func, types, args, kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/fake_tensor.py", line 1348, in _cached_dispatch_impl
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] output = self._dispatch_impl(func, types, args, kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/fake_tensor.py", line 1983, in _dispatch_impl
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] op_impl_out = op_impl(self, func, *args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/fake_impls.py", line 147, in dispatch_to_op_implementations_dict
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/_subclasses/fake_impls.py", line 392, in local_scalar_dense
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] r = fake_mode.shape_env.create_unbacked_symbool()
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] File "/data/users/pianpwk/pytorch/torch/fx/experimental/recording.py", line 262, in wrapper
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679] return retlog(fn(*args, **kwargs))
V0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:4679]
W0910 15:58:55.411000 2040540 torch/fx/experimental/symbolic_shapes.py:5124] failed during evaluate_expr(Eq(u0, 1), hint=None, size_oblivious=False, forcing_spec=False
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] failed while running evaluate_expr(*(Eq(u0, 1), None), **{'fx_node': False})
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] Traceback (most recent call last):
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] File "/data/users/pianpwk/pytorch/torch/fx/experimental/recording.py", line 262, in wrapper
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] return retlog(fn(*args, **kwargs))
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] File "/data/users/pianpwk/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5122, in evaluate_expr
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] return self._evaluate_expr(orig_expr, hint, fx_node, size_oblivious, forcing_spec=forcing_spec)
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] File "/data/users/pianpwk/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5238, in _evaluate_expr
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] raise self._make_data_dependent_error(
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298]
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] Potential framework code culprit (scroll up for full backtrace):
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] File "/data/users/pianpwk/pytorch/torch/_export/non_strict_utils.py", line 520, in __torch_function__
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] return func(*args, **kwargs)
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298]
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] For more information, run with TORCH_LOGS="dynamic"
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298]
E0910 15:58:55.412000 2040540 torch/fx/experimental/recording.py:298] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
W0910 15:58:55.413000 2040540 torch/fx/experimental/symbolic_shapes.py:5554] Unable to find user code corresponding to {u0}
inductor []
E
======================================================================
ERROR: test_crash_real_tensor_eq (__main__.TestExport)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/pianpwk/pytorch/torch/testing/_internal/common_utils.py", line 2979, in wrapper
method(*args, **kwargs)
File "/data/users/pianpwk/pytorch/test/export/test_export.py", line 1203, in test_crash_real_tensor_eq
ep = export(model, inputs, strict=False)
File "/data/users/pianpwk/pytorch/torch/export/__init__.py", line 273, in export
return _export(
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1683, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 637, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1611, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 1246, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 1480, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/pianpwk/pytorch/torch/_functorch/aot_autograd.py", line 623, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 173, in inner
flat_f_outs = f(*flat_f_args)
File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/utils.py", line 182, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 863, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1598, in forward
tree_out = self._export_root(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/pianpwk/pytorch/test/export/test_export.py", line 1198, in forward
return bool(x.eq(0.1).any())
File "/data/users/pianpwk/pytorch/torch/_export/non_strict_utils.py", line 520, in __torch_function__
return func(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/fx/experimental/sym_node.py", line 449, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/data/users/pianpwk/pytorch/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/pianpwk/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5122, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, size_oblivious, forcing_spec=forcing_spec)
File "/data/users/pianpwk/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5238, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
Potential framework code culprit (scroll up for full backtrace):
File "/data/users/pianpwk/pytorch/torch/_export/non_strict_utils.py", line 520, in __torch_function__
return func(*args, **kwargs)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
The following call raised this error:
File "/data/users/pianpwk/pytorch/test/export/test_export.py", line 1198, in forward
return bool(x.eq(0.1).any())
To execute this test, run the following from the base repo dir:
python test/export/test_export.py TestExport.test_crash_real_tensor_eq
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 1.237s
FAILED (errors=1)
```
### Versions
same as https://github.com/pytorch/pytorch/issues/135618
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,518,129,748 | flutter | Flaky issue bot should not override existing priority labels | The flaky issue bot should respect existing non-p0 issue labels put on by a human. The revolution has not yet happened, and humans are still in control. | team-infra,P2,triaged-infra | low | Minor |
2,518,144,137 | neovim | 'incsearch' and 'inccommand' highlight during cmdwin editing | ### Problem
While in the command-line window, the text being searched is not highlighted in the underlying buffer.
### Steps to reproduce
```
nvim --clean
:e <whatever file>
q/a<type something>
```
```
nvim --clean
:e <whatever file>
q?a<type something>
```
```
nvim --clean
:e <whatever file>
q:as/<type something>
```
### Expected behavior
The command-line window should behave the same way that search commands and the command line already do normally.
### Neovim version (nvim -v)
v0.9.5
### Vim (not Nvim) behaves the same?
Yes
### Operating system/version
NixOS 24.05
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
nixpkgs | enhancement,cmdline-mode,inccommand | low | Minor |
2,518,168,782 | pytorch | MPS code contains references to undocumented APIs | ### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/133430 introduced following call:
https://github.com/pytorch/pytorch/blob/e48ee2cf50d86d87ef7c7d0839267dbed4903ebf/aten/src/ATen/native/mps/operations/LinearAlgebra.mm#L602-L605
That is not documented on https://developer.apple.com/documentation/metalperformanceshaders/mpsmatrixmultiplication?language=objc
We should either add it to MPSGraphSequoiaOps.h or use a public API such as https://developer.apple.com/documentation/metalperformanceshaders/mpsmatrixmultiplication/2147848-encodetocommandbuffer
### Versions
nightly, 2.5.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @kulinseth @albanD @DenisVieriu97 @jhavukainen | high priority,module: build,triaged,module: mps | low | Critical |
2,518,208,728 | pytorch | Issue UserWarning for missing ModuleList/ModuleDict in Module classes | ### 🚀 The feature, motivation and pitch
Twice in four months I've lost hours of my life debugging a missing ModuleList. I don't know Pytorch internals, but it seems like it should be pretty cheap/easy/fruitful to detect this particular footgun.
If a Module class has an iterable whose contents are themselves a Module, but that iterable isn't a ModuleList/ModuleDict, a little warning that this is probably a bug would go a long way.
I don't know pytorch internals, but I've been a python/ML focused SWE for a long time and would happily spend some time trying to implement this if given a bit of direction. Perhaps checking this on the first call to forward() would work?
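A lightweight check along these lines could scan a module's instance attributes for plain lists, tuples, or dicts whose elements are submodules. The sketch below is framework-agnostic and the function name is made up for illustration; in PyTorch, `module_base` would be `nn.Module`, and properly registered containers (`nn.ModuleList`/`nn.ModuleDict`) live in separate internal dicts rather than in plain attributes:

```python
def find_unregistered_submodules(obj, module_base):
    """Return attribute names that hold module_base instances inside plain
    lists, tuples, or dicts -- containers that would not be registered."""
    hits = []
    for name, value in vars(obj).items():
        if isinstance(value, dict):
            items = value.values()
        elif isinstance(value, (list, tuple)):
            items = value
        else:
            continue
        if any(isinstance(item, module_base) for item in items):
            hits.append(name)
    return hits
```

A check like this could run once on the first `forward()` call and emit a `UserWarning` naming the offending attribute.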
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: nn,module: error checking,module: molly-guard,triaged | low | Critical |
2,518,230,636 | go | runtime: "traceback did not unwind completely" during preempt during strings_test init | ```
#!watchflakes
default <- pkg == "strings" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737148726636235345)):
runtime: g1: frame.sp=0x40001a7e10 top=0x40001c7fd0
stack=[0x4000188000-0x40001c8000
fatal error: traceback did not unwind completely
runtime stack:
runtime.throw({0x1abac7?, 0x4000187d48?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:1069 +0x38 fp=0x4000187ba0 sp=0x4000187b70 pc=0x7d588
runtime.(*unwinder).finishInternal(0x4000187c08?)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/traceback.go:566 +0x110 fp=0x4000187be0 sp=0x4000187ba0 pc=0x6b3d0
runtime.(*unwinder).next(0x4000187cc0?)
...
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:435 +0xc8 fp=0x400004f710 sp=0x400004f6f0 pc=0x7d6a8
runtime.gcBgMarkWorker(0x400007e000)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1362 +0xdc fp=0x400004f7b0 sp=0x400004f710 pc=0x2713c
runtime.gcBgMarkStartWorkers.gowrap1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1278 +0x28 fp=0x400004f7d0 sp=0x400004f7b0 pc=0x27028
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_arm64.s:1223 +0x4 fp=0x400004f7d0 sp=0x400004f7d0 pc=0x84314
created by runtime.gcBgMarkStartWorkers in goroutine 1
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1278 +0x140
FAIL strings 1.150s
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,OS-NetBSD,NeedsInvestigation,compiler/runtime | low | Critical |
2,518,282,518 | vscode | Chat Participants don't work in Inline-chat | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Not sure if this is a bug or a feature that hasn't been created yet. This recent discussion post describes this as well -
https://github.com/orgs/community/discussions/138334
I've taken a look at the history, which seems to show that it's fixed, but it still doesn't work as of the most recent version of vscode
- [possible pr](https://github.com/microsoft/vscode/pull/209664)
- [closed issue](https://github.com/microsoft/vscode/issues/208898)
- [another closed issue](https://github.com/microsoft/vscode/issues/210804)
hard to piece together exactly what's going on, but would love some clarity on this. thanks so much!
| feature-request,api,api-proposal,inline-chat | low | Critical |
2,518,333,965 | godot | const local variables show as null in the debugger | ### Tested versions
Reproducible in: 4.4.dev2, 4.3 stable, 4.2.2 stable, 4.1.4 stable, 4.0.4 stable
Not reproducible in 3.6 stable because of error: `Error parsing expression, misplaced: const`
### System information
Godot v4.4.dev2.mono - Linux Mint 21.3 (Virginia) - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6800 (RADV NAVI21) - AMD Ryzen 5 7600 6-Core Processor (12 Threads)
### Issue description
const local variables display as null in the debugger. I expected to see the actual value of the const, like you can see for consts that are declared as fields in the script.
### Steps to reproduce
This issue can be easily encountered with the following code:
```gdscript
extends Node
const my_field_const := 100
func _ready() -> void:
const my_const := 10
print("my_const is: " + str(my_const))
print("The above statement printed 10, but my_const shows as null in the debugger!")
```
Placing a breakpoint on one of the print lines and looking in the debugger will show the `my_field_const` const displayed correctly under the "Constants" heading in the inspector window, and the local `my_const` const displayed as null under the "Locals" heading in the stack trace window.
### Minimal reproduction project (MRP)
[const-are-null-in-debugger.zip](https://github.com/user-attachments/files/16957264/const-are-null-in-debugger.zip)
| bug,topic:gdscript,topic:editor | low | Critical |
2,518,366,176 | kubernetes | Handling WebSocket requests through the API server, the server received two connections | ### What happened?

The first connection comes from h.UpgradeTransport.WrapRequest(req);
the second connection comes from dial(updatedReq, h.UpgradeTransport).
### What did you expect to happen?
only one conn
### How can we reproduce it (as minimally and precisely as possible)?
...
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/needs-information,triage/accepted | medium | Major |
2,518,377,211 | transformers | The examples in the examples directory are mostly for fine-tuning pre-trained models? How to train from scratch | ### Model description
no
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | New model | low | Minor |
2,518,383,052 | ollama | Llama 3.1 70b 128k context not fitting 96Gb | ### What is the issue?
Not only does it not fit in 96Gb (offloading only 10 layers out of 81), but processing an actual ~128k request crashes with `CUDA error: out of memory` on 160Gb (with all layers offloaded)
As mentioned here https://github.com/ollama/ollama/issues/6279#issuecomment-2342546437_
this is obviously a bug
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10 | bug,nvidia,memory | low | Critical |
2,518,398,182 | pytorch | Setting a `float` value to the `in_channels`, `out_channels` and `kernel_size` arguments of `nn.Conv1d()` gets indirect errors, and `in_channels` and `kernel_size` with non-bools | ### 🐛 Describe the bug
Setting a `float` value to `in_channels`, `out_channels` and `kernel_size` argument of [nn.Conv1d()](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) gets the indirect errors as shown below:
```python
import torch
from torch import nn
# ↓↓
conv1d = nn.Conv1d(in_channels=1., out_channels=3, kernel_size=(True,)) # Error
```
> TypeError: empty(): argument 'size' failed to unpack the object at pos 2 with error "type must be tuple of ints,but got float"
```python
import torch
from torch import nn
# ↓↓
conv1d = nn.Conv1d(in_channels=1, out_channels=3., kernel_size=(True,)) # Error
```
```
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (tuple of ints size, *, torch.memory_format memory_format = None, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
```
```python
import torch
from torch import nn
# ↓↓
conv1d = nn.Conv1d(in_channels=1, out_channels=3, kernel_size=1.) # Error
```
> TypeError: empty(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got complex"
So, the errors should be something like those directly as shown below:
> `in_channels` must be `int`.
> `out_channels` must be `int`.
> `kernel_size` must be `int` or `tuple` or `list` of `int`.
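As a sketch of the kind of up-front validation being requested (the function name and exact messages are illustrative, not PyTorch API), one could check the argument types before the tensor allocation ever runs. Note that `bool` is a subclass of `int` in Python, so this sketch rejects bools explicitly; whether PyTorch should keep accepting `kernel_size=(True,)` is a separate design choice:

```python
def check_conv1d_args(in_channels, out_channels, kernel_size):
    """Raise direct TypeErrors for bad Conv1d constructor arguments."""
    def is_int(v):
        # bool subclasses int, so exclude it on purpose
        return isinstance(v, int) and not isinstance(v, bool)

    if not is_int(in_channels):
        raise TypeError("in_channels must be int")
    if not is_int(out_channels):
        raise TypeError("out_channels must be int")
    ks = (kernel_size,) if is_int(kernel_size) else kernel_size
    if not (isinstance(ks, (tuple, list)) and len(ks) > 0
            and all(is_int(k) for k in ks)):
        raise TypeError("kernel_size must be int or tuple/list of int")
```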
### Versions
```python
import torch
torch.__version__ # 2.4.0+cu121
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: nn,module: error checking,triaged | low | Critical |
2,518,489,689 | node | Suggestion with cluster module when use "maxConnection" in its worker process | ### What is the problem this feature will solve?
When the node service is under a very high load, multiple connections are processed at the same time in one worker ( we use the cluster module currently in our project ). We set the "maxConnections" to limit the connections of the worker. But we found that if a new request reach the limit of the "maxConnections", the request will retry on other workers. I think can we have an option, if a new request reach the limit, we can just drop the request instead of retrying the request on other workers ? Because as the system is under a very high load, the other workers may also be very busy at this moment. Here is a example on "**v22.7.0**".
```javascript
const cluster = require('cluster');
const http = require('http');
const process = require('process');
if (cluster.isPrimary) {
console.log(`Master ${process.pid} is running.\n`);
for (let i = 0; i < 1; i++) {
cluster.fork();
}
} else {
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('hello world\n');
});
server.maxConnections = 0;
server.listen(8000, () => {
console.log(`Worker ${process.pid} started`);
});
}
```
### What is the feature you are proposing to solve the problem?
For example, add An option "--maxconnections-drop-request" to the node "Command-line options" while on start up.
### What alternatives have you considered?
_No response_ | cluster,feature request | low | Minor |
2,518,508,145 | opencv | [TFLITE IMPORTER] STRIDED_SLICE | ### Describe the feature and motivation
I am trying to load a yolov3 model in tflite format. When calling cv::dnn::readNetFromTFLite() I get the error:
```
[ERROR:0@1.988] global tflite_importer.cpp:252 populateNet DNN/TFLite: Problem during import of operator [STRIDED_SLICE]:(model_12/tf.strided_slice/StridedSlice) (192/208). Exception: OpenCV(4.10.0-dev) /home/pf/Downloads/repos/opencv-4.x/modules/dnn/src/tflite/tflite_importer.cpp:246: error: (-213:The function/feature is not implemented) Unsupported operator type STRIDED_SLICE in function 'populateNet'
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.10.0-dev) /home/pf/Downloads/repos/opencv-4.x/modules/dnn/src/tflite/tflite_importer.cpp:246: error: (-213:The function/feature is not implemented) Unsupported operator type STRIDED_SLICE in function 'populateNet'
```
### Additional context
I am using OpenCV master. Unfortunately I can't share the model weights as they are over 25MB.
| feature,category: dnn | low | Critical |
2,518,532,870 | terminal | [Windows Terminal]: Selected state of controls present in 'Find' is not visible properly in dark mode. | ### Windows Terminal version
1.22.2362.0
### Windows build number
27695.1000
### Other Software
**Test Environment:**
OS: Windows 11 Version Dev (OS Build 27695.1000)
App: Windows Terminal Preview
### Steps to reproduce
**Pre-requisite:**
Settings> Personalization> Colors> Choose your Mode> Dark Mode.
**Repro steps:**
1. Open the 'windows terminal' application.
2. Follow the above-mentioned pre-requisite step.
3. Now expand the dropdown control.
4. Navigate to any application like PowerShell and open it.
5. PowerShell editor window will open.
6. Now press 'Ctrl + Shift + F' to open 'Find'.
7. Now select some controls in find.
8. Observe the issue.
**User experience:**
The low visibility of selected controls in dark mode negatively affects users, especially those with visual impairments or users who rely on dark mode for visual comfort. This issue impairs usability by making it harder to confirm selected options.
**Attachment:**
[Usable - Selected state of controls present in find is not properly visible in dark mode.zip](https://github.com/user-attachments/files/16958299/Usable.-.Selected.state.of.controls.present.in.find.is.not.properly.visible.in.dark.mode.zip)
### Expected Behavior
The selected state of the controls within the "Find" control should be clearly visible, even in dark mode. Proper contrast and visual indicators should ensure that users can easily identify which options are selected.
### Actual Behavior
In dark mode, the selected state of controls within the "Find" control is not clearly visible. The lack of proper highlighting or contrast makes it difficult to distinguish which controls are selected. | Help Wanted,Issue-Bug,Area-Accessibility,Product-Terminal,HCL-E+D,A11yUsable,HCL-WindowsTerminal | low | Minor |
2,518,533,130 | opencv | [ONNX IMPORTER] Concat | ### Describe the feature and motivation
I am trying to load a yolov3 model in ONNX format. When calling cv::dnn::readNetFromONNX() I get the error:
```
[ERROR:0@0.322] global onnx_importer.cpp:1036 handleNode DNN/ONNX: ERROR during processing node with 2 inputs and 1 outputs: [Concat]:(onnx_node!/fpn/Concat) from domain='ai.onnx'
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.10.0-dev) /home/pf/Downloads/repos/opencv-4.x/modules/dnn/src/onnx/onnx_importer.cpp:1058: error: (-2:Unspecified error) in function 'handleNode'
> Node [Concat@ai.onnx]:(onnx_node!/fpn/Concat) parse error: OpenCV(4.10.0-dev) /home/pf/Downloads/repos/opencv-4.x/modules/dnn/src/layers/concat_layer.cpp:108: error: (-201:Incorrect size of input array) Inconsistent shape for ConcatLayer in function 'getMemoryShapes'
>
```
The model works great with onnxruntime and supports dynamic shapes provided both height and width are divisible by 32 (max stride of yolov3 model).
I am using opencv master
### Additional context
_No response_ | feature,category: dnn (onnx) | low | Critical |
2,518,595,024 | PowerToys | Still program not detected by workspace | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
Launch Family Tree Heritage. Capture with Workspaces. Observe not captured.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
Program should be detected after bug fix
### Other Software
Family Tree Heritage Gold
FTH Version 16: Build 16.0.12
Powered by Ancestral Quest
Copyright (c) 1994-2023 Incline Software, LC
Published by Individual Software, Inc.
Copyright (c) 2002-2023 | Issue-Bug,Needs-Triage,Product-Workspaces | low | Critical |
2,518,664,752 | PowerToys | Error keyboard manager | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
The reassigned keys do not work after restarting the system. It is necessary to open the Keyboard Manager settings section and toggle the enable function off and on. There has already been a similar problem; it was fixed and now it has come back.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | medium | Critical |
2,518,668,281 | tauri | [bug] Tauri 2.0.1-rc10 on Windows - Refused to execute inline script in isolation mode | ### Describe the bug
Running the latest RC of Tauri version 2 on Windows with isolation mode active results in an error like:

> Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' 'nonce-16386336672808645265' 'sha256-Ptr6AdEVRnAzjWNLKfOsbSdF0wBZquJ8JC9yowI7oUU=' 'sha256-BimfdWigiGKqCgqYSKsBLTLCR4WMW0TQcnrCL0CsVCM=' 'sha256-Ptr6AdEVRnAzjWNLKfOsbSdF0wBZquJ8JC9yowI7oUU='". Either the 'unsafe-inline' keyword, a hash ('sha256-zTUpprM6DaX+a1WejnBsJRGhqeeHrm1DViGQwA5rHK8='), or a nonce ('nonce-...') is required to enable inline execution.

Somehow this only happens in isolation mode.
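For context on the hashes in that directive: a CSP `script-src` hash source is the base64-encoded SHA-256 digest of the exact inline script body. A minimal sketch of how such a hash is derived (the script string here is only an illustration):

```python
import base64
import hashlib

def csp_script_hash(inline_script: str) -> str:
    """Return the 'sha256-...' CSP source expression for an inline script body."""
    digest = hashlib.sha256(inline_script.encode("utf-8")).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

print(csp_script_hash("console.log('hello')"))
```

If the script injected in isolation mode does not byte-for-byte match one of the hashes listed in the policy, the browser refuses to execute it, which is consistent with the error above.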
### Reproduction
https://github.com/inzanez/csp-issue
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 128.0.2739.67
✔ MSVC:
- Visual Studio Build Tools 2022
- Visual Studio Professional 2017
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.17.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.0-rc.11
- tauri-build 🦀: 2.0.0-rc.10
- wry 🦀: 0.43.1
- tao 🦀: 0.30.0
- tauri-cli 🦀: 1.6.1
- @tauri-apps/api : 2.0.0-rc.4
- @tauri-apps/cli : 2.0.0-rc.13
[-] Plugins
- tauri-plugin-fs 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-fs : 2.0.0-rc.2
- tauri-plugin-shell 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-shell : 2.0.0-rc.1
- tauri-plugin-dialog 🦀: 2.0.0-rc.5
- @tauri-apps/plugin-dialog : 2.0.0-rc.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,518,779,454 | flutter | Ability to manually resize columns by dragging separator | ### Use case
On the web, we often have users would ask if they could resize the columns of our DataTables like in Google Sheets.
There is a workaround using DataTable described here: https://www.technicalfeeder.com/2023/01/flutter-resize-table-column-by-dragging/
There are also packages that provide their own implementations of tables:
- (commercial) https://pub.dev/packages/syncfusion_flutter_datagrid
- (MIT) https://pub.dev/packages/pluto_grid
### Proposal
It would be great if we could provide a callback to the DataColumn `onResize` which would have a signature similar to `void Function(double width)`.
If this callback is provided, then the user would be able to resize this column by dragging the border on the right of the column. The callback would then be fired with the new width of the column so that it can be saved for when the user comes back to the page. | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
2,518,874,364 | pytorch | torch.cat and torch.stack does not invoke `__torch_function__` API | ### 🐛 Describe the bug
```python
# DanLing
# Copyright (C) 2022-Present DanLing
# This program is free software: you can redistribute it and/or modify
# it under the terms of the following licenses:
# - The Unlicense
# - GNU Affero General Public License v3.0 or later
# - GNU General Public License v2.0 or later
# - BSD 4-Clause "Original" or "Old" License
# - MIT License
# - Apache License 2.0
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the LICENSE file for more details.
# pylint: disable=protected-access
from __future__ import annotations
from typing import Iterable, Sequence, SupportsFloat
import torch
from torch import Tensor
from danling.tensors.functional import mask_tensor, pad_tensor
from danling.tensors.utils import TorchFuncRegistry
from danling.utils import method_cache
class NestedTensor:
__storage: Sequence[Tensor]
batch_first: bool = True
padding_value: SupportsFloat = 0.0
mask_value: bool = False
def __init__(
self,
*tensors: Iterable[Tensor],
batch_first: bool = True,
padding_value: SupportsFloat = 0.0,
mask_value: bool = False,
) -> None:
if len(tensors) == 1 and isinstance(tensors, Sequence):
tensors = tensors[0] # type: ignore
self._storage = tensors
self.batch_first = batch_first
self.padding_value = padding_value
self.mask_value = mask_value
@property
def _storage(self):
return self.__storage
@_storage.setter
def _storage(self, tensors: Sequence):
if not isinstance(tensors, Iterable):
raise ValueError(f"tensors must be an Iterable, bug got {type(tensors)}.")
tensors = list(tensors)
if len(tensors) == 0:
raise ValueError("tensors must be a non-empty Iterable.")
if not isinstance(tensors[0], Tensor):
tensors = [torch.tensor(t) for t in tensors]
self.__storage = tensors
def storage(self):
return self._storage
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
print("meow")
if kwargs is None:
kwargs = {}
if func not in NestedTensorFunc or not all(issubclass(t, (torch.Tensor, NestedTensor)) for t in types):
args = [a.tensor if hasattr(a, "tensor") else a for a in args]
return func(*args, **kwargs)
return NestedTensorFunc[func](*args, **kwargs)
@property
def tensor(self) -> Tensor:
return self._tensor(tuple(self._storage), self.batch_first, self.padding_value)
@property
def mask(self) -> Tensor:
return self._mask(tuple(self._storage), self.batch_first, self.mask_value)
def size(self, dim: int | None = None) -> torch.Size | int:
return self._size(tuple(self._storage), dim, self.batch_first)
@method_cache(maxsize=1)
def _tensor(self, storage: tuple, batch_first: bool, padding_value: SupportsFloat) -> Tensor:
if storage[0].dim() == 0:
return torch.stack(storage, dim=0)
return pad_tensor(storage, size=self.size(), batch_first=batch_first, padding_value=float(padding_value))
@method_cache(maxsize=1)
def _mask(self, storage: tuple, batch_first: bool, mask_value: bool) -> Tensor:
if storage[0].dim() == 0:
return torch.full((len(storage),), not mask_value, dtype=torch.bool, device=self.device)
size = self.size()
# ignore channel dimension
if storage[0].dim() > 1 and len({t.size(-1) for t in storage}) == 1:
size = size[:-1] # type: ignore
return mask_tensor(storage, size=size, batch_first=batch_first, mask_value=mask_value)
@method_cache(maxsize=1)
def _size(self, storage, dim: int | None = None, batch_first: bool = True) -> torch.Size | int:
if dim is not None:
if dim == 0:
return len(storage)
return max(t.size(dim - 1) for t in storage)
if max(t.dim() for t in storage) == 0:
return torch.Size((len(storage),))
ndim = max(t.dim() for t in storage)
size = [max(t.shape[i] if i < len(t.shape) else 0 for t in storage) for i in range(ndim)]
size.insert(0 if batch_first else 1, len(storage))
return torch.Size(size)
def __repr__(self):
return self.__class__.__name__ + repr(self.tensor)[len(self.tensor.__class__.__name__) :] # noqa: E203
NestedTensorFunc = TorchFuncRegistry()
@NestedTensorFunc.implement(torch.isin)
def isin(elements, test_elements, *, assume_unique: bool = False, invert: bool = False):
if isinstance(elements, NestedTensor):
elements = elements.tensor
if isinstance(test_elements, NestedTensor):
test_elements = test_elements.tensor
return torch.isin(elements, test_elements, assume_unique=assume_unique, invert=invert)
@NestedTensorFunc.implement(torch.log)
def log(tensor):
return NestedTensor([torch.log(t) for t in tensor._storage])
@NestedTensorFunc.implement(torch.mean)
def mean(
input,
dim: int | None = None,
keepdim: bool = False,
*,
dtype: torch.dtype | None = None,
):
return input.mean(dim=dim, keepdim=keepdim, dtype=dtype)
@NestedTensorFunc.implement(torch.sqrt)
def sqrt(tensor):
return NestedTensor([torch.sqrt(t) for t in tensor._storage])
@NestedTensorFunc.implement(torch.cat)
def cat(tensors, dim: int = 0):
print("abc")
if dim != 0:
raise NotImplementedError(f"NestedTensor only supports cat when dim=0, but got {dim}")
return NestedTensor([t for tensor in tensors for t in tensor._storage], tensors[0]._state)
@NestedTensorFunc.implement(torch.stack)
def stack(*args, **kwargs):
print("xyz")
raise NotImplementedError("NestedTensor does not support stack as of now")
```
```python
In [0]: nt = NestedTensor([[1, 2, 3], [4,5]])
In [1]: nt
Out[1]:
NestedTensor([[1, 2, 3],
[4, 5, 0]])
In [2]: torch.isin(nt, 2)
meow
Out[2]:
tensor([[False, True, False],
[False, False, False]])
In [3]: torch.cat(nt, 0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 torch.cat(nt, 0)
TypeError: cat() received an invalid combination of arguments - got (NestedTensor, int), but expected one of:
* (tuple of Tensors tensors, int dim = 0, *, Tensor out = None)
* (tuple of Tensors tensors, name dim, *, Tensor out = None)
```
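The failure can be reduced to a minimal sketch (an illustration, not the reporter's class): an arbitrary object defining `__torch_function__` is dispatched to when it appears *inside* the sequence argument of `torch.cat`, but passing it *as* the sequence fails in the argument parser before dispatch is consulted:

```python
import torch

class Interceptor:
    # Arbitrary (non-Tensor) classes may implement the __torch_function__ protocol.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        return "intercepted"

# Dispatch fires when the object is an element of the expected tensor sequence.
assert torch.cat([Interceptor()]) == "intercepted"

# Passing the object itself where a tuple of Tensors is expected raises
# TypeError in the argument parser, before __torch_function__ is consulted.
try:
    torch.cat(Interceptor(), 0)
except TypeError:
    print("TypeError before dispatch")
```

This matches the traceback above: `torch.cat(nt, 0)` fails with an invalid-combination `TypeError` even though `nt` defines `__torch_function__`.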
### Versions
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:51:49) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torcheval==0.0.7
[pip3] torchmetrics==1.4.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.18.0
[pip3] torchvision==0.19.1
[conda] numpy 1.26.4 py310h3b2db8e_0 defaults
[conda] numpy-base 1.26.4 py310ha9811e2_0 defaults
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchaudio 2.4.1 pypi_0 pypi
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @hameerabbasi @rgommers @ezyang @albanD | triaged,module: nestedtensor,module: __torch_function__,tensor subclass | low | Critical |
2,518,936,110 | vscode | Azure DevOps - Enable seamless Azure DevOps repository cloning in VSCode similar to GitHub |
Hi,
I’m currently working on a remote Linux server via SSH using VSCode, and I’ve noticed a significant difference in the way VSCode handles cloning repositories from GitHub compared to Azure DevOps.
For GitHub repositories, VSCode handles the OAuth credential management seamlessly. When I clone a GitHub repository, it only asks for my credentials the first time, and I believe it’s VSCode managing this process, making it extremely convenient.
However, when I try to clone a repository from Azure DevOps, this seamless integration is missing. I have to resort to using tools like Git Credential Manager, configuring GPG keys, or setting up other complex solutions. While I have been able to get these methods working, they are cumbersome and add unnecessary complexity to the workflow.
I also tried the Azure Repos extension, but it’s not a good solution. It seems to clone the repository in an isolated environment where I can only edit code — no access to the console, no debugging capabilities, nothing beyond basic code editing. This severely limits its usefulness for any real development work.
Is there any way that VSCode can handle cloning Azure DevOps repositories in a similar way to GitHub? It would be fantastic if the OAuth process for Azure DevOps could be integrated into VSCode just like it is for GitHub, allowing for a smoother experience.
Thanks for considering this feature request!
| feature-request,git | low | Critical |
2,518,975,437 | ollama | Support Mistral's new visual model: Pixtral-12b-240910 | Mistral AI just dropped Pixtral, their 12b model with vision support.
- https://github.com/mistralai/mistral-common/releases/tag/v1.4.0
- https://www.reddit.com/r/LocalLLaMA/comments/1fe3x1z/mistral_dropping_a_new_magnet_link/ | model request | high | Critical |
2,519,005,838 | flutter | [flutter_markdown][Localization issues] padding is EdgeInsets instead of EdgeInsetsGeometry | ### Steps to reproduce
Package doesn't support `EdgeInsetsDirectional` passed for padding.
### Expected results
`EdgeInsetsDirectional` should be accepted for padding
### Actual results
Package doesn't support `EdgeInsetsDirectional` passed for padding.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| a: internationalization,package,team-ecosystem,P2,p: flutter_markdown,triaged-ecosystem | low | Minor |
2,519,050,139 | flutter | [web] Icon tree shaking doesn't work well on web | Let's say we have the following example
```
% flutter create foo && cd foo
```
and modify `lib/main.dart`
```diff
 import 'package:flutter/material.dart';

+ final IconData unusedGlobalVariable =
+     IconData(0xe411, fontFamily: 'MaterialIcons');

 void main() {
   runApp(const MyApp());
 }
 ...
```
This added `unusedGlobalVariable` isn't used anywhere. So when compiling the program it should be tree shaken away. Any actual uses of `IconData` are in fact constant uses.
=> Tree shaking of icon fonts should therefore work.
This seems to work as expected when building a native app
```
% flutter build apk --release
Font asset "MaterialIcons-Regular.otf" was tree-shaken, reducing it from 1645184 to 1384 bytes (99.9% reduction). Tree-shaking can be disabled by providing the --no-tree-shake-icons flag when building your app.
Running Gradle task 'assembleRelease'... 46.1s
✓ Built build/app/outputs/flutter-apk/app-release.apk (18.9MB)
```
But when building a web build we get an error:
```
flutter build web --release
This application cannot tree shake icons fonts. It has non-constant instances of IconData at the following locations:
- file:///.../lib/main.dart:4:5
Target web_release_bundle failed: Error: Avoid non-constant invocations of IconData or try to build again with --no-tree-shake-icons.
#0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 IconTreeShaker._findConstants (package:flutter_tools/src/build_system/targets/icon_tree_shaker.dart:321:7)
<asynchronous suspension>
#2 IconTreeShaker._getIconData (package:flutter_tools/src/build_system/targets/icon_tree_shaker.dart:110:45)
<asynchronous suspension>
#3 IconTreeShaker.subsetFont (package:flutter_tools/src/build_system/targets/icon_tree_shaker.dart:179:5)
<asynchronous suspension>
#4 copyAssets.<anonymous closure> (package:flutter_tools/src/build_system/targets/assets.dart:163:25)
<asynchronous suspension>
#5 Future.wait.<anonymous closure> (dart:async/future.dart:520:21)
<asynchronous suspension>
#6 copyAssets (package:flutter_tools/src/build_system/targets/assets.dart:129:3)
<asynchronous suspension>
#7 WebReleaseBundle.build (package:flutter_tools/src/build_system/targets/web.dart:475:29)
<asynchronous suspension>
...
#20 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9)
<asynchronous suspension>
#21 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#22 main (package:flutter_tools/executable.dart:94:3)
<asynchronous suspension>
Compiling lib/main.dart for the Web... 30.2s
Error: Failed to compile application for the Web.
``` | tool,platform-web,a: build,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.26 | low | Critical |
2,519,124,038 | PowerToys | PowerToys Workspaces: run as | ### Description of the new feature / enhancement
It would be great to add the possibility to start apps as another user, or to be able to use a custom user command line to start apps, e.g.:
C:\Windows\System32\runas.exe /savecred /user:toto "cmd /c start \"\" mmc %SystemRoot%\system32\dsa.msc"
### Scenario when this would be used?
This would be very practical to start multiple apps in a work environment where I have to use multiple users for different apps.
### Supporting information
_No response_ | Needs-Spec,Needs-Triage,Product-Workspaces | low | Minor |
2,519,130,460 | pytorch | `randint(max)` causes a graph break, but not `rand().mul(max).floor().to(torch.long)` (on CPU) | ### 🐛 Describe the bug
The following code causes a graph break (on CPU):
```python
def policy(obs, epsilon):
    q_values = q_network_detach(obs)
    actions = torch.argmax(q_values, dim=1)
    actions_random = torch.rand(actions.shape, device=actions.device).mul(n_act).floor().to(torch.long)
    # actions_random = torch.randint_like(actions, n_act)
    use_policy = torch.rand(actions.shape, device=actions.device).gt(epsilon)
    return torch.where(use_policy, actions, actions_random)
```
but not the same function with the first version of `actions_random` uncommented (which supposedly does the same job)
The graph break reads:
```
W0911 10:15:45.971000 28832 torch/_dynamo/exc.py:284] [0/0] Backend compiler failed with a fake tensor exception at
W0911 10:15:45.971000 28832 torch/_dynamo/exc.py:284] [0/0] File "/Users/vmoens/Repos/rl/leanrl/leanrl/dqn_torchcompile.py", line 208, in policy
W0911 10:15:45.971000 28832 torch/_dynamo/exc.py:284] [0/0] return torch.where(use_policy, actions, actions_random)
W0911 10:15:45.971000 28832 torch/_dynamo/exc.py:284] [0/0] Adding a graph break.
[...]
File "/Users/vmoens/venv/rl/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 147, in dispatch_to_op_implementations_dict
return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vmoens/venv/rl/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 386, in local_scalar_dense
raise DataDependentOutputException(func)
torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default
[...]
torch._dynamo.exc.Unsupported: Backend compiler failed with a fake tensor exception at
File "/Users/vmoens/Repos/rl/leanrl/leanrl/dqn_torchcompile.py", line 208, in policy
return torch.where(use_policy, actions, actions_random)
Adding a graph break.
```
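Outside of `torch.compile`, the two samplers are interchangeable for this purpose: both draw integers from the same support with the same dtype. A quick eager-mode sanity check (a sketch with made-up shapes, not from the report):

```python
import torch

n_act = 5
shape = (1000,)

# The workaround sampler: uniform floats scaled, floored, and cast to long.
a = torch.rand(shape).mul(n_act).floor().to(torch.long)
# The direct sampler that triggers the graph break under compile.
b = torch.randint(n_act, shape)

# Both produce int64 values in [0, n_act).
assert a.dtype == b.dtype == torch.long
assert 0 <= int(a.min()) and int(a.max()) < n_act
assert 0 <= int(b.min()) and int(b.max()) < n_act
```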
### Versions
nightly
cc @ezyang @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 @bdhirsh | triaged,oncall: pt2,module: fakeTensor,module: dynamo,module: pt2-dispatcher | low | Critical |
2,519,250,325 | vscode | Editor Rulers do not account for Tab display size | Summary:
When working in Visual Studio Code (VS Code), the editor rulers are visually misaligned with code indentation when using tabs. The rulers assume that every tab is displayed as a fixed number of spaces, but this does not dynamically account for the display width set for tabs. As a result, when tabs are displayed at a size different from the default, the rulers become inaccurate.
Expected Behavior:
The rulers should adjust according to the tab display size. When the tab width is set to 2 spaces, rulers should account for each tab character as occupying 2 columns in the editor, ensuring that the visual representation of the code matches the ruler guide.
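The expected mapping is essentially tab stops: each tab advances the column to the next multiple of the configured tab size. A hypothetical sketch of the column computation a ruler would need (an illustration, not VS Code's actual implementation):

```python
def visual_column(text: str, tab_size: int) -> int:
    """Visual column after rendering `text` with the given tab display size."""
    col = 0
    for ch in text:
        if ch == "\t":
            col += tab_size - (col % tab_size)  # advance to the next tab stop
        else:
            col += 1
    return col

# Two leading tabs occupy 4 columns at tab size 2, but 8 at tab size 4.
assert visual_column("\t\t", tab_size=2) == 4
assert visual_column("\t\t", tab_size=4) == 8
```

A ruler drawn from a fixed per-tab width will drift from this computation whenever the configured tab size differs from the assumed one.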
Actual Behavior:
The rulers treat each tab as a fixed number of spaces, regardless of the user-configured tab width. This leads to visual misalignment between the ruler and the code when working with non-standard tab sizes. | bug,editor-contrib | low | Minor |
2,519,266,478 | flutter | [web] Dual compile web apps should be considered two separate apps with separate assets/resources | Generally flutter apps can have backend-specific code in them: They do something else on web vs non-web, use different plugins on iOS vs Android, etc. That also means the resources needed will vary depending on which backend, operating system, .... the application is built for.
When compiling to the web, flutter now makes a dual compile of the app: A compilation with dart2js and a compilation with dart2wasm and choose at runtime which of the two to load and use.
=> App can decide to do something different when compiled to wasm vs non-wasm
=> The dart2js app and dart2wasm app may have different resources.
A very trivial example that showcases this problem is here:
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

const isWasm = !identical(1, 1.0);

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: Scaffold(
        body: Center(
          child: Icon(
            isWasm ? Icons.fastfood : Icons.favorite,
            color: Colors.blueGrey,
            size: 30.0,
          ),
        ),
      ),
    );
  }
}
```
When serving the `build/web` directory *without* CORS http headers the dart2js app launches and it shows a nice heart icon.
When serving the `build/web` directory *with* CORS headers, the dart2wasm app launches and it fails to show an icon.
This probably comes from the fact that `build/web` doesn't have separate assets/resources for the dart2js and dart2wasm app - instead they use the same resources (which are determined based on the dart2wasm build and e.g. its tree-shaken icons).
Even though differences may be small in most cases, conceptually the dart2js and dart2wasm apps can be completely different and can have very different resources they need.
=> The flutter web dual compile should treat them as separate applications with their own resources/assets.
/cc @yjbanov @eyebrowsoffire | platform-web,a: build,has reproducible steps,P3,e: wasm,team-web,triaged-web,found in release: 3.24,found in release: 3.26 | low | Minor |
2,519,270,577 | yt-dlp | Broken site: https://cu.ntv.co.jp/ | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Europe
### Provide a description that is worded well enough to be understood
It is listed as a supported site but is not working.
Examples:
https://cu.ntv.co.jp/eGG_20240820/
https://cu.ntv.co.jp/sekaiitadakigourmet_20240828/
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
>yt-dlp.exe -vU -F https://cu.ntv.co.jp/sekaiitadakigourmet_20240828/
[debug] Command-line config: ['-vU', '-F', 'https://cu.ntv.co.jp/sekaiitadakigourmet_20240828/']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.09.08.232909 from yt-dlp/yt-dlp-nightly-builds [d1c4d88b2] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1832 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.09.08.232909 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.09.08.232909 from yt-dlp/yt-dlp-nightly-builds)
[cu.ntv.co.jp] Extracting URL: https://cu.ntv.co.jp/sekaiitadakigourmet_20240828/
[cu.ntv.co.jp] sekaiitadakigourmet_20240828: Downloading webpage
ERROR: [cu.ntv.co.jp] sekaiitadakigourmet_20240828: Unable to extract __NUXT__; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\ntvcojp.py", line 35, in _real_extract
File "yt_dlp\extractor\common.py", line 1768, in _search_nuxt_data
File "yt_dlp\extractor\common.py", line 1333, in _search_regex
```
| site-bug,triage | low | Critical |
2,519,295,603 | pytorch | [inductor][cpu]GPTNeoForSequenceClassification AMP single/multiple thread static/dynamic shape default/cpp accuracy failure | ### 🐛 Describe the bug
GPTNeoForSequenceClassification AMP single/multiple thread static/dynamic shape default/cpp accuracy failure in 2024-09-07 nightly release
```
loading model: 0it [00:21, ?it/s]cpu eval GPTNeoForSequenceClassification
WARNING:common:fp64 golden ref were not generated for GPTNeoForSequenceClassification. Setting accuracy check to cosine
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
W0910 22:42:04.897982 129817 torch/_dynamo/utils.py:1719] Similarity score=0.9893215894699097
E0910 22:42:04.898772 129817 torch/_dynamo/utils.py:1670] Accuracy failed for key name logits
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
fail_accuracy
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,GPTNeoForSequenceClassification,1,fail_accuracy,1040,3,2,1,0,0,2
```
### Versions
</table><p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>23512dbe</td>
<td>main</td>
<td>23512dbe</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>3bebc09be9845c0779f190489e8d4caa9e2653c8</td>
<td>main</td>
<td>c140fa1426603322a5a69ef91300f13489db5970</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.5.0a0+97ed7b3</td>
<td>main</td>
<td>2.5.0a0+97ed7b3</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference accuracy huggingface GPTNeoForSequenceClassification amp first static cpp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/52c7c89ea486f8124b740ad6d2ee055812e913ab
[huggingface-GPTNeoForSequenceClassification-inference-amp-dynamic-default-single-accuracy-crash_guilty_commit.log](https://github.com/user-attachments/files/16961940/huggingface-GPTNeoForSequenceClassification-inference-amp-dynamic-default-single-accuracy-crash_guilty_commit.log)
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @WeizhuoZhang-intel @chuanqi129 | oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,519,325,624 | rust | Declarative macros: warn on unreachable macro rule | ### Code
```Rust
#[macro_export]
macro_rules! example {
    ($lit:expr) => {{
        {}
    }};
    ($lit:literal) => {{
        unreachable!()
    }};
}

fn main() {
    example!(1)
}
```
### Current output
Builds without any warnings, and the resulting binary runs without a panic.
### Desired output
A warning telling me that the second macro rule is unreachable, because `expr` matches everything `literal` matches, and more.
### Rationale and extra context
I think it's probably impossible to solve this for the general case, but I think it's a very helpful warning for users new to declarative macros.
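For illustration, ordering the narrower rule first makes both arms reachable, since `macro_rules!` tries rules top to bottom (a sketch, not taken from the report):

```rust
macro_rules! example2 {
    // Listed first, the narrower `literal` fragment gets a chance to match.
    ($l:literal) => { "literal rule" };
    ($e:expr) => { "expr rule" };
}

fn main() {
    // A bare literal matches the first rule.
    assert_eq!(example2!(1), "literal rule");
    // A compound expression is not a literal, so it falls through.
    assert_eq!(example2!(1 + 1), "expr rule");
    println!("ok");
}
```

The lint would only need to fire in the reverse ordering, where the `literal` arm can never be reached.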
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.83.0-nightly (0ee7cb5e3 2024-09-10)
binary: rustc
commit-hash: 0ee7cb5e3633502d9a90a85c3c367eccd59a0aba
commit-date: 2024-09-10
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Anything else?
_No response_ | A-lints,A-macros,T-compiler,C-feature-request | low | Minor |
2,518,975,437 | ui | [bug]: Next.js install instructions missing tailwind step | ### Describe the bug
Running the installation command in a Next.js project where Tailwind has not previously been used leads to an error. The installation instructions seem to be missing a step (i.e. setting up Tailwind); the documentation doesn't say that Tailwind is a necessary dependency.
### Affected component/components
installation
### How to reproduce
1. Run `npx shadcn@latest init` in a next.js project where tailwind has not been previously used.
2. get the following error: `✖ Validating Tailwind CSS.`
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
$ npx shadcn@latest init
✔ Preflight checks.
✔ Verifying framework. Found Next.js.
✖ Validating Tailwind CSS.
✔ Validating import alias.
No Tailwind CSS configuration found at ...
It is likely you do not have Tailwind CSS installed or have an invalid configuration.
Install Tailwind CSS then try again.
Visit https://tailwindcss.com/docs/guides/nextjs to get started.
```
### System Info
```bash
next.js running on ubuntu.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,519,349,696 | pytorch | [inductor][cpu] hf_BigBird float32 multiple threads performance regression in 2024-09-08 nightly release | ### 🐛 Describe the bug
<p>float32 dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>hf_BigBird</td>
<td>multiple</td>
<td>1</td>
<td>1.00978</td>
<td>0.168430035</td>
<td>0.1700772807423</td>
<td>51.089231</td>
<td>1</td>
<td>1.172052</td>
<td>0.144673477</td>
<td>0.16956483806480402</td>
<td>66.752075</td>
<td>0.86</td>
<td>1.0</td>
<td>0.86</td>
<td>1.31</td>
</tr>
</tbody>
</table>
<p>float32 static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>hf_BigBird</td>
<td>multiple</td>
<td>1</td>
<td>1.005213</td>
<td>0.161506732</td>
<td>0.16234866659391597</td>
<td>330.19123</td>
<td>1</td>
<td>1.182942</td>
<td>0.13820362600000002</td>
<td>0.16348687374769202</td>
<td>928.454535</td>
<td>0.85</td>
<td>1.01</td>
<td>0.86</td>
<td>2.81</td>
</tr>
</tbody>
</table>
<p>float32 dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>hf_BigBird</td>
<td>multiple</td>
<td>1</td>
<td>1.004326</td>
<td>0.161354734</td>
<td>0.162052754579284</td>
<td>335.387007</td>
<td>1</td>
<td>1.178599</td>
<td>0.137564144</td>
<td>0.162132962554256</td>
<td>943.997328</td>
<td>0.85</td>
<td>1.0</td>
<td>0.85</td>
<td>2.81</td>
</tr>
</tbody>
</table>
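For reference, the derived columns in the tables above follow directly from the raw timings; a small helper reproduces the arithmetic (rounded to two decimals, as in the report):

```python
def regression_ratios(speedup_new, inductor_new, eager_new, latency_new,
                      speedup_old, inductor_old, eager_old, latency_old):
    """Reproduce the derived table columns: speedup ratio (new/old) and
    eager/inductor/compilation-latency ratios (old/new)."""
    return {
        "speedup_ratio": round(speedup_new / speedup_old, 2),
        "eager_ratio": round(eager_old / eager_new, 2),
        "inductor_ratio": round(inductor_old / inductor_new, 2),
        "latency_ratio": round(latency_old / latency_new, 2),
    }

# First table (dynamic shape, default wrapper):
print(regression_ratios(1.00978, 0.168430035, 0.1700772807423, 51.089231,
                        1.172052, 0.144673477, 0.16956483806480402, 66.752075))
# → {'speedup_ratio': 0.86, 'eager_ratio': 1.0, 'inductor_ratio': 0.86, 'latency_ratio': 1.31}
```

The inductor ratio of 0.86 (old/new) is the ~14% slowdown being reported; eager time is unchanged.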
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>23512dbe</td>
<td>main</td>
<td>23512dbe</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>3bebc09be9845c0779f190489e8d4caa9e2653c8</td>
<td>main</td>
<td>c140fa1426603322a5a69ef91300f13489db5970</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.5.0a0+97ed7b3</td>
<td>main</td>
<td>2.5.0a0+97ed7b3</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench hf_BigBird float32 first static cpp
[torchbench-hf_BigBird-inference-float32-static-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/16962246/torchbench-hf_BigBird-inference-float32-static-cpp-multiple-performance-drop_guilty_commit.log)
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/95e976a63f0979a73ee2decb9092babeea55c13e
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @WeizhuoZhang-intel @chuanqi129 | oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,519,352,963 | deno | Automatically handle sibling d.ts files when using `--unstable-sloppy-imports` with `deno publish` to JSR | `deno publish` should auto inject the relevant `/* @ts-self-types="./index.d.ts" */` comments. | feat,jsr | low | Minor |
2,519,368,031 | angular | add signal with object to [(ngModel)] does not trigger changes in toObservable | ### Which @angular/* package(s) are the source of the bug?
core, forms
### Is this a regression?
No
### Description
Adding non-object values like numbers, strings, etc. to a signal works just fine. The problem occurs when the signal holds an object and is bound to ngModel.
The signal itself is updated, but the subscription from toObservable does not get triggered.
I think the best way to show what i mean is a simple stackblitz example
https://stackblitz.com/edit/jg6vgq?file=src%2Fexample%2Fform-field-overview-example.ts
The example shows test1, a signal holding a simple string, and test2, a signal holding an object.
Both test1 and test2 are wrapped in toObservable, which tracks the changes.
test1$ always gets triggered, but test2$ does not.
The only way to trigger changes to test2$ is to manually update the signal with set().
To see this working, enable line 41.
It would be nice if test2$ were updated just like test1$.
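This behavior is consistent with signals' default equality, which compares by reference (`Object.is`): the `[(ngModel)]` binding mutates a property of the object already held by the signal, so no new reference is ever produced and `toObservable` sees no change, while `set()` with a fresh object does. A framework-free sketch of that rule (the minimal `Signal` class below is illustrative, not Angular's implementation):

```python
class Signal:
    """Minimal stand-in for Angular's signal(): subscribers are notified
    only when the new value differs from the old one by reference
    (mirroring the default Object.is equality)."""
    def __init__(self, value):
        self._value = value
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)

    def set(self, value):
        if value is self._value:  # same reference -> treated as "no change"
            return
        self._value = value
        for fn in self._subs:
            fn(value)

events = []
sig = Signal({"name": "a"})
sig.subscribe(events.append)

obj = {"name": "b"}
sig.set(obj)                   # new reference: fires (like test1$)
obj["name"] = "c"              # in-place mutation, as [(ngModel)] does
sig.set(obj)                   # same reference: silent (like test2$)
sig.set({**obj, "name": "d"})  # fresh object via set(): fires again
print(len(events))  # → 2
```

This is why manually calling `set()` with a new object (line 41 in the repro) is currently the only way to make `test2$` emit.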
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/jg6vgq?file=src%2Fexample%2Fform-field-overview-example.ts
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 18.2.3
Node: 22.3.0
Package Manager: yarn 1.22.17
OS: win32 x64
Angular: 18.2.3
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, language-service, material, material-date-fns-adapter
... platform-browser, platform-browser-dynamic, platform-server
... router, service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.3 (cli-only)
@angular-devkit/build-angular 18.2.3
@angular-devkit/core 18.2.3
@angular-devkit/schematics 18.2.3
@schematics/angular 18.2.3
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.3
```
### Anything else?
_No response_ | area: core,area: forms,cross-cutting: signals | low | Critical |
2,519,370,689 | tensorflow | MLIR quantizer produces asymmetric quantization for int16 activations | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 22.04
- TensorFlow installation (pip package or built from source): pip package
- TensorFlow library (version, if pip package or github SHA, if built from source): 2.17.0
### 2. Code
```
import tensorflow as tf
from tensorflow.lite.python import convert

with open("calibrated.tflite", "rb") as model_file:
    calibrated_model = model_file.read()

inference_type = convert.convert_inference_tf_type_to_tflite_type(tf.dtypes.int16)
tflite_model = convert.mlir_quantize(calibrated_model,
                                     fully_quantize=True,
                                     inference_type=inference_type,
                                     input_data_type=tf.dtypes.int16,
                                     output_data_type=tf.dtypes.int16)

with open("quantized.tflite", "wb") as model_file:
    model_file.write(tflite_model)
```
Input file:
[calibrated.zip](https://github.com/user-attachments/files/16962415/calibrated.zip)
Output file:
[quantized.zip](https://github.com/user-attachments/files/16962422/quantized.zip)
### 3. Failure after conversion
The model cannot run on TFLite Micro with CMSIS-NN, as its fully connected kernel supports only symmetric quantization for int16 activations.
### 5. (optional) Any other info / logs
The quantizer already tries to take into account the int16 requirements here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization_utils.h#L238 and here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization_utils.h#L266
But it's not enough, as it should also set `narrow_range` before calling `quantfork::fakeQuantAttrsToType` here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization_utils.h#L248 and here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization_utils.h#L274
Otherwise the zero point might drift to -1, as it happens with the attached files.
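As a sanity check on the converter output, the interpreter's tensor details can be scanned for int16 tensors whose zero point drifted away from 0 - exactly the asymmetry CMSIS-NN rejects. A sketch (the dtype comparison goes by class name so the helper works with numpy dtype classes without importing TensorFlow here):

```python
def find_asymmetric_int16(tensor_details):
    """Return names of int16 tensors with a non-zero zero point.

    `tensor_details` is the list returned by
    tf.lite.Interpreter(model_path=...).get_tensor_details(); each entry
    carries 'name', 'dtype' (a numpy type class such as numpy.int16) and
    'quantization_parameters' with a 'zero_points' array.
    """
    bad = []
    for t in tensor_details:
        dtype = t.get("dtype")
        if getattr(dtype, "__name__", str(dtype)) != "int16":
            continue  # only int16 activations must be symmetric
        zero_points = t.get("quantization_parameters", {}).get("zero_points", [])
        if any(int(z) != 0 for z in zero_points):
            bad.append(t["name"])
    return bad

# Usage (paths from this report):
# interp = tf.lite.Interpreter(model_path="quantized.tflite")
# print(find_asymmetric_int16(interp.get_tensor_details()))
```

Running this on the attached quantized.tflite should surface the tensors whose zero point drifted to -1.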
| stat:contribution welcome,stat:awaiting tensorflower,comp:lite,TFLiteConverter,2.17 | low | Critical |
2,519,438,338 | vscode | Unable to open repository | Regardless the repository, I'm facing the following error when trying to open a file (it also cannot connect properly, so the file browser doesn't list the repo contents)

Version: 1.93.0
Commit: 4849ca9bdf9666755eb463db297b69e5385090e3
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:130.0) Gecko/20100101 Firefox/130.0
Embedder: github.dev
<!-- generated by web issue reporter --> | info-needed | low | Critical |
2,519,440,808 | kubernetes | Invoke the aggregation service interface. The response headers contain duplicates | ### What happened?

When we access the aggregation service through the apiserver, the response headers contain duplicate entries, which is clearly incorrect.
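To pin down which hop introduces the duplicates, the raw header lists of a direct call to the aggregated service and a call routed through the apiserver can be compared. A transport-agnostic sketch (the `http.client` usage in the comment is hypothetical):

```python
def find_duplicate_headers(headers):
    """headers: iterable of (name, value) pairs, as returned by
    http.client.HTTPResponse.getheaders(). Returns names that occur
    more than once (case-insensitive, as HTTP header names are)."""
    names = [k.lower() for k, _ in headers]
    return sorted({n for n in names if names.count(n) > 1})

# Hypothetical usage against a kubectl proxy on 127.0.0.1:8001:
# import http.client
# conn = http.client.HTTPConnection("127.0.0.1", 8001)
# conn.request("GET", "/apis/metrics.k8s.io/v1beta1")
# print(find_duplicate_headers(conn.getresponse().getheaders()))
```

If the direct call is clean and the proxied call is not, the apiserver's aggregation layer is the hop adding the duplicates.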
### What did you expect to happen?
There should be no duplicate fields
### How can we reproduce it (as minimally and precisely as possible)?
Accessing Aggregation Services Through API Server
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.28
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Minor |
2,519,453,577 | PowerToys | automatic deinstallation | ### Microsoft PowerToys version
v0.84
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
I cannot reproduce this bug, but maybe a developer can.
I just updated Windows and restarted the PC. I autostart some apps like PowerPoint, an Excel Sheet and PowerToys. The first thing I saw after the restart was the notification that a PowerToys update from v0.84 to v0.84.1 is available. I clicked it. The update window opened and the update started. After a few seconds a second update window opened, although I did not click anything again. It was not just a copy, the green progress bars were different. Then one window showed something like "uninstalling previous version". The progress bar of the second window suddenly went backwards. Next, both windows closed, leaving me with uninstalled PowerToys.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,519,524,123 | excalidraw | Request to whitelist URL | Please whitelist this URL:
https://hdpc.fa.us2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CampusHiring/my-profile | Embeddable | low | Minor |
2,519,533,470 | godot | Evaluation of LTO configuration for all targets, and its impact on build time, build size, and performance | For years we've operated under the assumption that LTO (Link Time Optimization) is a net positive for production builds as it would:
- Increase performance (notably up to 20% in the GDScript VM with GCC LTO)
- Reduce build size
The drawback is much longer build times, hence why it's only used for production builds/official releases.
Now findings in #96785 suggest that the reduction in build size is only true for GCC's LTO, and not for LLVM LTO (whether "full" LTO `-flto` or ThinLTO `-flto=thin`). With LLVM LTO there's a significant size increase for platforms we tested so far (Web, Android, Linux) of up to +15%. For the Web (currently using LTO for official builds) and Android (not using it for now) this is significant.
So it's time we do a thorough review of build flags for all targets and compilers and make sure we're actually using the best configuration possible for official builds.
I'll post successive replies for each Godot target platform so we can use these posts (maintainers are welcome to edit my posts) to keep track of metrics and findings for each platform individually. If that turns out to be too unwieldy we can fork this issue in one issue per platform, but I expect we'll find closely related behavior across platforms who share a compiler toolchain (GCC, LLVM, MSVC).
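For the per-platform metric posts, binary-size deltas are easy to keep comparable with a tiny helper (a sketch; the binary paths are placeholders):

```python
import os

def lto_size_delta_percent(baseline_path: str, lto_path: str) -> float:
    """Size change of an LTO build relative to its non-LTO baseline,
    in percent (positive means the LTO binary is larger, e.g. the
    up-to-+15% increase observed with LLVM LTO)."""
    baseline = os.path.getsize(baseline_path)
    return (os.path.getsize(lto_path) - baseline) / baseline * 100.0

# Hypothetical usage:
# print(lto_size_delta_percent("bin/godot.linuxbsd.editor.x86_64",
#                              "bin/godot.linuxbsd.editor.x86_64.lto"))
```

Reporting this one number per (platform, compiler, LTO mode) tuple should make the per-platform posts directly comparable.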
@godotengine/buildsystem @godotengine/android @godotengine/ios @godotengine/linux-bsd @godotengine/macos @godotengine/web @godotengine/windows | enhancement,platform:windows,discussion,platform:linuxbsd,platform:web,platform:ios,platform:android,platform:macos,topic:buildsystem,performance | low | Major |
2,519,537,934 | PowerToys | Feature Request: Customize displayed shortcuts in PowerToys Shortcut Guide after remapping | ### Description of the new feature / enhancement
I’ve remapped some shortcuts using PowerToys' Keyboard Manager. However, the Shortcut Guide still displays the original Windows shortcuts. It would be great to have the ability to update or customize the shortcuts displayed in the Shortcut Guide based on remapped keys, to avoid confusion and improve the user experience.
### Scenario when this would be used?
This feature would be useful whenever shortcuts are remapped by a user, allowing the Shortcut Guide to reflect the user's new key configurations, which would prevent mistakes and increase efficiency. As a power user, I rely heavily on custom shortcuts to improve my workflow. Having them accurately reflected in the Shortcut Guide would ensure that my remapped shortcuts are easy to remember and use without conflicting with the default Windows shortcut information.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,519,594,795 | ollama | `image_url` support for vision models | ### What is the issue?
curl:
```py
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer OPENAI_API_KEY" \
-d '{
"model": "minicpm-v:8b-2.6-fp16",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What’s in this image?"
},
{
"type": "image_url",
"image_url": {
"url": "http://images.cocodataset.org/val2017/000000039769.jpg"
}
}
]
}
],
"max_tokens": 300
}'
```
response:
```json
{
"error": {
"message": "invalid image input",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
```
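Until remote URLs are fetched server-side, one client-side workaround is to download the image yourself and inline it as a base64 data URL (sketch; the model name and endpoint come from the report above, and data-URL acceptance by the OpenAI-compatible endpoint is an assumption to verify):

```python
import base64

def build_image_chat_payload(model: str, prompt: str, img_bytes: bytes) -> dict:
    """OpenAI-style chat payload with the image inlined as a base64 data
    URL instead of a remote URL (which currently yields
    'invalid image input')."""
    b64 = base64.b64encode(img_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# Fetch the bytes client-side, e.g.
#   img = urllib.request.urlopen(
#       "http://images.cocodataset.org/val2017/000000039769.jpg").read()
# then POST json.dumps(build_image_chat_payload("minicpm-v:8b-2.6-fp16",
#                                               "What's in this image?", img))
# to http://localhost:11434/v1/chat/completions
```

Native server-side fetching of `image_url` values would remove the need for this extra round trip on the client.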
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.10 | feature request,api | low | Critical |
2,519,781,004 | next.js | FluentUI popover does not work in Next@14.2.X, but it does work in next@14.1.X | ### Link to the code that reproduces this issue
https://stackblitz.com/edit/nextjs-bzkxez
### To Reproduce
Link to stackblitz with version next 14.2:
https://stackblitz.com/edit/nextjs-bzkxez
Link to stackblitz with version next 14.1:
https://stackblitz.com/edit/nextjs-5igbub
Both projects are exactly the same: clean Next.js projects with @fluentui/react-components added.
In Next.js 14.1.x the FluentUI popover works; in 14.2.x it doesn't.
### Current vs. Expected behavior
Current behavior: the popover does not pop up as expected.
Expected behavior: the popover pops up.
### Provide environment information
```bash
node: 18.20.3,
npm: 10.2.3
Also tried with newest node version
node: 22.2.0,
yarn: 4.2.2
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
Tried with yarn, pnpm, and npm in different versions. The only difference seems to be the Next.js version.
What I don't understand is that Next.js apps with FluentUI must be used by dozens of people, yet I haven't found any issue on this topic.
This is my first public issue; if I did something wrong, I'm ready for feedback. | bug | low | Minor |
2,519,794,059 | PowerToys | Feature Request: Add Search Functionality for Environment Variables | ### Description of the new feature / enhancement
I propose adding a search feature to the environment variables functionality. This feature would allow users to quickly search and locate specific environment variables by name or value within the interface.
### Scenario when this would be used?
This feature would be most beneficial when working with a large number of environment variables, where manually searching for a specific variable becomes time-consuming.
### Supporting information
The search functionality should enable users to type in part of an environment variable's name or value and filter the list of variables dynamically. It would make the user experience more efficient by allowing quick access to specific variables without manually scrolling through the entire list. | Needs-Triage | low | Minor |
2,519,813,605 | pytorch | Python's REPL trying to do invoke `.T` / `.mH` / `.mT` on 0-dim tensor when doing tab-completion for some reason (default install of Python on new Ubuntu 24.04) | ### 🐛 Describe the bug
Importing torch, defining a 0-dim tensor as `t = torch.tensor(1)`, typing `t.`, and pressing the `Tab` key (I was looking to check whether tensors natively support a `.pad(...)` instance function - I hope it gets promoted at some point from `torch.nn.functional.pad` to `torch.pad`/an instance function :) ) produces the following completely unexpected warnings. The syntax error part appears when I hit `Enter` trying to escape the warnings.
On a side note, is there any benefit to removing the warning and simply returning the identity for 0-dim tensors for `.mH` / `.mT` / `.T`?
```
Python 3.12.3 (main, Jul 31 2024, 17:43:48) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> t = torch.tensor(1)
>>> t./usr/lib/python3.12/rlcompleter.py:189: UserWarning: Tensor.mT is deprecated on 0-D tensors. This function is the identity in these cases. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3746.)
if (value := getattr(thisobject, word, None)) is not None:
/usr/lib/python3.12/rlcompleter.py:189: UserWarning: Tensor.mH is deprecated on 0-D tensors. Consider using x.conj(). (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3754.)
if (value := getattr(thisobject, word, None)) is not None:
/usr/lib/python3.12/rlcompleter.py:189: UserWarning: Tensor.T is deprecated on 0-D tensors. This function is the identity in these cases. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3705.)
if (value := getattr(thisobject, word, None)) is not None:
File "<stdin>", line 1
t.
^
SyntaxError: invalid syntax
```
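For context: the warnings come from tab completion itself, not from user code. `rlcompleter.attr_matches` calls `getattr` on every candidate attribute (line 189 in the transcript above), and `Tensor.T`/`.mT`/`.mH` are C-level getset descriptors, so the Python-level `property` guard added for bpo-44752 does not skip them. A torch-free sketch of the mechanism (the `Demo`/`WarnOnGet` names are illustrative):

```python
import rlcompleter
import warnings

class WarnOnGet:
    """Stands in for a C-level getset descriptor like Tensor.T; because
    it is not a Python `property`, rlcompleter's property guard does not
    prevent getattr from evaluating it."""
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        warnings.warn("T is deprecated on 0-D values.", UserWarning)
        return obj  # identity, like Tensor.T on a 0-D tensor

class Demo:
    T = WarnOnGet()

completer = rlcompleter.Completer({"d": Demo()})
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    completer.complete("d.", 0)  # what the REPL does when Tab is pressed
# The deprecated accessor ran during completion, not because we used .T:
print(any(issubclass(w.category, UserWarning) for w in caught))  # → True
```

So the warnings fire as a side effect of completion; dropping the warning (and keeping the identity behavior) would indeed make tab completion silent again.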
### Versions
Ubuntu 24.04, python 3.12.3, torch 2.4.0+cpu | needs reproduction,triaged,release notes: python_frontend | low | Critical |
2,519,814,834 | storybook | [Bug]: Releasing majors/minors fails to sync version files at the end of the workflow | ### Describe the bug
Releasing majors/minors fails at the end of the CI workflow because it tries to sync the `version.json` files from `next-release` to `main` - but `main` is already up to date at this point, so there's nothing to push. This is a false negative; everything is in order, it just needs to not attempt that last sync because it has already happened with a force-push in a previous step.
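One way to make that last step no-op-aware rather than failing is to check whether `main` already contains everything on `next-release` before pushing (sketch; the git invocation is an assumption about the workflow's environment):

```python
import subprocess

def push_would_be_noop(src: str, dst: str, run=subprocess.run) -> bool:
    """True when dst already contains every commit on src, i.e. the
    final sync push has nothing to do and can be skipped."""
    out = run(["git", "rev-list", "--count", f"{dst}..{src}"],
              capture_output=True, text=True, check=True).stdout
    return int(out.strip()) == 0

# In the release script: skip the sync step when
# push_would_be_noop("next-release", "main") is True.
```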
### Reproduction link
https://github.com/storybookjs/storybook/actions/runs/10812502523/job/29994286479
### Reproduction steps
See the workflow run for the 8.3.0 release:
https://github.com/storybookjs/storybook/actions/runs/10812502523/job/29994286479
| bug,build | low | Critical |
2,519,822,878 | PowerToys | Microsoft To Do can not be captured when minimized | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
Start Microsoft To Do (Microsoft Store Version) and minimize it.
Start a capture in "Workspaces". The App will not be found.

### ✔️ Expected Behavior
Capturable when minimized
### ❌ Actual Behavior
Only Capturable when maximized/on screen
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,519,877,835 | flutter | Web Integration Test hangs till default timeout of 20 minutes on macOS GitHub Actions Runner | ### Steps to reproduce
This bug is only reproducible on the macOS GitHub Actions Runner Image.
1. Minimal code reproducible repo: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout
2. Workflow files: [reproduce-stock.yml], [reproduce-all.yml]
3. Bug reproduced runs: [reproduce-stock-run], [reproduce-all-run]
[reproduce-stock.yml]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/blob/main/.github/workflows/reproduce-stock.yml
[reproduce-all.yml]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/blob/main/.github/workflows/reproduce-all.yml
[reproduce-stock-run]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/actions/runs/10811686290/job/29991616779
[reproduce-all-run]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/actions/runs/10811686159/job/29991617797
[reproduce-stock-runs]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/actions/workflows/reproduce-stock.yml
[reproduce-all-runs]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/actions/workflows/reproduce-all.yml
[Expected result run]: https://github.com/hrishikesh-kadam/space_data_explorer/actions/runs/10554873304/job/29237445146#step:11:552
[Actual result run]: https://github.com/hrishikesh-kadam/space_data_explorer/actions/runs/10749373345/job/29814199834
[127]: https://github.com/hrishikesh-kadam/space_data_explorer/actions/runs/10554873304/job/29237445146#step:11:525
[128]: https://github.com/hrishikesh-kadam/space_data_explorer/actions/runs/10749373345/job/29814199834#step:11:540
### Expected results
[Expected result run in minimal repro]
Web Integration Test was working fine until recently on my portfolio project: https://github.com/hrishikesh-kadam/space_data_explorer
[Expected result run]
[Expected result run in minimal repro]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/actions/runs/12346007897/job/34450951856#step:4:90
### Actual results
[Actual result run in minimal repro]
[Actual result run]
[Actual result run in minimal repro]: https://github.com/hrishikesh-kadam/repro_web_integration_test_timeout/actions/runs/12346007897/job/34450952037
The noticeable differences between the [Expected result run] and the [Actual result run] are:
- Google Chrome and Chromedriver versions [127], [128].
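Given that the Chrome/Chromedriver major version (127 vs 128) is the only observable difference, a fail-fast guard in CI can at least localize the hang to a version mismatch (sketch; the binary names on PATH are assumptions):

```python
import re
import subprocess

def parse_major(version_output: str) -> int:
    """Pull the major version out of `--version` output such as
    'Google Chrome 127.0.6533.72' or 'Starting ChromeDriver 128.0.6613.119'."""
    m = re.search(r"(\d+)\.\d+\.\d+", version_output)
    if not m:
        raise ValueError(f"cannot parse version from {version_output!r}")
    return int(m.group(1))

def assert_matching_majors(chrome="google-chrome", driver="chromedriver"):
    """Fail the CI job early when the two binaries disagree on major version."""
    versions = {}
    for cmd in (chrome, driver):
        out = subprocess.run([cmd, "--version"], capture_output=True,
                             text=True, check=True).stdout
        versions[cmd] = parse_major(out)
    if len(set(versions.values())) != 1:
        raise SystemExit(f"major version mismatch: {versions}")
```

Running `assert_matching_majors()` before `flutter drive` would turn a 20-minute silent hang into an immediate, diagnosable failure.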
### Code sample
Explained in **`Steps to reproduce`**
### Screenshots or Video
NA
### Logs
<details open><summary>Logs</summary>
```console
Starting ChromeDriver 128.0.6613.119 (6e439cfca4deda5954b0c74cde9b521c03cb31ad-refs/branch-heads/6613@{#1464}) on port 4444
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully on port 4444.
Downloading Web SDK... 2,025ms
Resolving dependencies...
Downloading packages...
_fe_analyzer_shared 72.0.0 (74.0.0 available)
_flutterfire_internals 1.3.40 (1.3.41 available)
analyzer 6.7.0 (6.9.0 available)
collection 1.18.0 (1.19.0 available)
dio 5.6.0 (5.7.0 available)
firebase_analytics 11.2.1 (11.3.0 available)
firebase_analytics_platform_interface 4.2.1 (4.2.2 available)
firebase_analytics_web 0.5.9+1 (0.5.9+2 available)
firebase_core 3.3.0 (3.4.0 available)
firebase_core_platform_interface 5.2.0 (5.2.1 available)
firebase_core_web 2.17.4 (2.17.5 available)
firebase_crashlytics 4.0.4 (4.1.0 available)
firebase_crashlytics_platform_interface 3.6.40 (3.6.41 available)
flex_seed_scheme 1.5.0 (3.2.0 available)
http_parser 4.0.2 (4.1.0 available)
injector 3.0.0 (4.0.0 available)
leak_tracker 10.0.5 (10.0.7 available)
leak_tracker_flutter_testing 3.0.5 (3.0.8 available)
macros 0.1.2-main.4 (0.1.3-main.0 available)
material_color_utilities 0.11.1 (0.12.0 available)
mime 1.0.5 (1.0.6 available)
sentry 8.7.0 (8.8.0 available)
sentry_flutter 8.7.0 (8.8.0 available)
shelf 1.4.1 (1.4.2 available)
string_scanner 1.2.0 (1.3.0 available)
test_api 0.7.2 (0.7.3 available)
uuid 4.4.2 (4.5.0 available)
web 0.5.1 (1.0.0 available)
Got dependencies!
28 packages have newer versions incompatible with dependency constraints.
Try `flutter pub outdated` for more information.
Launching integration_test/app_bar_back_button_test.dart on Web Server in debug mode...
Waiting for connection from debug service on Web Server... 29.1s
integration_test/app_bar_back_button_test.dart is being served at http://localhost:49699
The web-server device requires the Dart Debug Chrome extension for debugging. Consider using the Chrome or Edge devices for an improved development workflow.
Unhandled exception:
DriverError: Error while reading FlutterDriver result for command: window.$flutterDriver('{"command":"request_data","timeout":"1200000"}')
Original error: Exception: Expected: not null
Actual: <null>
Original stack trace:
#0 _matcherExpect (package:webdriver/support/async.dart:92:3)
#1 Clock.waitFor (package:webdriver/support/async.dart:60:11)
<asynchronous suspension>
#2 FlutterWebConnection.sendCommand (package:flutter_driver/src/driver/web_driver.dart:327:30)
<asynchronous suspension>
#3 WebFlutterDriver.sendCommand (package:flutter_driver/src/driver/web_driver.dart:123:14)
<asynchronous suspension>
#4 FlutterDriver.requestData (package:flutter_driver/src/driver/driver.dart:[540](https://github.com/hrishikesh-kadam/space_data_explorer/actions/runs/10749373345/job/29814199834#step:11:541):39)
<asynchronous suspension>
#5 integrationDriver (package:integration_test/integration_test_driver.dart:74:29)
<asynchronous suspension>
#0 FlutterWebConnection.sendCommand (package:flutter_driver/src/driver/web_driver.dart:338:7)
<asynchronous suspension>
#1 WebFlutterDriver.sendCommand (package:flutter_driver/src/driver/web_driver.dart:123:14)
<asynchronous suspension>
#2 FlutterDriver.requestData (package:flutter_driver/src/driver/driver.dart:540:39)
<asynchronous suspension>
#3 integrationDriver (package:integration_test/integration_test_driver.dart:74:29)
<asynchronous suspension>
Application finished.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
Please note that this is only reproducible on the macOS GitHub Actions Runner Image,
so pasting the `flutter --version` output
```console
Flutter 3.24.2 • channel stable • https://github.com/flutter/flutter
Framework • revision 4cf269e36d (3 days ago) • 2024-09-03 14:30:00 -0700
Engine • revision a6bd3f1de1
Tools • Dart 3.5.2 • DevTools 2.37.2
```
</details>
### Similar Issues
- https://github.com/flutter/flutter/issues/148982
- https://github.com/flutter/website/issues/11117
- https://stackoverflow.com/questions/78524233/flutter-drive-hanging-on-github-actions-with-no-output
### References
It would be great to know from the relevant maintainer of [firebase/flutterfire] what extra steps in [web.yaml][firebase/flutterfire web.yaml] are making them run the `flutter drive` command without timeout for `--device-id chrome`.
[firebase/flutterfire]: https://github.com/firebase/flutterfire
[firebase/flutterfire web.yaml]: https://github.com/firebase/flutterfire/blob/main/.github/workflows/web.yaml
| a: tests,tool,platform-mac,t: flutter driver,platform-web,f: integration_test,P2,team-web,triaged-web | low | Critical |
2,519,919,819 | ui | [bug]: App crashes when opening [tooltip chart] on mobile | ### Describe the bug
When accessing the chart tooltip on mobile devices, the app crashes immediately. The issue does not occur on desktop browsers. This bug appears consistently across multiple mobile devices and needs urgent attention.
https://github.com/user-attachments/assets/21a8cc83-7d10-4e84-b7a1-80329e6f58d3
### Affected component/components
chart
### How to reproduce
- Open the app on a mobile device: [shadcn](https://ui.shadcn.com)
- Navigate to the chart [tooltip section](https://ui.shadcn.com/charts#tooltip)
- Try opening the component on a mobile device and observe the app crashing.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
This happens across various mobile devices.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,519,957,003 | pytorch | Attributeless FakeRootModule | ### 🐛 Describe the bug
When running a segmentation model (UNet) from segmentation_models_pytorch using PyTorch's torch._dynamo, I encountered an internal error.
The line that triggers the issue is not the compile call itself; the error appears when, for example, I call the summary or start training. For example:
```
summary(
    model,
    (Params.channels, *Params.image_reshape),
    batch_size=Params.batch_size,
    device=Params.device,
)
```
### Error logs
{
"name": "InternalTorchDynamoError",
"message": "'FakeRootModule' object has no attribute 'self___relu__forward_hooks_10___closure___1_cell_contents__Conv2d_1____nb_params'
Set TORCH_LOGS=\"+dynamo\" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
",
"stack": "---------------------------------------------------------------------------
InternalTorchDynamoError Traceback (most recent call last)
Cell In[14], line 2
1 # Print model summary
----> 2 summary(
3 model,
4 (Params.channels, *Params.image_reshape),
5 batch_size=Params.batch_size,
6 device=Params.device,
7 )
File ~/.local/lib/python3.12/site-packages/torchsummary/torchsummary.py:72, in summary(model, input_size, batch_size, device)
68 model.apply(register_hook)
70 # make a forward pass
71 # print(x.shape)
---> 72 model(*x)
74 # remove these hooks
75 for h in hooks:
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py:433, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
428 saved_dynamic_layer_stack_depth = (
429 torch._C._functorch.get_dynamic_layer_stack_depth()
430 )
432 try:
--> 433 return fn(*args, **kwargs)
434 finally:
435 # Restore the dynamic layer stack depth if necessary.
436 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
437 saved_dynamic_layer_stack_depth
438 )
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1603, in Module._call_impl(self, *args, **kwargs)
1600 bw_hook = hooks.BackwardHook(self, full_backward_hooks, backward_pre_hooks)
1601 args = bw_hook.setup_input_hook(args)
-> 1603 result = forward_call(*args, **kwargs)
1604 if _global_forward_hooks or self._forward_hooks:
1605 for hook_id, hook in (
1606 *_global_forward_hooks.items(),
1607 *self._forward_hooks.items(),
1608 ):
1609 # mark that always called hook is run
File ~/.local/lib/python3.12/site-packages/segmentation_models_pytorch/base/model.py:33, in SegmentationModel.forward(self, x)
23 new_w = (
24 (w // output_stride + 1) * output_stride
25 if w % output_stride != 0
26 else w
27 )
28 raise RuntimeError(
29 f\"Wrong input shape height={h}, width={w}. Expected image height and width \"
30 f\"divisible by {output_stride}. Consider pad your images to shape ({new_h}, {new_w}).\"
31 )
---> 33 def forward(self, x):
34 \"\"\"Sequentially pass `x` trough model`s encoder, decoder and heads\"\"\"
36 self.check_input_shape(x)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1603, in Module._call_impl(self, *args, **kwargs)
1600 bw_hook = hooks.BackwardHook(self, full_backward_hooks, backward_pre_hooks)
1601 args = bw_hook.setup_input_hook(args)
-> 1603 result = forward_call(*args, **kwargs)
1604 if _global_forward_hooks or self._forward_hooks:
1605 for hook_id, hook in (
1606 *_global_forward_hooks.items(),
1607 *self._forward_hooks.items(),
1608 ):
1609 # mark that always called hook is run
File ~/.local/lib/python3.12/site-packages/segmentation_models_pytorch/encoders/resnet.py:58, in ResNetEncoder.forward(self, x)
48 def get_stages(self):
49 return [
50 nn.Identity(),
51 nn.Sequential(self.conv1, self.bn1, self.relu),
(...)
55 self.layer4,
56 ]
---> 58 def forward(self, x):
59 stages = self.get_stages()
61 features = []
File ~/.local/lib/python3.12/site-packages/segmentation_models_pytorch/encoders/resnet.py:63, in torch_dynamo_resume_in_forward_at_59(___stack0, self, x)
61 features = []
62 for i in range(self._depth + 1):
---> 63 x = stages[i](x)
64 features.append(x)
66 return features
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/container.py:219, in Sequential.forward(self, input)
217 def forward(self, input):
218 for module in self:
--> 219 input = module(input)
220 return input
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/container.py:219, in Sequential.forward(self, input)
217 def forward(self, input):
218 for module in self:
--> 219 input = module(input)
220 return input
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1603, in Module._call_impl(self, *args, **kwargs)
1600 bw_hook = hooks.BackwardHook(self, full_backward_hooks, backward_pre_hooks)
1601 args = bw_hook.setup_input_hook(args)
-> 1603 result = forward_call(*args, **kwargs)
1604 if _global_forward_hooks or self._forward_hooks:
1605 for hook_id, hook in (
1606 *_global_forward_hooks.items(),
1607 *self._forward_hooks.items(),
1608 ):
1609 # mark that always called hook is run
File ~/.local/lib/python3.12/site-packages/torchvision/models/resnet.py:143, in Bottleneck.forward(self, x)
140 self.downsample = downsample
141 self.stride = stride
--> 143 def forward(self, x: Tensor) -> Tensor:
144 identity = x
146 out = self.conv1(x)
File ~/.local/lib/python3.12/site-packages/torchvision/models/resnet.py:146, in torch_dynamo_resume_in_forward_at_146(___stack0, self, x, identity)
143 def forward(self, x: Tensor) -> Tensor:
144 identity = x
--> 146 out = self.conv1(x)
147 out = self.bn1(out)
148 out = self.relu(out)
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:1116, in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1110 return hijacked_callback(
1111 frame, cache_entry, self.hooks, frame_state
1112 )
1114 with compile_lock, _disable_current_modes():
1115 # skip=1: skip this frame
-> 1116 return self._torchdynamo_orig_callable(
1117 frame, cache_entry, self.hooks, frame_state, skip=1
1118 )
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:948, in ConvertFrame.__call__(self, frame, cache_entry, hooks, frame_state, skip)
946 counters[\"frames\"][\"total\"] += 1
947 try:
--> 948 result = self._inner_convert(
949 frame, cache_entry, hooks, frame_state, skip=skip + 1
950 )
951 counters[\"frames\"][\"ok\"] += 1
952 return result
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:472, in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
458 compile_id = CompileId(frame_id, frame_compile_id)
460 signpost_event(
461 \"dynamo\",
462 \"_convert_frame_assert._compile\",
(...)
469 },
470 )
--> 472 return _compile(
473 frame.f_code,
474 frame.f_globals,
475 frame.f_locals,
476 frame.f_builtins,
477 self._torchdynamo_orig_callable,
478 self._one_graph,
479 self._export,
480 self._export_constraints,
481 hooks,
482 cache_entry,
483 cache_size,
484 frame,
485 frame_state=frame_state,
486 compile_id=compile_id,
487 skip=skip + 1,
488 )
File ~/.local/lib/python3.12/site-packages/torch/_utils_internal.py:84, in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
82 if \"skip\" in kwargs:
83 kwargs[\"skip\"] = kwargs[\"skip\"] + 1
---> 84 return StrobelightCompileTimeProfiler.profile_compile_time(
85 function, phase_name, *args, **kwargs
86 )
File ~/.local/lib/python3.12/site-packages/torch/_strobelight/compile_time_profiler.py:129, in StrobelightCompileTimeProfiler.profile_compile_time(cls, func, phase_name, *args, **kwargs)
124 @classmethod
125 def profile_compile_time(
126 cls, func: Any, phase_name: str, *args: Any, **kwargs: Any
127 ) -> Any:
128 if not cls.enabled:
--> 129 return func(*args, **kwargs)
131 if cls.profiler is None:
132 logger.error(\"profiler is not set\")
File /usr/local/lib/python3.12/contextlib.py:81, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
78 @wraps(func)
79 def inner(*args, **kwds):
80 with self._recreate_cm():
---> 81 return func(*args, **kwds)
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:846, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
844 fail_user_frame_lineno = e.innermost_user_frame_summary.lineno # type: ignore[attr-defined]
845 e.compile_id = compile_id # type: ignore[attr-defined]
--> 846 raise InternalTorchDynamoError(str(e)).with_traceback(
847 e.__traceback__
848 ) from None
849 finally:
850 if tracer:
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:817, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
815 guarded_code = None
816 try:
--> 817 guarded_code = compile_inner(code, one_graph, hooks, transform)
818 return guarded_code
819 except (
820 Unsupported,
821 TorchRuntimeError,
(...)
828 BisectValidationException,
829 ) as e:
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/utils.py:231, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
229 with torch.profiler.record_function(f\"{key} (dynamo_timed)\"):
230 t0 = time.time()
--> 231 r = func(*args, **kwargs)
232 time_spent = time.time() - t0
233 compilation_time_metrics[key].append(time_spent)
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:636, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
634 CompileContext.get().attempt = attempt
635 try:
--> 636 out_code = transform_code_object(code, transform)
637 break
638 except exc.RestartAnalysis as e:
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py:1185, in transform_code_object(code, transformations, safe)
1182 instructions = cleaned_instructions(code, safe)
1183 propagate_line_nums(instructions)
-> 1185 transformations(instructions, code_options)
1186 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:178, in preserve_global_state.<locals>._fn(*args, **kwargs)
176 cleanup = setup_compile_debug()
177 try:
--> 178 return fn(*args, **kwargs)
179 finally:
180 cleanup.close()
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:582, in _compile.<locals>.transform(instructions, code_options)
580 try:
581 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 582 tracer.run()
583 except exc.UnspecializeRestartAnalysis:
584 speculation_log.clear()
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:2451, in InstructionTranslator.run(self)
2450 def run(self):
-> 2451 super().run()
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
891 try:
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
895 except BackendCompilerFailed:
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
802 self.update_block_stack(inst)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
807 except exc.ObservedException:
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:497, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
495 if speculation.failed:
496 assert speculation.reason is not None
--> 497 return handle_graph_break(self, inst, speculation.reason)
498 try:
499 return inner_fn(self, inst)
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:566, in break_graph_if_unsupported.<locals>.decorator.<locals>.handle_graph_break(self, inst, reason)
561 def handle_graph_break(
562 self: \"InstructionTranslatorBase\",
563 inst: Instruction,
564 reason: GraphCompileReason,
565 ):
--> 566 self.output.compile_subgraph(self, reason=reason)
567 cg = PyCodegen(self)
568 cleanup: List[Instruction] = []
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/output_graph.py:1123, in OutputGraph.compile_subgraph(self, tx, partial_convert, reason)
1120 output = []
1121 if count_calls(self.graph) != 0 or len(pass2.graph_outputs) != 0:
1122 output.extend(
-> 1123 self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
1124 )
1126 if len(pass2.graph_outputs) != 0:
1127 output.append(pass2.create_store(graph_output_var))
File /usr/local/lib/python3.12/contextlib.py:81, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
78 @wraps(func)
79 def inner(*args, **kwds):
80 with self._recreate_cm():
---> 81 return func(*args, **kwds)
File ~/.local/lib/python3.12/site-packages/torch/_dynamo/output_graph.py:1269, in OutputGraph.compile_and_call_fx_graph(self, tx, rv, root)
1261 self.create_node(
1262 \"output\",
1263 \"output\",
1264 (self.current_tracer.create_arg(tuple(x.as_proxy() for x in rv)),),
1265 {},
1266 )
1267 if not config.do_not_emit_runtime_asserts:
1268 insert_deferred_runtime_asserts(
-> 1269 fx.GraphModule(root, self.graph),
1270 self.shape_env,
1271 name,
1272 )
1273 # NB: deferred runtime asserts can keep graphargs live, so make sure
1274 # those are inserted before pruning
1275 self.remove_unused_graphargs()
File ~/.local/lib/python3.12/site-packages/torch/fx/graph_module.py:399, in GraphModule.__init__(self, root, graph, class_name)
397 if node.op in [\"get_attr\", \"call_module\"]:
398 assert isinstance(node.target, str)
--> 399 _copy_attr(root, self, node.target)
400 elif isinstance(root, dict):
401 targets_to_copy = []
File ~/.local/lib/python3.12/site-packages/torch/fx/graph_module.py:229, in _copy_attr(from_module, to_module, target)
226 setattr(to_module, item, t)
227 from_module, to_module = f, t
--> 229 orig = getattr(from_module, field)
230 # If it is a tensor and not a parameter attribute of a module, it should be a named buffer.
231 # So, we register it as a named buffer in the target module.
232 if isinstance(orig, torch.Tensor) and not isinstance(orig, torch.nn.Parameter):
File ~/.local/lib/python3.12/site-packages/torch/nn/modules/module.py:1729, in Module.__getattr__(self, name)
1727 if name in modules:
1728 return modules[name]
-> 1729 raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")
InternalTorchDynamoError: 'FakeRootModule' object has no attribute 'self___relu__forward_hooks_10___closure___1_cell_contents__Conv2d_1____nb_params'
Set TORCH_LOGS=\"+dynamo\" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
}
### Minified repro
_No response_
### Versions
OS: (sysname='Linux', release='6.10.6-2-liquorix-amd64', version='#1 ZEN SMP PREEMPT liquorix 6.10-6ubuntu1~jammy (2024-08-20)', machine='x86_64')
__Python VERSION: 3.12.3 (main, Sep 4 2024, 12:08:24) [GCC 13.2.0]
Python Version: 3.12.3 (main, Sep 4 2024, 12:08:24) [GCC 13.2.0]
PyTorch Version: 2.4.1+cu124
CUDA Version: 12.6
GPU: NVIDIA GeForce GTX 1650 Ti
NVIDIA Driver Version: 560.35.03
Active CUDA Device: GPU 0
Number of CUDA Devices: 1
Available Devices: 1
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:10:22_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0
Collect output:
<pre> % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 23357 100 23357 0 0 38472 0 --:--:-- --:--:-- --:--:-- 38416
</pre>
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Critical |
2,519,960,200 | react-native | Extra line wrap on some Android devices on some combinations of lineHeight, letterSpacing and other related properties | ### Description
On some Android devices (for example the Redmi 10C or Pixel 6 Pro) there is a text measuring bug when both lineHeight and letterSpacing are set.
Certain combinations of fontScale, lineHeight, maxFontSizeMultiplier, letterSpacing, fontFamily and the text content itself cause an extra line wrap.
In our case, it was possible to "fix" it by removing maxFontSizeMultiplier, letterSpacing or lineHeight.
As a workaround, we set allowFontScaling={false} and do the font scaling manually (scaling fontSize and lineHeight by the fontScale ourselves).
<img width="161" alt="image" src="https://github.com/user-attachments/assets/8516e5a6-5f6c-431e-b643-9a24cb6a2e6c">
This was already reported as https://github.com/facebook/react-native/issues/35039, but that issue was closed.
### Steps to reproduce
1. Open Snack link https://snack.expo.dev/@matejkriztrezor/line-break-bug-demo
1. Choose Android
1. Switch to Pixel 6 Pro
1. Launch Snack
### React Native Version
0.75.2
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (8) arm64 Apple M1 Pro
Memory: 144.11 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.12.2
path: ~/.nvm/versions/node/v20.12.2/bin/node
Yarn:
version: 4.2.2
path: /opt/homebrew/bin/yarn
npm:
version: 10.5.0
path: ~/.nvm/versions/node/v20.12.2/bin/npm
Watchman:
version: 2024.04.08.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /Users/matejkriz/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "29"
- "30"
- "31"
- "33"
- "34"
Build Tools:
- 29.0.2
- 30.0.2
- 30.0.3
- 31.0.0
- 33.0.0
- 33.0.1
- 33.0.2
- 34.0.0
System Images:
- android-28 | Google ARM64-V8a Play ARM 64 v8a
- android-29 | Google Play ARM 64 v8a
- android-31 | Google Play ARM 64 v8a
- android-33 | Google APIs ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
- android-34 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.2.1
path: /Users/matejkriz/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
there is no crash, only UI bug
```
### Reproducer
https://snack.expo.dev/@matejkriztrezor/line-break-bug-demo?platform=android
```javascript
import React from 'react';
import {Text, View} from 'react-native';
export default () => (
<View style={{flexDirection: 'row'}}>
<Text
style={{
letterSpacing: 1.6,
lineHeight: 21,
}}>
0123456789
</Text>
</View>
);
```
### Screenshots and Videos
<img width="488" alt="image" src="https://github.com/user-attachments/assets/2c1b2778-e508-434e-97eb-752951f23eab">
| Issue: Author Provided Repro,Platform: Android | low | Critical |
2,519,974,674 | flutter | `MouseRegion` callbacks are triggered unexpectedly on Android when tapped | ### Steps to reproduce
When trying to fix https://github.com/flutter/flutter/issues/154842, I noticed that the issue is caused by misfired `MouseRegion` callbacks on Android. When testing on iOS, these mouse region callbacks are not called, as expected.
Try the code sample below, which uses `MouseRegion`.
### Android
https://github.com/user-attachments/assets/83b5c5c9-193d-4d6e-995b-c56c44ad35da
### iOS
https://github.com/user-attachments/assets/e4e885b0-c12c-44a0-a5d6-d5c66736d443
### Expected results
Expect the same behavior on both mobile platforms
### Actual results
Mouse region callbacks are triggered only on Android
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
int onHoverCount = 0;
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
appBar: AppBar(
title: const Text('Sample'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text('onHover called $onHoverCount times', style: TextStyle(fontSize: 20)),
MouseRegion(
onHover: (PointerHoverEvent event) {
print('onHover');
setState(() {
onHoverCount++;
});
},
hitTestBehavior: HitTestBehavior.opaque,
child: const FlutterLogo(size: 100),
),
],
),
),
),
);
}
}
```
</details>
### Flutter Doctor output
This is reproducible as far back as Flutter 3.0 | platform-android,framework,a: tablet,f: gestures,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.26 | low | Critical |
2,520,062,892 | puppeteer | [Bug]: PDF rendering looks crooked | ### Minimal, reproducible example
```TypeScript
import puppeteer from "puppeteer";
import fs from 'fs';
const browser = await puppeteer.launch();
const VIEWPORT_OPTIONS = {
width: 1200,
height: 800,
deviceScaleFactor: 2,
};
const HTML_PATH = 'source.html';
const PDF_OPTIONS = {
printBackground: true,
format: 'A4',
path: 'crooked.pdf'
};
const IMAGE_OPTIONS = {
fullPage: true,
path: 'ok.png',
};
const renderedContent =
`<!DOCTYPE html>
<html>
<head>
<style>
body {
margin: 0;
padding: 2em;
width: 100vw;
height: 100vh;
image-rendering: pixelated;
background: repeating-conic-gradient(#ccc 0% 25%, #fff 0% 50%) 50% / 20px 20px;
}
.rating-circle {
--background-color: red;
display: inline-block;
position: relative;
padding: 0.5em;
margin-right: 0.5em;
z-index: 1;
font-weight: 700;
font-size: 0.7em;
line-height: 0;
color: white;
text-align: center;
}
.rating-circle::before {
content: "";
display: block;
position: absolute;
z-index: -1;
background: var(--background-color);
width: 100%;
min-width: 2em;
aspect-ratio: 1/1;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
border-radius: 50%;
}
</style>
</head>
<body>
Check the differences between <u>chrome-pdf</u>, <u>chrome-image</u> and <u>html</u> rendering
<span class="rating-circle" style="color: white; --background-color: red">1</span>
<span class="rating-circle" style="color: white; --background-color: green">2</span>
<span class="rating-circle" style="color: white; --background-color: blue">3</span>
<dl>
<dt><strong>in chrome-pdf</strong></dt>
<dd>you will see strange artifact lines appearing, they change depending on the <em>scale</em> which you can edit in the chrome pdf settings tab</dd>
<dt><strong>in html & chrome-image</strong></dt>
<dd>everything looks smooth and square</dd>
</dl>
</body>
</html>`;
// just to verify that the html view is correct
fs.writeFileSync(HTML_PATH, renderedContent);
const page = await browser.newPage();
await page.setViewport(VIEWPORT_OPTIONS);
await page.setContent(renderedContent);
await page.pdf(PDF_OPTIONS);
await page.screenshot(IMAGE_OPTIONS);
await browser.close();
```
### Background
Coming over from https://github.com/jsreport/jsreport/issues/1168, I want to raise a bug here, since I can eliminate the jsreport dependency and reproduce the problem using Puppeteer alone.
### Expectation
The background should look the same (a checkerboard with gray squares) no matter the output format. Each rating circle should be a perfect circle with its label centered inside.
### Reality
In HTML and in the screenshot PNG everything looks as expected, but the PDF looks crooked.
The rectangles are distorted, and centered objects do not seem to get subpixel-precision centering. Notice the .rating-circle elements: in HTML and PNG their content is perfectly centered, while in the PDF it is drawn about 1px toward the bottom left.
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
23.3.0
### Node version
v20.17.0
### Package manager
npm
### Package manager version
10.8.3
### Operating system
Windows | bug,upstream,confirmed,P3,chrome | low | Critical |
2,520,085,133 | terminal | Allow remapping of experimental.repositionCursorWithMouse to use key bindings such as Alt + Left Click | <!--
🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING:
1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement.
2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement.
3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number).
4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement.
5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement.
All good? Then proceed!
-->
# Description of the new feature/enhancement
I'm aware that, as of now, by setting `experimental.repositionCursorWithMouse` to `true`, we are able to move the cursor to the mouse's position with a simple left click. However, there are times when I want to simply click on the terminal to bring it into focus without moving the cursor around. Hence, I think it would be best to let users choose which keys they want to use for this feature, e.g. Alt+Left Click / Ctrl+Left Click / Middle Click.
# Proposed technical implementation details (optional)
I'm aware that mouse key bindings are not available yet, as explained in #1553. However, once they are, I'd really love to have a command such as `moveCursorWithMouse` rather than only the `experimental.repositionCursorWithMouse` profile setting, just like [`showContextMenu`](https://learn.microsoft.com/en-us/windows/terminal/customize-settings/actions#open-context-menu) is a command that can be mapped to a key binding and not just the profile setting `experimental.rightClickContextMenu`.
So maybe we could have something like this in the future:
```json
{ "command": "moveCursorWithMouse", "keys": "alt+left_click" }
```
| Issue-Feature,Area-Settings,Product-Terminal | low | Critical |
2,520,106,390 | PowerToys | Wrong offset of the window when closing crop and lock. | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Crop and Lock
### Steps to reproduce
1. Select the area to be cropped.

2. The area is cropped and locked (in a strange way).

3. When I close the cropped window, the screen offset of the original window is broken (but not the hit area; in fact, to close the window I have to click on the top-right corner, in the black area).

### ✔️ Expected Behavior
Closing the Crop and Lock window should restore everything to how it was before.
### ❌ Actual Behavior
The offset of the window I cropped is wrong, and I have to close the window.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,520,161,358 | PowerToys | Workspaces & Splash Screens | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
Set up a workspace that opens a program with a start-up splash screen. When the workspace is activated, PowerToys Workspaces tries to size and move the splash screen and fails, and the application then opens at the wrong size and in the wrong place!
### ✔️ Expected Behavior
Open the application, wait for the splash screen to close, and then move the application window to the required size and position.
### ❌ Actual Behavior
When the workspace is activated, PowerToys Workspaces tries to size and move the splash screen and fails, and the application then opens at the wrong size and in the wrong place!
### Other Software
_No response_ | Issue-Bug,Product-Workspaces | low | Major |
2,520,167,827 | vscode | Find in the multi-diff editor is limited to the focused file | The find in the multi-diff editor should be able to search across all the files in the editor.
I feel like this used to work, but I could be wrong. | feature-request,multi-diff-editor | low | Minor |
2,520,180,382 | pytorch | [feature request] Varlen indexing function for lookup and concat of varlen BPE tokens from a tensor vocab (i.e. `detokenize(...)` and arrays of strings) | ### 🚀 The feature, motivation and pitch
In BPE decoding we often have a tensor of token ids and a vocab storing an array of strings whose elements may have variable length. Decoding in the basic way corresponds to looping over the ids, retrieving each token's offset and size, and then indexing into the vocab and concatenating. In practice, this is useful for embedding a BPE detokenizer into a hermetic ONNX model and maybe slightly speeding up BPE detokenization (e.g. for ASR).
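For illustration, the basic lookup-and-concat loop can be sketched in plain Python over a flat byte vocab. This is a minimal sketch, not PyTorch code: the vocab layout (a flat byte buffer plus per-token lengths) mirrors the tensor representation used in the snippet below, and all names here are hypothetical.
```python
from itertools import accumulate

# Naive BPE detokenization: look up each token's byte span in a flat
# vocab buffer and concatenate the spans in order.
def detokenize(token_ids, vocab_bytes, vocab_lens):
    # offsets[i] is the start of token i; offsets[i] + vocab_lens[i] is its end
    offsets = [0] + list(accumulate(vocab_lens))
    return b"".join(
        vocab_bytes[offsets[i]:offsets[i] + vocab_lens[i]] for i in token_ids
    )

# Build the flat vocab from an array of (byte) strings.
tokens = [b"a", b"bc", b"defg", b"hi"]
vocab_bytes = b"".join(tokens)         # b"abcdefghi"
vocab_lens = [len(t) for t in tokens]  # [1, 2, 4, 2]

print(detokenize([1, 0, 1, 3], vocab_bytes, vocab_lens))  # b"bcabchi"
```
The proposed loopless function is essentially this loop, vectorized.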
If we have an NJT representing an array of strings, the hack below is simply indexing + flattening. Maybe this can give an impulse to representing strings / arrays of strings in bare PyTorch (maybe as NJT) and to how these can get lowered to ONNX / other simpler opsets if they do not support arrays of strings. Sometimes it might be good to expose publicly the lower-level NJT-like working functions (accepting/filling offsets/lengths).
I propose adding some function for doing this in a loopless way; such a function already exists in ONNX as https://github.com/onnx/onnx/blob/main/docs/Operators-ml.md#aionnxmlarrayfeatureextractor (although I'm not sure whether it supports string-valued tensors, cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @justinchuby, but it appears so).
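As a rough pure-Python model of what that ONNX op computes (this is my reading of the spec, so treat it as an assumption): ArrayFeatureExtractor is essentially a gather along the last axis, selecting the listed positions from each row.
```python
# Hypothetical reference semantics for ai.onnx.ml ArrayFeatureExtractor:
# select the listed positions along the last axis of a 2-D input.
def array_feature_extractor(x, indices):
    return [[row[j] for j in indices] for row in x]

print(array_feature_extractor([[1, 2, 3], [4, 5, 6]], [2, 0]))  # [[3, 1], [6, 4]]
```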
This is also closely related to the handling/encoding of varlen tensors like nested/jagged tensors. Ideally, it could be represented as some function indexing into an NJT that represents an array of strings, especially if we want to do it in a batched way, accepting a list of token id arrays as input and producing an array of varlen byte strings as output.
Even if such a higher-level function is not added, some lower-level / "plumbing" varlen+concat functions that would help construct index arrays might be useful: a batched varlen arange, or a batched slice.
This is related to:
- Standardized ways of representing/storing strings in tensors:
  - https://github.com/pytorch/pytorch/issues/101699
- repeat_interleave which can repeat with different lengths and concat
- batched arange (where start/end vary across the batch and even `end - start` can vary)
- batched varlen slicing + cat
- nested/jagged tensors (because both strings and tokens and even unicode characters are typically varlen)
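For reference, the proposed lookup-and-concat semantics can be sketched loop-wise in plain Python (the name `detokenize_ref` is hypothetical, not a proposed API; this mirrors ONNX ArrayFeatureExtractor followed by concatenation over a flat byte buffer):

```python
# Hypothetical reference semantics for the proposed op: the vocab is a flat
# byte buffer plus per-token lengths; decoding gathers each token's byte
# slice and concatenates. Pure-Python sketch only.
def detokenize_ref(token_ids, vocab_bytes, token_lens):
    # offsets[t] is where token t's bytes start in the flat vocab buffer
    offsets = [0]
    for n in token_lens:
        offsets.append(offsets[-1] + n)
    out = []
    for t in token_ids:
        out.extend(vocab_bytes[offsets[t]:offsets[t] + token_lens[t]])
    return out
```

A vectorized implementation would compute the same gather without the Python loop, which is what the tensor version below does.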
I implemented it with crazy hacks using `repeat_interleave`, `cumsum` and `gather` (in form of tensor indexing). But not sure if this gets exported well to ONNX. Hopefully this works now also for legacy torch.onnx.export:
- https://github.com/pytorch/pytorch/issues/128505
```python
# works only with a single, non-batched tensor of token ids
import torch
def bpedecode_loop(token_ids, token_utf8bytes, token_lens):
    inds = torch.cat((torch.zeros_like(token_lens[:1]), token_lens.cumsum(-1)))
    return torch.cat([token_utf8bytes[inds[i]:inds[i_p_1]] for i, i_p_1 in zip(token_ids, token_ids + 1)])

def bpedecode_vec(token_ids, token_utf8bytes, token_lens):
    inds_start = torch.cat((torch.zeros_like(token_lens[:1]), token_lens[:-1].cumsum(-1)))
    inds_end = inds_start + token_lens
    starts = inds_start[token_ids]
    ends = inds_end[token_ids]
    lens = token_lens[token_ids]
    ones = torch.ones_like(token_ids)
    starts_shifted = torch.cat([starts[:1], starts[1:] - ends[:-1] + 1])
    repeats = torch.stack([ones, lens - 1], dim = -1).flatten()
    i = torch.stack([starts_shifted, ones], dim = -1).flatten()
    I = i.repeat_interleave(repeats).cumsum(-1)
    return token_utf8bytes[I]

if __name__ == '__main__':
    token_ids = torch.tensor([1, 0, 1, 3], dtype = torch.int64)
    token_utf8bytes = torch.tensor([1, 17, 31, 2, 2, 2, 2, 3, 7], dtype = torch.uint8)
    token_lens = torch.tensor([1, 2, 4, 2], dtype = torch.int64)
    print('loop:', bpedecode_loop(token_ids, token_utf8bytes, token_lens))
    print(' vec:', bpedecode_vec(token_ids, token_utf8bytes, token_lens))
```
### Alternatives
_No response_
### Additional context
_No response_ | triaged,module: nestedtensor | low | Minor |
2,520,205,072 | pytorch | Multiprocessing with workers returning `torch.Tensor` and limiting of the number of tasks per worker (`maxtasksperchild=1`) hangs | ### 🐛 Describe the bug
When running the below code the python interpreter hangs:
```python
from multiprocessing import get_context

from torch import randn

def worker(i):
    result = randn(1)
    print(f'Task {i} result is {result}')
    # Doesn't work
    return result
    # # This works
    # return result.detach().numpy()

if __name__ == '__main__':
    # Doesn't work
    with get_context('fork').Pool(1, maxtasksperchild=1) as exe:
    # # This works
    # with get_context('fork').Pool(1, maxtasksperchild=None) as exe:
        results = exe.map(worker, range(10), chunksize=1)
```
The printouts to the terminal show that all the workers have completed, so the problem is related to collecting the data returned from the workers.
Note that if we return a `numpy.array` instead of a `torch.Tensor` or omit the `maxtasksperchild` parameter (its default value is `None`) everything works without hanging.
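A minimal sketch of the plain-data workaround (assumption: the hang is avoided whenever workers return plain Python/NumPy data instead of `torch.Tensor`; shown here with plain floats so the sketch runs without torch installed):

```python
from multiprocessing import get_context

def worker(i):
    # In the real code this would be e.g. randn(1).detach().numpy(),
    # converted back to a tensor in the parent process after map() returns.
    return [float(i)]

def run():
    # Same pool configuration as the repro; plain return values do not hang.
    with get_context('fork').Pool(1, maxtasksperchild=1) as exe:
        return exe.map(worker, range(10), chunksize=1)
```

The parent can then rebuild tensors with `torch.tensor(result)` once all results have been collected.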
### Versions
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-1260P
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 3
BogoMIPS: 4992.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.11.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.5.0
[pip3] torch==2.4.0
[pip3] torch-arima==0.0.4
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==2.3.0
[conda] _anaconda_depends 2024.02 py311_mkl_1
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.4 py311h08b1b3b_0
[conda] numpy-base 1.26.4 py311hf175353_0
[conda] numpydoc 1.5.0 py311h06a4308_0
[conda] pytorch 2.4.0 py3.11_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-arima 0.0.4 pypi_0 pypi
[conda] torchaudio 2.4.0 py311_cpu pytorch
[conda] torchvision 0.19.0 py311_cpu pytorch
[conda] triton 2.3.0 pypi_0 pypi
cc @VitalyFedyunin | module: multiprocessing,triaged | low | Critical |
2,520,261,375 | pytorch | Documentation for `dynamic_shapes` argument of `export` is unclear about namedtuples | ### 📚 The doc issue
In the [documentation](https://pytorch.org/docs/stable/export.html#torch.export.export) for `torch.export.export`, the following sentence describes the form of the `dynamic_shapes` argument:
> Arguments that are dicts or tuples / lists of tensors are recursively specified by using mappings or sequences of contained specifications.
I did not understand the terminology "sequences of contained specifications" until I dug into the implementation and understood how Pytree is used to represent the inputs and outputs of functions. As a result, I spent a long time learning how to mark a dimension of a particular field of a namedtuple as dynamic.
A small reproducible example of what I was attempting is below:
```python
import torch
from typing import NamedTuple

class Data(NamedTuple):
    x: torch.Tensor

torch.utils._pytree._register_namedtuple(Data, serialized_type_name="Data")

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(16, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, data: Data):
        x = data.x
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.sigmoid(x)
        return Data(x=x)

with torch.no_grad():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = Model().to(device=device)
    batch_dim = torch.export.Dim("batch", min=1, max=1024)
    so_path = torch.export.export(
        model,
        args=(),
        kwargs={"data": Data(x=torch.randn(8, 10, device=device))},
        # dynamic_shapes={"data": {"x": {0: batch_dim}}},  # ValueError: Node type mismatch; expected <function namedtuple at 0x7f7c46f5a700>, but got <class 'dict'>.
        # dynamic_shapes={"data.x.0": batch_dim},  # ValueError: Node keys mismatch; missing key(s): {'data'}; extra key(s): {'data.x.0'}.
        dynamic_shapes={"data": Data(x={0: batch_dim})}  # Works without failing. Yay!
    )
```
Because I misunderstood the documentation, my first attempts at expressing dynamism -- by using nested dictionaries and flattened keys -- failed with ValueErrors that did not very effectively guide me to restructure my `dynamic_shapes` argument. I wonder if this is an opportunity to improve the documentation, or simply an indication that namedtuples are not supported for exporting.
### Suggest a potential alternative/fix
Maybe we could add an example such as the one above to clarify how to mark certain fields of namedtuples as dynamic. Alternatively, we could reference the Pytree documentation to illustrate that "sequences of contained specifications" could refer to instances of previously-registered NamedTuple classes.
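To illustrate "sequences of contained specifications", here is a minimal pytree-style sketch (the `check_spec` helper is hypothetical, not the real torch implementation) of why the spec must mirror the input's container types rather than substitute a dict for a namedtuple:

```python
from typing import NamedTuple

class Data(NamedTuple):
    x: object

def check_spec(value, spec):
    """Recursively require the spec to mirror the input's container types."""
    if isinstance(value, tuple) and hasattr(value, '_fields'):  # namedtuple node
        if type(spec) is not type(value):
            raise ValueError(
                f'Node type mismatch; expected {type(value).__name__}, '
                f'but got {type(spec).__name__}.')
        for child_value, child_spec in zip(value, spec):
            check_spec(child_value, child_spec)
    # anything else is treated as a leaf spec and accepted

check_spec(Data(x=1), Data(x={0: 'batch'}))  # spec mirrors the structure: OK
try:
    check_spec(Data(x=1), {'x': {0: 'batch'}})  # dict instead of Data: rejected
except ValueError as e:
    print(e)
```

This matches the behavior observed above: the per-field dynamic-shape mapping is itself a leaf, but every container level above it must match the input's pytree node type.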
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,520,266,055 | flutter | Devicelab failure with log output "Failed to codesign" should be marked infra failure | Part of postmortem for https://github.com/flutter/flutter/issues/154881
For example: https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8737203335879841073/+/u/run_wide_gamut_ios/stdout
```
[2024-09-09 23:54:07.304784] [STDOUT] stdout: error: [ +8 ms] Target debug_unpack_ios failed: Exception: Failed to codesign /opt/s/w/ir/x/w/rc/tmpvakqwmos/flutter sdk/dev/integration_tests/wide_gamut_test/build/ios/Debug-iphoneos/Flutter.framework/Flutter with identity 6475AE66068783D9C7566E71522EA3915C7D6C9A.
[2024-09-09 23:54:07.304796] [STDOUT] stdout: /opt/s/w/ir/x/w/rc/tmpvakqwmos/flutter sdk/dev/integration_tests/wide_gamut_test/build/ios/Debug-iphoneos/Flutter.framework/Flutter: errSecInternalComponent
[2024-09-09 23:54:07.304806] [STDOUT] stdout:
[2024-09-09 23:54:07.304817] [STDOUT] stdout: #0 _signFramework (package:flutter_tools/src/build_system/targets/ios.dart:761:5)
```
| team-infra,P2,from: postmortem,triaged-infra | low | Critical |
2,520,284,212 | kubernetes | orphaned pod <uid> found, but failed to rmdir() volume at path ... | ### What happened?
After a k8s 1.28.9 node was rebooted the following errors started to be repeated in jounalctl:
`kubelet[2135]: E0911 16:27:10.436330 2135 kubelet_volumes.go:263] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"0a002fc3-b8f5-4a36-a8d2-e7a9c7bbe8ac\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/0a002fc3-b8f5-4a36-a8d2-e7a9c7bbe8ac/volumes/kubernetes.io~local-volume/local-pv-c20f968b: device or resource busy" numErrs=2`
The pod was running under a new uid and so both the old and new mounts were listed when I ran `mount | grep local-pv-c20f968b`:
```
/dev/mapper/apicSecureDisk on /var/lib/kubelet/pods/0a002fc3-b8f5-4a36-a8d2-e7a9c7bbe8ac/volumes/kubernetes.io~local-volume/local-pv-c20f968b type ext4 (rw,relatime)
/dev/mapper/apicSecureDisk on /data/secure/var/lib/kubelet/pods/0a002fc3-b8f5-4a36-a8d2-e7a9c7bbe8ac/volumes/kubernetes.io~local-volume/local-pv-c20f968b type ext4 (rw,relatime)
/dev/mapper/apicSecureDisk on /var/lib/kubelet/pods/3e64e4d8-f0b8-4031-a2c0-fa9cdcc86dd8/volumes/kubernetes.io~local-volume/local-pv-c20f968b type ext4 (rw,relatime)
/dev/mapper/apicSecureDisk on /data/secure/var/lib/kubelet/pods/3e64e4d8-f0b8-4031-a2c0-fa9cdcc86dd8/volumes/kubernetes.io~local-volume/local-pv-c20f968b type ext4 (rw,relatime)
```
In the new pod, any directories created inside the double-mounted volume were deleted within a few seconds of creation, so the new pod threw errors about those directories vanishing. To fix the error and the pod I ran `umount /var/lib/kubelet/pods/0a002fc3-b8f5-4a36-a8d2-e7a9c7bbe8ac/volumes/kubernetes.io~local-volume/local-pv-c20f968b`, after which the issue was resolved.
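The manual cleanup can be sketched as follows (an illustrative helper, assuming stale mounts are identified by comparing the pod UID embedded in each `/var/lib/kubelet/pods/<uid>/...` mount path against the set of currently running pod UIDs):

```python
import re

def find_stale_pod_mounts(mount_lines, active_pod_uids):
    """Return kubelet pod mount points whose pod UID is no longer active."""
    stale = []
    for line in mount_lines:
        # mount(8) output: "<device> on <mountpoint> type <fs> (<options>)"
        m = re.search(r' (/var/lib/kubelet/pods/([0-9a-f-]+)/\S+) ', line)
        if m and m.group(2) not in active_pod_uids:
            stale.append(m.group(1))
    return stale
```

Each returned path would then be passed to `umount`, after which kubelet can rmdir the orphaned pod directory on its own.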
### What did you expect to happen?
kubelet to unmount the orphaned pod directory without manual intervention
### How can we reproduce it (as minimally and precisely as possible)?
It does not happen every time, but it does seem to happen only on node reboot. Perhaps it only happens with local storage as well.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.28.9
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.9
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux apicdev4010 5.4.0-195-generic #215-Ubuntu SMP Fri Aug 2 18:28:05 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,lifecycle/rotten,needs-triage | low | Critical |
2,520,326,038 | flutter | Widget recursion does not provide sufficient error information after crashing. | ### Steps to reproduce
This issue is platform agnostic.
1. Create a `MaterialApp` (other app types would most likely work too, as this appears to be a stack overflow error that simply isn't caught by any analysis)
2. Create a widget containing a `Scaffold` whose body is taken from a `List<Widget>`, indexed at an arbitrary position.
3. Make sure the `List<Widget>` contains an instance of the widget class it is declared inside.
### Expected results
Flutter should throw a clearer error which states that a stack overflow is occuring, or perhaps throw a recursion error.
### Actual results
Flutter throws the unclear `Failed assertion error`. DartPad seems to properly throw and report a stack overflow to the user, however.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const App());
}

class App extends StatelessWidget {
  const App({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      debugShowCheckedModeBanner: false,
      home: Home(),
    );
  }
}

class Home extends StatefulWidget {
  @override
  State<Home> createState() => _HomeState();
}

class _HomeState extends State<Home> {
  int navigationIndex = 0;
  List<Widget> navigationPages = [Home()];

  @override
  Widget build(BuildContext build) {
    return Scaffold(
      body: navigationPages[navigationIndex],
    );
  }
}
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
[stacktrace.log](https://github.com/user-attachments/files/16967378/stacktrace.log)
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on macOS 15.1 24B5035e darwin-arm64, locale en-US)
• Flutter version 3.24.2 on channel stable at /Users/ryan/.development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (8 days ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/ryan/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Users/ryan/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode-beta.app/Contents/Developer
• Build 16B5001e
✗ CocoaPods not installed.
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
For installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Users/ryan/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.1)
• IntelliJ at /Users/ryan/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.21829.3
[✓] Connected device (4 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 15 (API 35) (emulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1 24B5035e darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1 24B5035e darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.137
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,a: error message,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Critical |
2,520,338,910 | go | x/telemetry/internal/telemetry: update `ProgramInfo` heuristic for go1.24 | In go1.24, the go command will start to stamp the main module version in the build info
even when the binary is built with `go build`. There are cases where the binary built
in this way can have different dependencies while having the same version string stamped
as the release versions. Adjust the heuristic to detect the case.
https://github.com/golang/telemetry/blob/6d9f2eb83631e39f6d0b87dd8411b8f5d7fac38e/internal/telemetry/proginfo.go#L27
cc @golang/telemetry | NeedsInvestigation,telemetry | low | Minor |
2,520,367,678 | flutter | [go_router] DropdownMenu behind NavigationBar | ### Steps to reproduce
1. Run build runner
2. Set the phone to landscape
3. Open DropdownMenu
### Expected results
The DropdownMenu should hover over the NavigationBar, or start from the NavigationBar up. It should behave similarly to when a DropdownMenu is near the AppBar, it never hovers over it.
### Actual results
The DropdownMenu appears behind the NavigationBar, hiding some options.
### Code sample (1) `shell_route_example.dart`
<details open><summary>Code sample</summary>
```dart
// Copyright 2013 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// ignore_for_file: public_member_api_docs, unreachable_from_main

import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

part 'shell_route_example.g.dart';

void main() => runApp(App());

class App extends StatelessWidget {
  App({super.key});

  @override
  Widget build(BuildContext context) => MaterialApp.router(
        routerConfig: _router,
      );

  final GoRouter _router = GoRouter(
    routes: $appRoutes,
    initialLocation: '/notifications',
  );
}

class HomeScreen extends StatelessWidget {
  const HomeScreen({super.key});

  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: const Text('Test')),
      );
}

@TypedShellRoute<MyShellRouteData>(
  routes: <TypedRoute<RouteData>>[
    TypedGoRoute<NotificationRouteData>(path: '/notifications'),
    TypedGoRoute<BarRouteData>(path: '/bar'),
  ],
)
class MyShellRouteData extends ShellRouteData {
  const MyShellRouteData();

  @override
  Widget builder(
    BuildContext context,
    GoRouterState state,
    Widget navigator,
  ) {
    return MyShellRouteScreen(child: navigator);
  }
}

class NotificationRouteData extends GoRouteData {
  const NotificationRouteData();

  @override
  Widget build(BuildContext context, GoRouterState state) {
    return const NotificationScreen();
  }
}

class BarRouteData extends GoRouteData {
  const BarRouteData();

  @override
  Widget build(BuildContext context, GoRouterState state) {
    return const BarScreen();
  }
}

class MyShellRouteScreen extends StatelessWidget {
  const MyShellRouteScreen({required this.child, super.key});

  final Widget child;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      extendBody: true,
      body: child,
      appBar: AppBar(title: const Text('Test')),
      bottomNavigationBar: NavigationBar(
        backgroundColor: Colors.blue.withOpacity(0.5),
        labelBehavior: NavigationDestinationLabelBehavior.alwaysHide,
        onDestinationSelected: (int index) {
          if (index == 0) {
            context.go('/notifications');
          } else if (index == 1) {
            context.go('/bar');
          }
        },
        indicatorColor: Colors.amber,
        destinations: const <Widget>[
          NavigationDestination(
            selectedIcon: Icon(Icons.home),
            icon: Icon(Icons.home_outlined),
            label: 'Home',
          ),
          NavigationDestination(
            icon: Badge(child: Icon(Icons.notifications_sharp)),
            label: 'Notifications',
          ),
        ],
      ),
    );
  }
}

class NotificationScreen extends StatelessWidget {
  const NotificationScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return const Card(
      shadowColor: Colors.transparent,
      margin: EdgeInsets.all(8.0),
      child: SizedBox.expand(
        child: Center(
          child: Text('Home page'),
        ),
      ),
    );
  }
}

class BarScreen extends StatelessWidget {
  const BarScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return ListView.builder(
      itemCount: 20,
      itemBuilder: (BuildContext context, int index) {
        if (index == 10) {
          return DropdownMenu(
            menuStyle: const MenuStyle(
              visualDensity: VisualDensity.standard,
              alignment: AlignmentDirectional.topStart,
            ),
            requestFocusOnTap: false,
            expandedInsets: EdgeInsets.zero,
            dropdownMenuEntries: List.generate(
              4,
              (index) {
                return DropdownMenuEntry(
                  label: 'Entry $index',
                  value: 'Entry $index',
                );
              },
            ),
          );
        } else {
          return Text(
            'Hello $index',
          );
        }
      },
    );
  }
}
```
</details>
### Code sample (1.1) `shell_route_example.g.dart`
<details open><summary>Code sample</summary>
```dart
// GENERATED CODE - DO NOT MODIFY BY HAND

part of 'shell_route_example.dart';

// **************************************************************************
// GoRouterGenerator
// **************************************************************************

List<RouteBase> get $appRoutes => [
      $myShellRouteData,
    ];

RouteBase get $myShellRouteData => ShellRouteData.$route(
      factory: $MyShellRouteDataExtension._fromState,
      routes: [
        GoRouteData.$route(
          path: '/notifications',
          factory: $FooRouteDataExtension._fromState,
        ),
        GoRouteData.$route(
          path: '/bar',
          factory: $BarRouteDataExtension._fromState,
        ),
      ],
    );

extension $MyShellRouteDataExtension on MyShellRouteData {
  static MyShellRouteData _fromState(GoRouterState state) =>
      const MyShellRouteData();
}

extension $FooRouteDataExtension on NotificationRouteData {
  static NotificationRouteData _fromState(GoRouterState state) =>
      const NotificationRouteData();

  String get location => GoRouteData.$location(
        '/notifications',
      );

  void go(BuildContext context) => context.go(location);

  Future<T?> push<T>(BuildContext context) => context.push<T>(location);

  void pushReplacement(BuildContext context) =>
      context.pushReplacement(location);

  void replace(BuildContext context) => context.replace(location);
}

extension $BarRouteDataExtension on BarRouteData {
  static BarRouteData _fromState(GoRouterState state) => const BarRouteData();

  String get location => GoRouteData.$location(
        '/bar',
      );

  void go(BuildContext context) => context.go(location);

  Future<T?> push<T>(BuildContext context) => context.push<T>(location);

  void pushReplacement(BuildContext context) =>
      context.pushReplacement(location);

  void replace(BuildContext context) => context.replace(location);
}
```
</details>
### Screenshots or Video with simple go_router_builder
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
-------------------------------------------------------------
### UPDATE: The bug occurs with simple go_router.
### Code sample
<details open><summary>Code sample</summary>
```dart
// Copyright 2013 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// ignore_for_file: public_member_api_docs, unreachable_from_main

import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

void main() => runApp(App());

class App extends StatelessWidget {
  App({super.key});

  @override
  Widget build(BuildContext context) => MaterialApp.router(
        routerConfig: _router,
      );

  final GoRouter _router = GoRouter(
    initialLocation: '/notifications',
    routes: [
      ShellRoute(
        builder: (context, state, child) {
          return MyShellRouteScreen(child: child);
        },
        routes: [
          GoRoute(
            path: '/notifications',
            builder: (context, state) => const NotificationScreen(),
          ),
          GoRoute(
            path: '/bar',
            builder: (context, state) => const BarScreen(),
          ),
        ],
      ),
    ],
  );
}

class MyShellRouteScreen extends StatelessWidget {
  const MyShellRouteScreen({required this.child, super.key});

  final Widget child;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      extendBody: true,
      body: child,
      appBar: AppBar(title: const Text('Test')),
      bottomNavigationBar: NavigationBar(
        backgroundColor: Colors.blue.withOpacity(0.5),
        labelBehavior: NavigationDestinationLabelBehavior.alwaysHide,
        onDestinationSelected: (int index) {
          if (index == 0) {
            context.go('/notifications');
          } else if (index == 1) {
            context.go('/bar');
          }
        },
        indicatorColor: Colors.amber,
        destinations: const <Widget>[
          NavigationDestination(
            selectedIcon: Icon(Icons.home),
            icon: Icon(Icons.home_outlined),
            label: 'Home',
          ),
          NavigationDestination(
            icon: Badge(child: Icon(Icons.notifications_sharp)),
            label: 'Notifications',
          ),
        ],
      ),
    );
  }
}

class NotificationScreen extends StatelessWidget {
  const NotificationScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return const Card(
      shadowColor: Colors.transparent,
      margin: EdgeInsets.all(8.0),
      child: SizedBox.expand(
        child: Center(
          child: Text('Home page'),
        ),
      ),
    );
  }
}

class BarScreen extends StatelessWidget {
  const BarScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return ListView.builder(
      itemCount: 20,
      itemBuilder: (BuildContext context, int index) {
        if (index == 10) {
          return DropdownMenu(
            menuStyle: const MenuStyle(
              visualDensity: VisualDensity.standard,
              alignment: AlignmentDirectional.topStart,
            ),
            requestFocusOnTap: false,
            expandedInsets: EdgeInsets.zero,
            dropdownMenuEntries: List.generate(
              4,
              (index) {
                return DropdownMenuEntry(
                  label: 'Entry $index',
                  value: 'Entry $index',
                );
              },
            ),
          );
        } else {
          return Text(
            'Hello $index',
          );
        }
      },
    );
  }
}
```
</details>
### Screenshots or Video with simple go_router
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
--------------------------------------------------------------------------------------------------
### UPDATE: The bug does not occur when using a shell layout without go_router
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() => runApp(App());

class App extends StatelessWidget {
  App({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyShellRouteScreen(),
    );
  }
}

class MyShellRouteScreen extends StatefulWidget {
  const MyShellRouteScreen({super.key});

  @override
  _MyShellRouteScreenState createState() => _MyShellRouteScreenState();
}

class _MyShellRouteScreenState extends State<MyShellRouteScreen> {
  int _selectedIndex = 0;

  final List<Widget> _pages = const [
    NotificationScreen(),
    BarScreen(),
  ];

  void _onItemTapped(int index) {
    setState(() {
      _selectedIndex = index;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('Test')),
      body: _pages[_selectedIndex],
      bottomNavigationBar: NavigationBar(
        backgroundColor: Colors.blue.withOpacity(0.5),
        labelBehavior: NavigationDestinationLabelBehavior.alwaysHide,
        selectedIndex: _selectedIndex,
        indicatorColor: Colors.amber,
        onDestinationSelected: _onItemTapped,
        destinations: const [
          NavigationDestination(
            selectedIcon: Icon(Icons.home),
            icon: Icon(Icons.home_outlined),
            label: 'Home',
          ),
          NavigationDestination(
            icon: Badge(child: Icon(Icons.notifications_sharp)),
            label: 'Notifications',
          ),
        ],
      ),
    );
  }
}

class NotificationScreen extends StatelessWidget {
  const NotificationScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return const Card(
      shadowColor: Colors.transparent,
      margin: EdgeInsets.all(8.0),
      child: SizedBox.expand(
        child: Center(
          child: Text('Home page'),
        ),
      ),
    );
  }
}

class BarScreen extends StatelessWidget {
  const BarScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return ListView.builder(
      itemCount: 20,
      itemBuilder: (BuildContext context, int index) {
        if (index == 10) {
          return DropdownMenu<String>(
            menuStyle: const MenuStyle(
              visualDensity: VisualDensity.standard,
              alignment: AlignmentDirectional.topStart,
            ),
            requestFocusOnTap: false,
            expandedInsets: EdgeInsets.zero,
            dropdownMenuEntries: List.generate(4, (index) {
              return DropdownMenuEntry(
                label: 'Entry $index',
                value: 'Entry $index',
              );
            }),
            onSelected: (value) {
              // Handle selection
            },
          );
        } else {
          return ListTile(
            title: Text('Hello $index'),
          );
        }
      },
    );
  }
}
```
</details>
### Screenshots or Video without go_router
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on Ubuntu 24.04.1 LTS 6.8.0-44-generic, locale en_US.UTF-8)
• Flutter version 3.24.1 on channel stable at /home/user/fvm/versions/3.24.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (3 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /../.../ANDROID_SDK
• Platform android-34, build-tools 34.0.0
• Java binary at: /home/user/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✗] Linux toolchain - develop for Linux desktop
✗ clang++ is required for Linux development.
It is likely available from your distribution (e.g.: apt install clang), or can be downloaded from https://releases.llvm.org/
✗ CMake is required for Linux development.
It is likely available from your distribution (e.g.: apt install cmake), or can be downloaded from https://cmake.org/download/
✗ ninja is required for Linux development.
It is likely available from your distribution (e.g.: apt install ninja-build), or can be downloaded from https://github.com/ninja-build/ninja/releases
✗ pkg-config is required for Linux development.
It is likely available from your distribution (e.g.: apt install pkg-config), or can be downloaded from
https://www.freedesktop.org/wiki/Software/pkg-config/
[✓] Android Studio (version 2024.1)
• Android Studio at /home/user/android-studio
• Flutter plugin version 81.0.2
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = /home/user/android-studio
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.93.0)
• VS Code at /usr/share/code
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• Android SDK built for x86 64 (mobile) • emulator-5554 • android-x64 • Android 6.0 (API 23) (emulator)
• Linux (desktop) • linux • linux-x64 • Ubuntu 24.04.1 LTS 6.8.0-44-generic
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.137
[✓] Network resources
• All expected network resources are available.
```
</details>
| framework,f: material design,package,a: layout,has reproducible steps,P2,p: go_router,team-design,triaged-design,found in release: 3.24,found in release: 3.26 | low | Critical |
2,520,376,657 | go | x/playground: simplify development and testing | The playground development setup requires Docker, and as a result there is relatively little test coverage using `go test` (and therefore very little coverage in our default CI). [CL 612456](https://go.dev/cl/612456) is an example of a small change where a test would have been nice, but was not feasible due to friction and time constraints. Proof: in https://go.dev/cl/549015, I had actually started this fix for a related issue, and stalled because it was a low priority and I felt a test was warranted.
As a de-facto-but-superficial maintainer of the playground, I think this friction is holding back playground fixes and other improvements. From first principles, we should be able run the playground without Docker, and in doing so should be able to write end-to-end tests.
I think this is a good candidate for a friction fixit week. | NeedsInvestigation,Friction | low | Minor |
2,520,397,140 | flutter | [google_maps_flutter_platform_interface] Test entire `CameraUpdate.toJson` results | We currently test only the first element that identifies the type of CameraUpdate. Ideally we would test the entire array returned, instead. | team,p: maps,package,team-ecosystem,P3,triaged-ecosystem | low | Minor |
2,520,477,370 | bitcoin | RPC: Populate a PSBT input with a UTXO not in wallet/mempool/utxo set | ### Please describe the feature you'd like to see added.
If a user wants to make a chain of two transactions and sign the child *without being able to get the parent in their mempool*, I do not believe there is a way for this to be done natively in the PSBT RPCs. As the UTXO is not in the mempool/wallet/utxo set, it cannot be found for signing/finalization, unless injected manually.
Note that this functionality seems to exist in the raw transaction path via the "prevtx" argument for signing, via:
bitcoin-cli signrawtransactionwithwallet <tx_hex> '[{"txid": "<txid_hex>", "vout": 0, "scriptPubKey": "<spk_hex>", "amount": "<amt>"}]'
### Is your feature related to a problem, if so please describe it.
_No response_
### Describe the solution you'd like
One thought is an additional optional argument for `utxoupdatepsbt` to inject a UTXO from (an array of) serialized previous transactions. The PSBT Input's prevout can be found in that list, and the PSBT Input populated with the requisite UTXO.
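A hypothetical shape for such a call, mirroring the `signrawtransactionwithwallet` style above (the second argument below does not exist today and is shown only to illustrate the proposed injection of serialized previous transactions):

```
bitcoin-cli utxoupdatepsbt "<psbt_base64>" '["<parent_tx_hex>"]'
```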
### Describe any alternatives you've considered
_No response_
### Please leave any additional context
_No response_ | Feature | low | Minor |
2,520,515,328 | TypeScript | Generated d.ts imports package that is not installed on current project | ### 🔎 Search Terms
"d.ts unresolved modules import"
### 🕗 Version & Regression Information
Typescript v5.5.4
### ⏯ Playground Link
_No response_
### 💻 Code
Here is a basic reproduction: https://github.com/baptisteArno/zod-d-ts-issue
1. pnpm install
2. pnpm turbo build --filter=package-three...
3. See error because schema is of type any
### 🙁 Actual behavior
The generated d.ts for package-two imports from `zod`, a package that is not installed in package-two.
This means that the types we get when importing things from package-two have the inferred type `any`.
### 🙂 Expected behavior
I would expect the d.ts file to instead imports from installed packages only.
In that very specific case, I would expect it to generate:
```ts
export declare const schema: typeof import("package-one").schema
```
instead of
```ts
export declare const schema: import("zod").ZodObject<
{
id: import("zod").ZodString;
name: import("zod").ZodString;
},
"strip",
import("zod").ZodTypeAny,
{
id: string;
name: string;
},
{
id: string;
name: string;
}
>;
```
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,520,528,581 | vscode | Tab list gets corrupted when a new webview is opened (extension API) | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: Works <= 1.89.1, broken >= 1.90.2
- OS Version: Linux x64 6.1.0-23-amd64
We develop an extension in which an existing tab gets replaced by a webview. We simply store a reference to the existing tab, open a webview and then close the original tab. This has worked fine up to version 1.89.1. On version >= 1.90.2 closing the tab using `vscode.window.tabGroups.close` fails with the exception `Tab close: Invalid tab not found!`. The tab has not been closed in the meantime nor did we get any close notifications for it via `vscode.window.tabGroups.onDidChangeTabs`.
Steps to Reproduce:
1. Create an extension with the following function exposed as command:
```typescript
async dummy(): Promise<void> {
const tab = vscode.window.tabGroups.activeTabGroup.activeTab;
if (!tab) {
console.log("Please open a tab to test this code.");
return;
}
const panel = vscode.window.createWebviewPanel(
"TestPanel",
"Test",
tab.group.viewColumn);
panel.webview.html = "<body>Hello World</body>";
// Introduce a tiny delay
await panel.webview.postMessage("hello");
vscode.window.tabGroups.close(tab);
}
```
2. Open some tab and call the command
3. Check the debug console for the `Tab close: Invalid tab not found!` exception.
| bug,workbench-tabs,confirmed,regression | low | Critical |
2,520,569,212 | pytorch | [Inductor] Device-specific padding settings | ### 🚀 The feature, motivation and pitch
### Problem
In the past, Inductor only used padding on GPU devices. In order to support other devices such as MTIA, recent PRs such as #133939 and #135280 have made padding configurable via Inductor's global `config.py`. While this approach works, it would be cleaner if each device could provide its own padding configuration as opposed to setting 4-5 global configs before running Inductor.
One pain point that illustrates the issue is `config.disable_padding_cpu`. By default, Inductor enables padding only on GPU devices. We disable this behavior when we want to pad on MTIA, essentially allowing padding on all devices, and not just MTIA. We don't currently have a way to tell PyTorch to pad specifically on MTIA, since MTIA's backend is closed source, so PyTorch is not aware of it.
### Proposal
To get around this, I was hoping to make some common interface where each device can provide information to Inductor about what kind of code it should make. I'm aware of two places where this is currently done:
- [Device Interface](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/device_interface.py#L36) - exposes constants such as how many processors the device has, e.g. the number of CPU cores or GPU SMs.
- [Backend Feature](https://github.com/pytorch/pytorch/blob/a2cb9b7331524eb0d9e62b38c57d38d8725cbc1b/torch/_inductor/codegen/common.py#L163) - I'm less familiar with this, but it seems to say whether specific patterns are available for a given backend. E.g., can we do a sort. This is currently implemented as an `Enum`. Note this backend-specific (e.g. Halide vs Triton), not device-specific (e.g. H100 vs A100).
Neither of these seems to be a perfect fit for the padding config. I like that `BackendFeature` is internal to Inductor, whereas `DeviceInterface` is shared with other parts of PyTorch which don't need to know about padding. On the other hand, `BackendFeature` is an `Enum`, so it's not expressive enough to represent padding configs which tend to be integers. Also, we will probably want to configure padding differently per device, as opposed to per backend. (For example, `CudaCombinedScheduling` is used for all generations of NV GPUs, which might want different padding settings.)
Right now, I'm leaning towards adding a new API to `DeviceInterface` to expose device-specific Inductor settings. But I'm open to other options. Is there a better way?
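A rough sketch of what that could look like. Everything here is hypothetical — the method name, return shape, and resolution logic are illustrative, not an existing PyTorch API:

```python
# Hypothetical: DeviceInterface grows a hook that Inductor consults instead
# of global config flags like disable_padding_cpu.
class DeviceInterface:
    @staticmethod
    def inductor_padding_config() -> dict:
        # Default: no device-specific overrides; defer to global settings.
        return {}


class MTIAInterface(DeviceInterface):
    @staticmethod
    def inductor_padding_config() -> dict:
        # An out-of-tree backend opts into padding without PyTorch
        # needing to know the backend exists.
        return {"comprehensive_padding": True}


def should_pad(interface: type, global_default: bool) -> bool:
    # Inductor-side resolution: a device override wins over the global flag.
    cfg = interface.inductor_padding_config()
    return cfg.get("comprehensive_padding", global_default)
```

With this shape, `should_pad(MTIAInterface, False)` returns `True` while devices without an override keep Inductor's global default.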
### Alternatives
### Option 1: bury heuristics inside of Inductor
Rather than exposing this to `DeviceInterface`, we could bury these heuristics within Inductor, such as
```
if isinstance(scheduling, MTIATritonScheduling):
    padding_config = {
        "comprehensive_padding": True,
    }
```
This is undesirable for closed source backends like MTIA, since it doesn't make much sense for PyTorch to have code dealing with a backend which it can't import. It's also less scalable since Inductor's codebase would need to handle all possible hardware backends. It would be cleaner if we could expose these options via some common interface that out-of-tree backends can implement themselves.
### Option 2: use existing DeviceProperties
`DeviceInterface` already has access to `DeviceProperties`, which provides similar hardware-specific parameters. The problem is that PyTorch does not seem to control these settings, since they are queried from CUDA. So we can't easily extend them.
https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1g1bf9d625a931d657e08db2b4391170f0
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Major |
2,520,575,445 | godot | Blendshapes aren't cleared from MeshInstance3D when mesh is changed. | ### Tested versions
Reproducible:
- v4.4.dev2.official [97ef3c837]
- v4.3.stable.official [77dcf97d8]
...Likely earlier too.
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4090 (NVIDIA; 31.0.15.3699) - AMD Ryzen 9 7950X 16-Core Processor (32 Threads)
### Issue description
When changing MeshInstance3D mesh without clearing it first, the blendshapes of the previous mesh remain.
Before:
<img width="383" alt="before" src="https://github.com/user-attachments/assets/0d5c19ff-0101-442e-9819-1e458a5114ce">
After changing:
<img width="376" alt="after" src="https://github.com/user-attachments/assets/fa43937f-58bb-46e4-8a52-f1f00dadb5f4">
Thus some errors are spammed:
> scene/3d/mesh_instance_3d.cpp:158 - Index p_blend_shape = 0 is out of bounds ((int)blend_shape_tracks.size() = 0).
**Edit**: As I'm writing this I realized I forgot to test what happens when the mesh is changed to another mesh with blendshapes; will report shortly.
**Edit2:** As expected, changing to another mesh with blendshapes doesn't work either; the blendshapes of the previous mesh remain.
### Steps to reproduce
Create a MeshInstance3D, attach a mesh with blendshapes to it, then change the mesh to another one (without clearing the mesh property).
### Minimal reproduction project (MRP)
[meshswapissue.zip](https://github.com/user-attachments/files/16968957/meshswapissue.zip)
| bug,topic:3d | low | Critical |
2,520,593,224 | deno | import.meta.resolve() is being wrongly blocked when used by a JSR package | Version: Deno 1.46.3
**Description:**
The current behavior of JSR (JavaScript Registry) prevents the usage of `import.meta.resolve()` when resolving a URL (like HTTP(S)) within a JSR package. However, `import.meta.resolve()` is not used for dynamic imports but simply for resolving the import URL.
JSR correctly disallows direct imports from HTTP(S) for security reasons, but this restriction should not apply to `import.meta.resolve()` since it's only resolving the URL and not executing the import.
**Steps to reproduce:**
1. Add an entry in your import map that resolves to an HTTP endpoint ("resource/xpto.ts")
2. Publish any JSR package that contains the following code:
```typescript
import.meta.resolve("resource/xpto.ts");
```
3. Check the output, and you will receive the following block message:
```
Importing https://example.com/resource/xpto.ts blocked. JSR packages cannot import non-JSR remote modules for security reasons.
```
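For reference, step 1 assumes an import map entry along these lines (illustrative; the mapping is chosen to match the blocked URL in the message above):

```json
{
  "imports": {
    "resource/": "https://example.com/resource/"
  }
}
```

With that mapping, `import.meta.resolve("resource/xpto.ts")` resolves to `https://example.com/resource/xpto.ts` without ever importing it.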
**Expected Behavior:**
`import.meta.resolve()` should be allowed to resolve HTTP(S) URLs without raising a security block since it is not dynamically importing the URL but merely resolving it.
**Actual Behavior:**
JSR blocks the resolution and shows a security message, even though no actual dynamic import is being performed.
| bug,jsr | low | Minor |
2,520,626,631 | rust | E0529 wrong infinite suggestion to add `as_deref` | ### Code
```Rust
use std::io::ErrorKind;
use std::borrow::Cow;
fn f(x: Result<i32, Vec<ErrorKind>>) -> bool {
matches!(x, Err([ErrorKind::NotFound]))
}
```
### Current output
```Shell
error[E0529]: expected an array or slice, found `Vec<ErrorKind>`
--> src/lib.rs:5:21
|
5 | matches!(x, Err([ErrorKind::NotFound]))
| ^^^^^^^^^^^^^^^^^^^^^ pattern cannot match with input type `Vec<ErrorKind>`
|
help: consider using `as_deref` here
|
5 | matches!(x.as_deref(), Err([ErrorKind::NotFound]))
| +++++++++++
```
### Desired output
no suggestion to use `as_deref`
### Rationale and extra context
1. `as_deref` doesn't change the meaning of the code.
2. When applied, rustc will suggest to apply it again, which is absurd and confusing.
### Other cases
```Rust
matches!(x.as_deref().as_deref().as_deref(), Err([ErrorKind::NotFound]))
```
will suggest
```
help: consider using `as_deref` here
|
5 | matches!(x.as_deref().as_deref().as_deref().as_deref(), Err([ErrorKind::NotFound]))
| +++++++++++
```
### Rust Version
```Shell
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
### Anything else?
_No response_
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":null}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-diagnostics,T-compiler | low | Critical |
2,520,629,014 | go | html/template: template Parse/Execute escaping race | Template.Execute and Template.Parse can race, causing template escaping state to become out-of-sync.
Template.Parse allows re-parsing a template, but is documented as not being callable after the first call to Template.Execute. It checks this by seeing if the template has been escaped (by inspecting Template.namespace.escaped). When Template.Execute is first called, it will check if the template is escaped, and escapes it if it was not already. It then sets Template.namespace.escaped to indicate the template is escaped.
Template.namespace.escaped is protected with a lock, but Template.Parse doesn't hold this lock for the duration of its execution, taking the lock to initially check Template.namespace.escaped and then taking it again later to populate the re-parsed templates. If Template.Execute is called between these two steps, it takes the lock, escapes the template and sets Template.namespace.escaped. Template.Parse will then re-take the lock, overwrite the escaped template, but not unset Template.namespace.escaped, causing subsequent calls to Template.Execute to execute an unescaped template.
Since this requires concurrent calls to Parse and Execute, and can only be triggered on the initial call to Execute, this is extremely hard to exploit, but it is cleanly incorrect behavior.
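The sequential half of this contract is easy to observe; a minimal sketch (the data race itself only manifests when Parse and Execute overlap, so it is not reproduced deterministically here):

```go
package main

import (
	"fmt"
	"html/template"
	"io"
)

func main() {
	t := template.Must(template.New("t").Parse(`<p>{{.}}</p>`))

	// The first Execute escapes the template and marks the namespace escaped.
	if err := t.Execute(io.Discard, "<b>hi</b>"); err != nil {
		panic(err)
	}

	// A sequential Parse after Execute is rejected by the escaped check.
	// The race described above arises when a concurrent Execute lands
	// between Parse's two lock acquisitions, bypassing this check.
	if _, err := t.Parse(`<p>{{.}}!</p>`); err == nil {
		panic("expected Parse after Execute to be rejected")
	}
	fmt.Println("Parse after Execute rejected")
}
```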
This issue was initially reported to us by Jakob Ackermann. | NeedsDecision | low | Minor |
2,520,630,456 | langchain | None type not checked before adding UsageMetadata value in AIMessageChunk when using LLM streaming | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
client = ChatOpenAI(
api_key=API_KEY,
base_url=PORTKEY_GATEWAY_URL,
streaming=streaming,
default_headers=portkey_headers,
model=api_model_id,
temperature=options.temperature,
n=options.n,
max_tokens=options.maxTokens,
)
messages = [HumanMessage(content='Some question')]
client.stream(messages)
```
### Error Message and Stack Trace (if applicable)
| File "/Users/user/app/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 411, in stream
| raise e
| File "/Users/user/app/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 402, in stream
| generation += chunk
| File "/Users/user/app/.venv/lib/python3.12/site-packages/langchain_core/outputs/chat_generation.py", line 100, in __add__
| message=self.message + other.message,
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~
| File "/Users/user/app/.venv/lib/python3.12/site-packages/langchain_core/messages/ai.py", line 308, in __add__
| return add_ai_message_chunks(self, other)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/user/app/.venv/lib/python3.12/site-packages/langchain_core/messages/ai.py", line 360, in add_ai_message_chunks
| usage_metadata_["total_tokens"] += other.usage_metadata["total_tokens"]
| TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'
+------------------------------------
### Description
I'm using the ChatOpenAI class to stream LLM output through OpenAI-compatible API endpoints. In my case, when calling an Anthropic model (and possibly others), an exception is thrown since `other.usage_metadata["total_tokens"]` is None.
```python
# Token usage
if left.usage_metadata or any(o.usage_metadata is not None for o in others):
usage_metadata_: UsageMetadata = left.usage_metadata or UsageMetadata(
input_tokens=0, output_tokens=0, total_tokens=0
)
for other in others:
if other.usage_metadata is not None:
usage_metadata_["input_tokens"] += other.usage_metadata["input_tokens"]
usage_metadata_["output_tokens"] += other.usage_metadata[
"output_tokens"
]
usage_metadata_["total_tokens"] += other.usage_metadata["total_tokens"]
usage_metadata: Optional[UsageMetadata] = usage_metadata_
else:
usage_metadata = None
```
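Stripped of LangChain, the failure reduces to adding `None` to an `int` in the accumulated usage dict — a minimal illustration (the dict literals below are stand-ins for `UsageMetadata`, not LangChain code):

```python
# Stand-in for the accumulated UsageMetadata on the left-hand chunk.
usage = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
# Stand-in for a provider chunk that reports total_tokens as None.
other = {"input_tokens": 5, "output_tokens": 7, "total_tokens": None}

try:
    usage["total_tokens"] += other["total_tokens"]  # int += None
except TypeError as exc:
    print(type(exc).__name__)  # TypeError, as in the traceback above
```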
I think we should check for None values before attempting to add to the existing UsageMetadata like so:
```python
# Token usage
if left.usage_metadata or any(o.usage_metadata is not None for o in others):
usage_metadata_: UsageMetadata = left.usage_metadata or UsageMetadata(
input_tokens=0, output_tokens=0, total_tokens=0
)
for other in others:
if other.usage_metadata is not None:
if other.usage_metadata.get("input_tokens") is not None:
usage_metadata_["input_tokens"] += other.usage_metadata["input_tokens"]
if other.usage_metadata.get("output_tokens") is not None:
usage_metadata_["output_tokens"] += other.usage_metadata["output_tokens"]
if other.usage_metadata.get("total_tokens") is not None:
usage_metadata_["total_tokens"] += other.usage_metadata["total_tokens"]
usage_metadata: Optional[UsageMetadata] = usage_metadata_
else:
usage_metadata = None
```
### System Info
langchain-openai version: ^0.1.23
Platform: mac
python version: 3.12.0 | 🤖:bug,investigate,Ɑ: core | low | Critical |
2,520,641,329 | flutter | Proposal: Remove engine top-level orchestrator builders that only kick off and collect sub-builds (no generators), and hoist sub-build to "standalone" | Some "top-level" orchestrator builds only kick off sub-builds and then do nothing but wait for the results. [`Mac mac_unopt`](https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/mac_unopt.json) is an example of this:

Note there are no "Global generators" or tests.
GitHub currently only allows reruns on these top-level builds when there's a failure in a sub-build. In the case of `mac_unopt` that means all the sub-builds are re-run, even though most of them succeeded ([example](https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20mac_unopt/8656/overview)). Rerunning all of them ties up CI resources and makes the rerun longer if the failing test isn't the long pole of the top-level build's total time. It also means the top-level orchestrator build idles a bot while it waits on the sub-builds to finish.
[`mac_ios_engine`](https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20mac_ios_engine/34575/overview) is an example of a build that can't be trivially replaced by its sub-builds (like `ci/ios_debug_sim`) because it has generators that operate on the archives:
https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/mac_ios_engine.json#L488C6-L488C16
This proposes removing all top-level builds that only kick off subbuilds, and hoisting the sub-builds into the .ci.yaml.
- [x] [Linux clangd](https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/linux_unopt_debug_no_rbe.json) https://github.com/flutter/engine/pull/55186
- [x] Linux mac_clangd https://github.com/flutter/engine/pull/56014
- [ ] Linux linux_android_emulator*
- [x] Linux linux_android_emulator_skia_tests https://github.com/flutter/engine/pull/55186
- [x] [Linux linux_android_emulator_skia_tests_34](https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/linux_android_emulator_skia_34.json) https://github.com/flutter/engine/pull/55186
- [ ] [Linux linux_arm_host_engine](https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/linux_arm_host_engine.json#L6)
- [ ] [Linux linux_fuchsia_tests](https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/linux_fuchsia_tests.json)
- [ ] [Linux linux_host_desktop_engine](https://github.com/flutter/engine/blob/38e37ce9879b2efe4890a8d09f4207dc5686a1bd/ci/builders/linux_host_desktop_engine.json#L6)
- [ ] linux_unopt
- [ ] linux_web_engine
- [ ] local_engine
- [ ] mac_android_aot_engine
- [ ] mac_unopt
- [ ] windows_android_aot_engine
- [ ] windows_arm_host_engine
- [ ] windows_arm_host_engine
cc @zanderso @jtmcdole | engine,c: proposal,P1,team-engine,triaged-engine | medium | Critical |
2,520,642,947 | ui | [bug]: shadcn add EACCES: permission denied, scandir | ### Describe the bug
Having a 'data' docker/podman folder at the same level of components,
```
app/
data/
components/ui
tailwind.config.ts
tsconfig.json
components.json
[...]
```
Gives a :
```
EACCES: permission denied, scandir '${SOMEPATH}/next-project/data'
```
Even if I run `shadcn add button -p ./components/ui`
This also fails on `shadcn init` if the data folder was previously created/used.
### Affected component/components
CLI add/init
### How to reproduce
shadcn init
EACCES: permission denied, scandir ...SOMEPATH
shadcn add button -p ./components/ui
✔ Checking registry.
✔ Installing dependencies.
⠋ Updating files.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
EACCES: permission denied, scandir ...SOMEPATH
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Selinux
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,520,643,575 | go | build: ios trybot builders failing | Trybot builders for ios are failing consistently on https://build.golang.org/.
Example log from https://build.golang.org/log/38129923c6f886f82063cab77355caee3747497a :
```
ios-arm64-corellium at 1dfb33e8612d20f41cf4e034d9d0838abf75e04b
...
Building Go cmd/dist using /var/root/go-ios-arm64-bootstrap. (go1.20.6 ios/arm64)
found packages main (build.go) and building_Go_requires_Go_1_22_6_or_later (notgo122.go) in /tmp/workdir-host-ios-arm64-corellium-ios/go/src/cmd/dist
build failed: make script failed: exit status 1
```
This looks related to #64751 . Assigning to @dmitshur | Builders,NeedsInvestigation,OS-iOS | low | Critical |
2,520,644,735 | godot | Can't disable layers if layer 1 is active when doing reimport of gltf/glb | ### Tested versions
Godot v4.4.dev2
### System information
Fedora Linux 40 (KDE Plasma) - Wayland - Vulkan (Forward+)
### Issue description
If I try to disable all layers except layer 1 when doing a reimport, it doesn't disable them.
[Screencast_20240911_213756.webm](https://github.com/user-attachments/assets/fe01ab27-8827-4f56-adb6-25895f2b1022)
In the video you can see that I can disable layer 3 if I keep layer 1 and 2 active. But then I can't disable layer 2 as long as layer 1 is active.
### Steps to reproduce
Double click on glb file
Select MeshInstance3D and enable more layers than layer 1, do a reimport
Reopen the dialog and disable all layer but number 1, and click reimport
Reopen and see that all layers are still active
### Minimal reproduction project (MRP)
[layer.zip](https://github.com/user-attachments/files/16969806/layer.zip)
| needs testing,topic:import | low | Minor |
2,520,700,777 | PowerToys | Fancy zones assignment based on hardware ID | ### Description of the new feature / enhancement
I have three monitors, two of which are the same. For the latter, I have two separate fancy-zone profiles, but they keep swapping around (i.e. the profile on the left monitor (L) will be set to the right monitor (R) and vice versa).
Would it be possible to implement a setting for fancy zones, such that you can assign custom profiles to specific monitors (based on some hardware ID for example).
### Scenario when this would be used?
When you set a custom profile for a certain monitor, those settings should be stored. Then, at startup, it should be verified that the custom profiles the user assigned to each monitor are actually applied correctly.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,520,702,758 | flutter | Uploaded engine iOS scenario test Xcode xcresults unzip with full absolute paths | The iOS scenario test uploads Xcode xcresults on test failures. As of https://github.com/flutter/engine/pull/53717 and https://github.com/flutter/engine/pull/55093 the unzipped xcresult directory structure includes the full absolute path:
```
/path/to/unzipped/Volumes/Work/s/w/ir/cache/builder/src/out/ci/ios_debug_unopt_sim/scenario_app/Scenarios/ios_scenario_xcresult2Jzijm/ios_scenario.xcresult
```
See
https://github.com/flutter/engine/pull/41647/files#diff-5633b6caaf35e2384ccc2ad55a507ac26a5f7eb05a49a66fb36acd51f9c04ba8R54-R55
| a: tests,c: regression,engine,P2,fyi-infra,team-ios,triaged-ios | low | Critical |
2,520,709,647 | godot | [3.6] Project settings Audio category and subcategory incorrectly displaying / interacting. | ### Tested versions
Reproducible: Godot 3.6 Stable, 3.6 beta 5
Not reproducible: Godot 3.5.3
### System information
Linux Mint 21.3
### Issue description
The Project Settings -> Audio category collapsible acts as a button with settings; the General subsection below it is not found in Godot 3.5.3. Those are 3.6-only features, like the text-to-speech option.

Settings remain correct and reachable, but found out because search bar navigates correctly to the "hidden" section.
### Steps to reproduce
Any project should be able to access Project settings -> Audio
### Minimal reproduction project (MRP)
Any project should be able to access Project settings -> Audio | bug,topic:editor,usability | low | Critical |