| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,489,912,860 | vscode | Diff view does not remember the scroll position | Testing #226665
Repro steps:
1. Open diff view with changes to many cells
2. Scroll down to the bottom most cell
3. Open a different diff view with another notebook
4. Come back to the first one | bug,polish,notebook-diff,multi-diff-editor | low | Minor |
2,489,914,840 | vscode | SCM Graph - What does historyItemGroupHoverLabelForeground do? | Testing #226648
Based on this screenshot after running the generated theme from current settings command, I would have expected something in the hover to have a white foreground color?
<img width="1226" alt="Screenshot 2024-08-27 at 9 53 52 AM" src="https://github.com/user-attachments/assets/a4562a28-1ee1-4a99-92dc-fd3a2c0c2481">
| polish,scm | low | Minor |
2,489,917,394 | vscode | Multi File Diff Editor: Hover Messages are cut off | Testing #226665
Testing on latest VS Code Insiders on Windows 11 Insiders
```
Version: 1.93.0-insider (user setup)
Commit: ff7a154d5e5e9034914f0466420f0f1407f0c95e
Date: 2024-08-27T05:04:20.235Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Windows_NT x64 10.0.25987
```
I was surprised I could add a breakpoint. Additionally, the hover gets cut off:

| bug,multi-diff-editor | low | Minor |
2,489,934,531 | vscode | SCM Graph - Should selecting an item in the scm graph highlight the lines similar to indent guides in the editor? | Testing #226648
Current:
<img width="483" alt="Screenshot 2024-08-27 at 10 09 04 AM" src="https://github.com/user-attachments/assets/00610c55-97b8-46a2-87fb-c3bafa9bdfa0">
Basic idea:

This will allow easier tracking of the line(s) when scrolling | ux,scm,under-discussion | low | Minor |
2,489,934,810 | vscode | Avoid inlining large types | Testing #226669
This is just a personal pet peeve of mine: when you have a property like `folderOptions: { /* lots of properties */ }[]`, and I have a function where I want to refer to the type of a single `folderOptions` item, I don't have an interface that I can use as the type of that item, e.g. `function searchOneFolder(folderOptions: ???) { }`. It's much nicer to give a name to that kind of thing and write something like `folderOptions: TextSearchFolderOptions[]` | bug,search,search-api | low | Minor |
2,489,958,105 | vscode | SCM Graph - Should merging main use the same color? | Testing #226648
Related: https://github.com/microsoft/vscode/issues/226821
Should the yellow here be blue, since it represents the same thing as the vertical line until it's merged?
<img width="262" alt="Screenshot 2024-08-27 at 10 20 31 AM" src="https://github.com/user-attachments/assets/fdd7ad5c-3e36-44b7-ae24-f70f275d6489">
Proposal:
<img width="163" alt="Screenshot 2024-08-27 at 10 22 15 AM" src="https://github.com/user-attachments/assets/30f195ad-840d-4a1d-b182-30267a011c9a">
| scm,under-discussion | low | Minor |
2,489,966,587 | vscode | SCM Graph - Support collapsing commits belonging to merge commits | Testing #226648
I haven't used this sort of graph much, but it seems like collapsing these types of commits would be useful. Maybe even by default?
Current:
<img width="569" alt="Screenshot 2024-08-27 at 10 25 09 AM" src="https://github.com/user-attachments/assets/9be94889-de55-4c07-a53c-2a16451eca6d">
Proposal:

Maybe with a > or + to expand them inline? | feature-request,scm | low | Minor |
2,489,975,539 | vscode | `FindFiles2OptionsNew.useIgnoreFiles` is quite complicated | Testing #226670
I'm not quite sure how to express this, but the `useIgnoreFiles` option is quite complex and I'm not sure if it's necessary to be so complex. If I understand its purpose correctly, the `findFiles2New` API will by default do some filtering on top of the disk. In other words, it will respect whatever settings are defined in the workspace or defined by the user w.r.t. ignoring files.
Then, `useIgnoreFiles` is a mechanism to ask the vscode API to ignore the settings and be 100% transparent and just return whatever the disk has.
My question is: how could an extension author know if they should set `useIgnoreFiles.parent` to false? How about `useIgnoreFiles.global`? These two settings seem to be extremely personal to the user that has decided to check out the repository in a folder where they define an ignore file. Or maybe they have a global ignore file. But how could an extension author know what kind of setup they're dealing with?
So I'd suggest simplifying the API: allow extension authors to opt either for the "on disk" no-filters file search or for the "user configured" filtered one. Also, I think a better name for such a simplified setting might be `respectUserDefinedFilters`, `respectConfigurationFilters` or `disableFilters`, `disableConfigurationFilters`...
**Edit**: Looking at `useExcludeSettings`, the same situation appears. How should I, as an extension author, know which value to use: `ExcludeSettingOptions.FilesExclude` vs `ExcludeSettingOptions.SearchAndFilesExclude`? And why isn't there an `ExcludeSettingOptions.SearchExclude`?
I feel that the API is quite complex and I'm not sure if such complexity is required. Maybe you have use-cases in mind for all these settings? | api,under-discussion,search-api | low | Minor |
2,489,980,896 | material-ui | [material-ui] defaultShouldForwardProp is not a function error in plasmo extension | ### Steps to reproduce
Steps:
1. create a browser extension with plasmo `npm create plasmo --with-src`
2. add mui according to the docs `npm install @mui/material @emotion/react @emotion/styled`
3. add `Button` or `Typography` to `src/popup.tsx`
4. run `npm run dev`
5. open the extension and check console:
```
Uncaught TypeError: defaultShouldForwardProp is not a function
at createStyled (emotion-styled-base.esm.js:154:3)
at styled (index.js:15:16)
at styled (createStyled.js:141:26)
at 7un50.react (Typography.js:40:9)
at newRequire (popup.7d3dc21e.js:72:24)
at localRequire (popup.7d3dc21e.js:85:35)
at hctgw../Typography.js (index.js:3:1)
at newRequire (popup.7d3dc21e.js:72:24)
at localRequire (popup.7d3dc21e.js:85:35)
at 2wxYu../colors/index.js (index.js:268:1)
```
### Current behavior
App doesn't render, blank screen is presented
### Expected behavior
Components being rendered
### Context
Render mui components in my browser extension
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Browser:
Google Chrome
System:
OS: macOS 14.5
Binaries:
Node: 22.5.1 - /opt/homebrew/bin/node
npm: 10.8.2 - /opt/homebrew/bin/npm
pnpm: 9.6.0 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: 123.0.6312.59
Edge: Not Found
Safari: 17.5
npmPackages:
@emotion/react: ^11.13.3 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/core-downloads-tracker: 6.0.0
@mui/icons-material: ^6.0.0 => 6.0.0
@mui/material: ^6.0.0 => 6.0.0
@mui/private-theming: 6.0.0
@mui/styled-engine: 6.0.0
@mui/system: 6.0.0
@mui/types: 7.2.16
@mui/utils: 6.0.0
@types/react: 18.2.48 => 18.2.48
react: 18.2.0 => 18.2.0
react-dom: 18.2.0 => 18.2.0
typescript: 5.3.3 => 5.3.3
```
</details>
**Search keywords**: defaultShouldForwardProp is not a function error | external dependency,package: material-ui | low | Critical |
2,489,992,063 | go | go/format: possible internal error in format.Node when source contains //go:build directive | Reproducer:
```go
func TestBuildDirectiveFormat(t *testing.T) {
const src = "package A\nimport()\nfunc A(){0//go:build\n0}"
fs := token.NewFileSet()
f, err := parser.ParseFile(fs, "test.go", src, parser.ParseComments|parser.SkipObjectResolution)
if err != nil {
t.Fatal(err)
}
if err := printer.Fprint(io.Discard, fs, f); err != nil {
t.Fatal(err) // no error
}
if err := format.Node(io.Discard, fs, f); err != nil {
t.Fatal(err) // format.Node internal error (8:5: expected ';', found 0 (and 1 more errors))
}
}
```
`printer.Fprint` does not return an error for the same source, only the `format.Node` errors. It happens here:
https://github.com/golang/go/blob/994d1d446663873dd593846a0b94147410e5922a/src/go/format/format.go#L76-L80
CC @griesemer | NeedsInvestigation | low | Critical |
2,489,997,553 | flutter | Devtools release blocked because Flutter reports 0.0.0 version | (internal link) https://ci.chromium.org/ui/p/dart-internal/builders/flutter/devtools/975/infra
May be related to: https://github.com/flutter/flutter/issues/142521 | team-infra,P1,triaged-infra,fyi-tool | medium | Minor |
2,489,997,762 | vscode | SCM Graph - add support for async filtering | Testing #226648
If I open the scm graph and filter it, it ends up not showing anything for "terminal":
<img width="557" alt="Screenshot 2024-08-27 at 10 43 57 AM" src="https://github.com/user-attachments/assets/f1e5ba26-4047-4366-a906-98b1e2feee27">
I can remove the filter and scroll down a bunch to load it in, which gives me some results. Could we load in items when filtering if the viewport isn't full? | feature-request,scm | low | Minor |
2,490,024,213 | flutter | API proposal: `WidgetStatePropertyOr<T>` | **TL;DR** at the bottom 🙂
<br>
# Problem
The current `WidgetStateProperty` API is fantastic—there's a [Decoding Flutter video](https://youtu.be/CylXr3AF3uU) with a great explanation of why—but it's also messy.
<br>
For every parameter that could potentially be a WidgetStateProperty, there are 3 possible types, as noted in [a recent Design Doc](https://flutter.dev/go/using-widget-state-property). Using the `Color` class as an example:
1. Some APIs only accept `Color`s
2. Some APIs only accept `WidgetStateProperty<Color>` objects
3. Some APIs can handle either a `Color` or a `WidgetStateProperty<Color>` being passed
<br>
Number 3 is possible thanks to an additional class declaration:
```dart
class WidgetStateColor extends Color implements WidgetStateProperty<Color> {
// ...
}
```
Unfortunately, there's a long list of problems with classes like this one.
<br>
## Inaccurate documentation
#### copy/paste typos
- https://github.com/flutter/flutter/pull/151935
We needed the PR shown above because repeatedly copy/pasting near-identical logic & documentation is a slog: it's boring to write and boring to review, so mistakes are more likely to slip through.
> [!NOTE]
> At the time of writing, https://github.com/flutter/flutter/pull/151935 has landed in master but not stable, so the [api.flutter.dev docs](https://api.flutter.dev/flutter/widgets/WidgetStateTextStyle-class.html) are still showing inaccurate documentation.
#### `const` constructors
```dart
/// To define a `const` [WidgetStateTextStyle], you'll need to extend
/// [WidgetStateTextStyle] and override its [resolve] method.
```
Technically, that was never true, since global functions and static methods can be referenced in a constant context:
```dart
class MyWidget extends StatelessWidget {
const MyWidget({super.key});
static TextStyle _favoriteStyle(Set<WidgetState> states) {
if (states.contains(WidgetState.selected)) {
return const TextStyle(color: Color(0xFF00FFFF));
}
return TextStyle(
color: states.contains(WidgetState.disabled) ? Colors.black26 : Colors.black,
);
}
@override
Widget build(BuildContext context) {
// defining a const WidgetStateTextStyle without extending the class!
const widgetStateTextStyle = WidgetStateTextStyle.resolveWith(_favoriteStyle);
// ...
}
}
```
And on top of that, recently I was able to land a PR that adds another `const factory` constructor to WidgetStateTextStyle:
```dart
const widgetStateTextStyle = WidgetStateTextStyle.fromMap({
WidgetState.focused: TextStyle(color: Colors.blue, fontWeight: FontWeight.bold),
WidgetState.disabled: TextStyle(color: Colors.grey),
WidgetState.any: TextStyle(color: Colors.black),
});
```
But even after the PR had been in review for over 2 months, neither I nor the reviewers noticed that the docs needed to be changed, since "why would anyone want to read through all of that?"
<br>
One more question about `const` constructors: why does `WidgetStateTextStyle` have a `const factory`, but `WidgetStateColor` doesn't?
```dart
/// If used as a regular color, the color resolved in the default state (the
/// empty set of states) will be used.
factory WidgetStateColor.resolveWith(WidgetPropertyResolver<Color> callback) = _WidgetStateColor;
```
```dart
/// If used as a regular text style, the style resolved in the default state (the
/// empty set of states) will be used.
const factory WidgetStateTextStyle.resolveWith(WidgetPropertyResolver<TextStyle> callback) = _WidgetStateTextStyle;
```
Answer: the [`WidgetStateTextStyle.resolveWith()`](https://main-api.flutter.dev/flutter/widgets/WidgetStateTextStyle/WidgetStateTextStyle.resolveWith.html) documentation is wrong.
```dart
final Color color = WidgetStateColor.resolveWith((_) => Colors.red);
print(color.value == Colors.red.value); // true
const emptyStyle = TextStyle();
const redText = TextStyle(color: Colors.red);
final TextStyle style = WidgetStateTextStyle.resolveWith((_) => redText);
print(style.color == redText.color); // false
print(style.color == emptyStyle.color); // true
```
<br>
## Unsafe types
The `num` class creates a beautiful type hierarchy.
<p align="center">
<img
src="https://github.com/user-attachments/assets/e6940aab-3478-40cf-9d7f-bb93e5cc117c"
alt="sealed num"
width="50%"
/>
</p>
```dart
sealed class num {}
class int implements num {}
class double implements num {}
```
<br>
Imagine if this hierarchy were flipped upside down:
<p align="center">
<img
src="https://github.com/user-attachments/assets/49a2357a-3656-4ed0-af05-50a7fc0dff3f"
alt="upside down hierarchy"
width="50%"
/>
</p>
```dart
class int {}
class double {}
/// The [num] class allows a [double] to be used anywhere that accepts an [int],
/// but this should only be done with classes that explicitly support
/// both integer and non-integer values.
class num extends int implements double {}
class MyClass {
const MyClass(this.decimal, this.integer, this.either);
/// A floating-point decimal.
final double decimal;
/// You could pass a [num] here, but it might cause an error!
final int integer;
/// This can be either an integer or a decimal.
/// Passing an [int] works fine, but for a [double],
/// you have to type out `num(3.14)` in order to avoid a syntax error.
final int either;
}
```
<br>
Replace `int`, `double`, and `num` with `Color`, `WidgetStateProperty<Color>` and `WidgetStateColor` respectively, and that's what we're currently working with.
From the `WidgetStateColor` docs:
```dart
/// [WidgetStateColor] should only be used with widgets that document
/// their support, like [TimePickerThemeData.dayPeriodColor].
```
<br>
It'd be great if, instead of `WidgetStateColor` / `WidgetStateTextStyle` / `WidgetStateMouseCursor` / `WidgetStateBorderSide`, we had 1 generic type to cover all the bases.
It'd also be great to do something about the type hierarchy:
<p align="center">
<img
src="https://github.com/user-attachments/assets/4d864e2a-3e78-4061-b6ca-429a11c0780f"
alt="WidgetStateColor hierarchy"
width="75%"
/>
</p>
<br>
# Solution
```dart
typedef WidgetStatePropertyOr<T> = T | WidgetStateProperty<T>;
```
Much like how `num` is a 100% straightforward way to accept either an `int` or a `double`, the `WidgetStatePropertyOr` type would allow complete flexibility between e.g. `Color` and `WidgetStateProperty<Color>`.
Currently, we have 2 ways to create a `ButtonStyle`:
```dart
const ButtonStyle({
WidgetStateProperty<Color?>? backgroundColor,
WidgetStateProperty<Color?>? foregroundColor,
});
static ButtonStyle styleFrom({
Color? backgroundColor,
Color? foregroundColor,
}) {
// ...
}
```
But `WidgetStatePropertyOr<T>` makes it easy to consolidate:
```dart
const ButtonStyle({
WidgetStatePropertyOr<Color?>? backgroundColor,
WidgetStatePropertyOr<Color?>? foregroundColor,
})
```
This has 2 huge benefits:
1. Along with the several flaws listed above, `WidgetStateColor` implements the `WidgetStateProperty<Color>` interface, so it doesn't have the added flexibility of a nullable `WidgetStateProperty<Color?>`. Instead of defining a new `WidgetStateNullableColor` type, we can just add a `?` to the existing type argument.
2. This creates a meaningful distinction between `T` and `WidgetStatePropertyAll<T>`:
a. `Colors.blue`: make it blue, but default to gray if disabled
b. `const WidgetStatePropertyAll(Colors.blue)`: make it blue no matter what
<br>
### Caveat
The ability to specify a union type `WidgetStatePropertyOr<T> = T | WidgetStateProperty<T>` is amazing, since it implicitly applies to every class declaration.
Or, another great option would be an "implicit" constructor:
```dart
sealed class WidgetStatePropertyOr<T> {
implicit const factory WidgetStatePropertyOr.fromValue(T value) = ValueProperty<T>;
T resolve(Set<WidgetState> states);
}
class ValueProperty<T> implements WidgetStatePropertyOr<T> {
const ValueProperty(this.value);
final T value;
@override
T resolve(Set<WidgetState> states) => value;
}
abstract class WidgetStateProperty<T> implements WidgetStatePropertyOr<T> {
// no changes necessary for existing WidgetStateProperty API
}
```
But as of yet, neither is supported by Dart syntax.
- https://github.com/dart-lang/language/issues/83
- https://github.com/dart-lang/language/issues/108
If somebody is interested in working with the Dart team to implement one of these features, that'd be awesome. ~~Or if someone on the Dart team opens up a Patreon~~ never mind about that idea… maybe we can try to get 500 `👍` reactions on those dart-lang issues!
<br><br>
**TL;DR**: the goal here is to deprecate & remove `WidgetStateColor`, `WidgetStateBorderSide`, `WidgetStateTextStyle`, etc. in favor of a more useful generic type. The ideal solution would be something like
```dart
typedef WidgetStatePropertyOr<T> = T | WidgetStateProperty<T>;
```
…but it requires [a language feature](https://github.com/dart-lang/language/issues/83) that we don't currently have. | dependency: dart,a: annoyance,P2,c: tech-debt,team-framework,triaged-framework,dependency:dart-triaged | low | Critical |
2,490,037,104 | pytorch | DISABLED test_fused_sdp_choice_privateuseone (__main__.TestSDPAPrivateUse1Only) | Platforms: asan, dynamo, linux, rocm, win, windows, mac, macos
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | triaged,skipped | high | Critical |
2,490,056,181 | TypeScript | Bloomberg TS 5.6 Beta Feedback | ### Acknowledgement
- [X] I acknowledge that issues using this template may be closed without further explanation at the maintainer's discretion.
### Comment
We are in the process of evaluating the impact of TS 5.6 beta on Bloomberg code. Below are preliminary findings.
Overall, the only new diagnostics produced by this release are the excellent improvements to nullish/truthy checks.
Change | Impacts | Release notes | Packages impacted
-- | -- | -- | --
New nullish/truthy diagnostics | Type checker | [Disallowed Nullish and Truthy Checks](https://devblogs.microsoft.com/typescript/announcing-typescript-5-6-beta/#disallowed-nullish-and-truthy-checks) | ~one dozen
### Stricter type-checking of partial objects
We observed new errors related to nullish/truthy checks, all of which look like correct diagnostics. A few simplified examples are below:
```ts
const err = "bar";
// Right operand of ?? is unreachable because the left operand is never nullish.(2869)
const errorMessage = "foo " + err ?? "Unknown error";
```
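The usual fix for the first snippet (assuming the author wanted the fallback to apply to `err`, not to the concatenation) is to parenthesize, since `+` binds tighter than `??`:
```js
// Sketch of the likely intent: without parentheses, `??` sees the
// already-non-nullish result of the concatenation, so its right operand
// is dead code. Parenthesizing applies the fallback to `err` itself.
const err = "bar";
const errorMessage = "foo " + (err ?? "Unknown error");
console.log(errorMessage); // "foo bar"
```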
```ts
// Right operand of ?? is unreachable because the left operand is never nullish.(2869)
const isActive = !someCondition ?? true;
```
```ts
// This kind of expression is always truthy.(2872)
if (myCheck === "foo" || "bar") {
}
``` | Discussion | low | Critical |
2,490,071,417 | node | add `path.mimeType()` and `path.charset()` to `node:path` | ### What is the problem this feature will solve?
Adding `path.mimeType()` and `path.charset()` would remove the dependency on an external library, mime-types, for getting the content type of a file based on its extension. This can be done by Node.js itself, just as Bun.js does with [`Bun.file(file).type`](https://bun.sh/docs/api/file-io#reading-files-bun-file).
### What is the feature you are proposing to solve the problem?
I am proposing to add two new methods to the path module:
- `mimeType` - returns the MIME type of a file based on its extension.
- `charset` - returns the charset of a MIME type.
```js
path.mimeType('file.js'); // returns 'application/javascript'
// or:
path.mimeType(path.extname('file.js')); // returns 'application/javascript'
```
And:
```js
path.charset('file.js'); // returns 'utf-8'
// or:
path.charset(path.mimeType('file.js')); // returns 'utf-8'
```
### What alternatives have you considered?
Alternatively, instead of being in `node:path`, these could live in `fs.stat`, but in the end the type can be derived from the path without having to inspect the file, and dedicated functions mean the work is done only when they are actually called.
CC: @vdeturckheim | path,feature request | low | Minor |
2,490,096,293 | vscode | `vscode.ExcludeSettingOptions.None` does not seem to work |
Test extension:
<details>
```
import * as vscode from 'vscode';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as assert from 'assert';
export function activate(context: vscode.ExtensionContext) {
const disposable = vscode.commands.registerCommand('extension.helloWorld', async () => {
const wf = vscode.workspace.workspaceFolders![0].uri;
await vscode.workspace.getConfiguration('files').update('exclude', {
'**/*.fileexclude.*': true,
'**/fileexclude/**': true,
}, vscode.ConfigurationTarget.Workspace);
await vscode.workspace.getConfiguration('search').update('exclude', {
'**/*.searchexclude.*': true,
'**/searchexclude/**': true,
}, vscode.ConfigurationTarget.Workspace);
await makeAllFiles([
{ name: '.gitignore', content: '**/bar.txt' },
'foo.txt',
'bar.txt',
'fileexclude/foo.txt',
'fileexclude/bar.txt',
'searchexclude/foo.txt',
'searchexclude/bar.txt',
'nested1/foo.txt',
'nested1/bar.txt',
'nested2/foo.txt',
'nested2/bar.txt',
]);
let output = '';
output += assertUris(
['foo.txt', 'fileexclude/foo.txt', 'nested1/foo.txt', 'nested2/foo.txt', 'searchexclude/foo.txt'],
await vscode.workspace.findFiles2New([new vscode.RelativePattern(wf, '**/*.txt')], { useExcludeSettings: vscode.ExcludeSettingOptions.None })
);
const doc = await vscode.workspace.openTextDocument({
content: output,
language: 'plaintext',
});
await vscode.window.showTextDocument(doc);
});
function assertUris(expected: string[], uris: vscode.Uri[]) {
try {
const actual = uris.map(u => path.relative(vscode.workspace.workspaceFolders![0].uri.fsPath, u.fsPath));
assert.deepStrictEqual(actual.sort(), expected.sort());
} catch (e) {
return String(e) + '\n\n';
}
return `Case OK (${JSON.stringify(expected)})\n`;
}
async function makeAllFiles(files: (string | {name: string, content: string})[]) {
const wf = vscode.workspace.workspaceFolders![0].uri.fsPath;
for (const file of files) {
const name = typeof file === 'string' ? file : file.name;
const content = typeof file === 'string' ? '' : file.content;
const target = path.join(wf, name);
await fs.mkdir(path.dirname(target), { recursive: true });
await fs.writeFile(target, content);
}
}
context.subscriptions.push(disposable);
}
```
</details>
Fails with:
```
AssertionError [ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected
[
+ 'foo.txt',
+ 'nested1/foo.txt',
+ 'nested2/foo.txt'
- 'fileexclude/foo.txt',
- 'foo.txt',
- 'nested1/foo.txt',
- 'nested2/foo.txt',
- 'searchexclude/foo.txt'
]
```
I expected `{ useExcludeSettings: vscode.ExcludeSettingOptions.FilesExclude }` would not use my search exclude setting (only `SearchAndFilesExclude` would.)
| bug,search,search-api | low | Critical |
2,490,105,242 | godot | Continously printing text to the console while paused | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1050 with Max-Q Design (NVIDIA; 31.0.15.4680) - Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz (8 Threads)
### Issue description
Godot will continue printing text to the console when the scene tree is paused.
The text should stop being printed to the console while the tree is paused.
It can even continue printing after the game window has been closed. The only way to fully close the game is to press the stop button in the editor.
### Steps to reproduce
1). Create a node and have it call the print method under the process function.
2). Create a node that pauses the game based on user control, make sure to set its process mode to always.
3). Run the scene
4). Wait until the output text counter has reached around 10,000 digits.
I'm unsure how to consistently reproduce the case where it continues printing and keeps the game from fully closing.
### Minimal reproduction project (MRP)
Open the world scene, run the project and then follow the instructions written on the label.
[PrintingBugReproduction.zip](https://github.com/user-attachments/files/16767504/PrintingBugReproduction.zip)
| discussion,topic:editor | low | Critical |
2,490,105,542 | material-ui | [Switch] Not visible in high contrast mode | ### Steps to reproduce
Link to live example: (required)
Every single switch on https://mui.com/material-ui/react-switch/
Steps:
1. Visit the link above in a chrome browser
2. In chrome dev tools, open the "rendering" tab in the console drawer and scroll down to `Emulate CSS media feature forced-colors`. Set it to `forced-colors: active`
3. Note that the switches are not visible in any demo on the docs page.

### Current behavior
Switches are not visible, although the pointer cursor does show when you hover on them
### Expected behavior
Switches should be visible when forced colors mode is turned on
### Context
Trying to fix some accessibility issues in high contrast mode and although I might be able to override the styles on my own, it would be better for this to work out of the box.
### Your environment
Please note that although I'm using v5 right now, this issue is reproducible on mui.com using the latest version option available.
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Linux 6.7 Debian GNU/Linux 12 (bookworm) 12 (bookworm)
Binaries:
Node: 20.16.0 - /usr/local/bin/node
npm: 10.8.1 - /usr/local/bin/npm
pnpm: Not Found
Browsers:
Chrome: 128.0.6613.84
npmPackages:
@emotion/react: ^11.13.0 => 11.13.0
@emotion/styled: ^11.13.0 => 11.13.0
@mui/base: 5.0.0-beta.31
@mui/core-downloads-tracker: 5.15.4
@mui/icons-material: ^5.15.4 => 5.15.4
@mui/material: ^5.15.4 => 5.15.4
@mui/private-theming: 5.15.4
@mui/styled-engine: 5.15.4
@mui/styled-engine-sc: ^5.11.11 => 5.11.11
@mui/system: 5.15.4
@mui/types: 7.2.13
@mui/utils: 5.15.4
@mui/x-data-grid: 6.18.7
@mui/x-data-grid-pro: ^6.18.7 => 6.18.7
@mui/x-date-pickers: 6.19.0
@mui/x-date-pickers-pro: ^6.19.0 => 6.19.0
@mui/x-license-pro: ^6.10.2 => 6.10.2
@types/react: ^18.3.1 => 18.3.3
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
styled-components: ^6.1.12 => 6.1.12
typescript: ^5.5.3 => 5.5.3
```
</details>
**Search keywords**: forced-color, forced color | accessibility,component: switch,package: material-ui,design | low | Major |
2,490,109,509 | ant-design | [App] Component cannot be given some attributes native to HTML elements | ### What problem does this feature solve?
For example, with `component` set to `html`: trying to add a `dataset` attribute or a `lang` attribute has no effect, and TypeScript reports an error.
Although `data-xxx` attributes can be set without an error, they have no effect either.
> Type "{ children: Element; component: string; lang: true; }" is not assignable to type "IntrinsicAttributes & AppProps<AnyObject>".
Property "lang" does not exist on type "IntrinsicAttributes & AppProps<AnyObject>". ts(2322)
## Like the code below
```
<App component='html' lang='en' data-xxx='abc'>
....
</App>
```
### What does the proposed API look like?
In short, App should be more accommodating; otherwise attributes native to some elements cannot be set.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Major |
2,490,127,998 | vscode | References to "Default search behavior" are confusing | Testing #226669
`TextSearchQueryNew` has some notes like this
```
* If explicitly contains a newline character (`\n`), the default search behavior
* will automatically enable {@link isMultiline}.
```
If I'm implementing a search provider, this is a bit confusing. I might think it means that `isMultiline` will be enabled for me in this case, but that's not true; I think it's just describing how the builtin ripgrep search provider works.
Also, since there's no other way to trigger a multiline search in the vscode UI, should that behavior be moved into vscode?
```
* If using the default search provider, this will be interpreted case-insensitively
* if {@link isCaseSensitive} is `false` or not set.
```
This is just explaining what `isCaseSensitive` means, that seems unnecessary.
```
* If using the default search provider, this can be affected by the `search.smartCase` setting.
* See the setting description for more information.
```
Am I expected to read that setting? If we're going to keep it, it seems like it should be part of the query with the other options. But honestly, I think we could delete that setting...
```
* If enabled, the default search provider will check for boundary characters
* (regex pattern `\b`) surrounding the {@link pattern} to see whether something
* is a word match.
```
Similar here, saying "the default search provider" makes it sound like it's some special behavior, but it's just describing what the flag means. | bug,search,search-api | low | Minor |
2,490,136,238 | svelte | Localization support via the svelte compiler. | ### Describe the problem
I've been thinking of ways to simplify the localization part of our website.
And I think svelte is in a particularly good position to make localization a lot simpler for devs to implement.
### Describe the proposed solution
My idea is as follows:
Let's say the Svelte compiler can scan your files for instances where text needs to be translated.
My example would be a rune called: ```$t```.
Example 1:
```ts
<div>{$t`Hello world!`}</div>
```
Example 2:
```ts
const pageTitle = $t`Hello world`
```
Svelte would be able to auto-generate key-value-based language files.
Based on all the instances it saw text that needed to be translated.
The benefits would be huge:
- It could do some analysis to figure out when and where to load the translation files to get the best performance. (Maybe vite can help here.)
- This would keep the original text inline instead of having your text in different locations.
- We could feed the auto-generated language files to some AI translation tool for automatic site translations.
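To make the idea concrete, here's a rough sketch of what an auto-generated key-value language file and a matching `$t` lookup could look like (the shape and the French strings are illustrative assumptions, not a spec):
```js
// Hypothetical compiler output for a French locale: keys are the source
// strings the compiler saw behind $t, values are their translations.
const fr = {
  "Hello world!": "Bonjour le monde !",
  "Hello world": "Bonjour le monde",
};

// A runtime lookup could fall back to the original source text,
// so untranslated strings still render.
const t = (key) => fr[key] ?? key;

console.log(t("Hello world!")); // "Bonjour le monde !"
console.log(t("Untranslated")); // "Untranslated"
```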
What do you guys think of this idea?
### Importance
would make my life easier | feature request | low | Major |
2,490,151,337 | storybook | [Documentation]: Link to MDX-Embed in the showcase is broken and taken over by spammy site | ### Describe the problem
The domain for the MDX-Embed storybook showcase must have expired. It should be fixed, considering it's at the top of the showcase and the domain is now very spammy.
Reproduce by clicking the link in the storybook showcase: https://storybook.js.org/showcase/mdx-embed
The correct storybook is now hosted here: https://mdx-embed.netlify.app/?path=/docs/introduction--page
The page I was taken to looks like this:
<img width="1473" alt="Screenshot 2024-08-27 at 12 16 26 PM" src="https://github.com/user-attachments/assets/791447d1-d169-40c1-a867-c9093ab88603">
### Additional context
_No response_ | documentation,needs triage | low | Critical |
2,490,160,636 | godot | Scene instances with exported array would access same array ONLY if array was modified in editor beforehand | ### Tested versions
Reproduced in 4.0 stable, 4.3 Stable and 4.4 dev1
### System information
Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 23.08 (Flatpak runtime) - X11 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3070 Ti (nvidia) - AMD Ryzen 5 5600X 6-Core Processor (12 Threads)
### Issue description
When instancing a scene multiple times, if the instanced scene has an exported array,
modifying said array in the scene causes every instance of the scene to share the same array;
it doesn't happen if the exported array was never modified, or was modified again on the instance in the editor.
Edit: Just realized I wasn't clear enough: I'm talking about modifying it IN THE INSPECTOR, i.e. the part that shows the export var.
### Steps to reproduce
1. Make a scene with an exported array.
2. Append something to the array, then print it.
3. Instance the scene multiple times into another scene, then run the big scene and check whether it prints the same thing for every instance.
4. If so, go back to the scene and add an element to the array in the inspector.
5. Run the big scene again; the array now gets bigger, as every instance appends elements to the same array.
6. Furthermore, modifying the array on one of the instances in the big scene makes that instance isolated and no longer affected by the problem.
### Minimal reproduction project (MRP)
[ExportedArrayAppendBug.zip](https://github.com/user-attachments/files/16767838/ExportedArrayAppendBug.zip)
| bug,topic:core,topic:gdscript,needs testing | low | Critical |
2,490,163,683 | storybook | [Bug]: Docs pages failing to load due to an intermittent Vite error? | ### Describe the bug
My docs pages (.mdx files) fail to load, apparently because something is referencing a chunk that doesn't exist on disk.
The error I see is that the docs file failed to load, with a message of
```
Failed to fetch dynamically imported module: http://localhost:6006/src/stories/Overview.mdx
```
<img width="988" alt="Screenshot 2024-08-27 at 12 11 08 PM" src="https://github.com/user-attachments/assets/9c8dea71-973d-444b-8881-524094c61b05">
And on the console there are errors from Vite:
```
Sourcemap for "/virtual:/@storybook/builder-vite/setup-addons.js" points to missing source files
Sourcemap for "/virtual:/@storybook/builder-vite/vite-app.js" points to missing source files
12:08:58 PM [vite] ✨ new dependencies optimized: @storybook/web-components
12:08:58 PM [vite] ✨ optimized dependencies changed. reloading
12:08:59 PM [vite] ✨ new dependencies optimized: @storybook/blocks
12:08:59 PM [vite] ✨ optimized dependencies changed. reloading
The file does not exist at "/Users/justin/Projects/Lit/inspector-elements/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-CQA3IIBG.js?v=13ed6ef4" which is in the optimize deps directory. The dependency might be incompatible with the dep optimizer. Try adding it to `optimizeDeps.exclude`.
```
```
$ ls -al node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps
```
shows me that the file is indeed not there:
```
...
-rw-r--r-- 1 justin staff 1512 Aug 27 12:08 chunk-BUMXE3Y6.js
-rw-r--r-- 1 justin staff 1918 Aug 27 12:08 chunk-BUMXE3Y6.js.map
-rw-r--r-- 1 justin staff 1100 Aug 27 12:08 chunk-CHZ4HPWR.js
-rw-r--r-- 1 justin staff 1798 Aug 27 12:08 chunk-CHZ4HPWR.js.map
-rw-r--r-- 1 justin staff 433 Aug 27 12:08 chunk-CN4Y6LVA.js
-rw-r--r-- 1 justin staff 1023 Aug 27 12:08 chunk-CN4Y6LVA.js.map
-rw-r--r-- 1 justin staff 622 Aug 27 12:08 chunk-DLDDP7T2.js
-rw-r--r-- 1 justin staff 1589 Aug 27 12:08 chunk-DLDDP7T2.js.map
...
```
If I try to clean everything and start again, I'll get the same error with the exact same chunk name:
```
git clean -xffd && npm ci && npm run storybook --watch
```
This setup _was_ working for me, so I've tried to bisect the issue, which only made me more confused. There is no particular commit at which this error appears. Sometimes I rewind history and the pages start working, and then fail as I progress forward through history, but at different commits. Occasionally, head will work, though it seems to be consistently broken as of now.
I _tried_ to make a reproduction at https://stackblitz.com/github/elematic/inspector-elements/tree/31d3aa303e89a2f757cb5165d0c3d616199cc4e2?file=README.md but that seems to *work*.
I then tried to reproduce the Stackblitz environment as closely as possible locally, running Node v18.20.3 (and cleaning my local repo again) and it's still broken.
I'm beginning to think that there's some other cache in use that's not local to my project folder. Otherwise, I'm not sure how this could break intermittently, but stay broken once it breaks - sometimes fixing itself if I check out different commits. This could explain why the project works on Stackblitz, since it creates a fresh environment each time it loads.
### Reproduction link
https://stackblitz.com/github/elematic/inspector-elements/tree/31d3aa303e89a2f757cb5165d0c3d616199cc4e2?file=README.md
### Reproduction steps
1. Go to the link above
2. Wait for Stackblitz's setup
3. Run `npm run storybook --watch`
4. Unfortunately for a reproduction, the Overview page will probably load fine.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.5
CPU: (10) arm64 Apple M1 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.3.0 - ~/.nvm/versions/node/v22.3.0/bin/node
Yarn: 1.22.22 - ~/.nvm/versions/node/v22.3.0/bin/yarn
npm: 10.8.1 - ~/.nvm/versions/node/v22.3.0/bin/npm <----- active
Browsers:
Chrome: 128.0.6613.85
Safari: 17.5
npmPackages:
@storybook/addon-essentials: ^8.2.9 => 8.2.9
@storybook/addon-links: ^8.2.9 => 8.2.9
@storybook/blocks: ^8.2.9 => 8.2.9
@storybook/test: ^8.2.9 => 8.2.9
@storybook/web-components: ^8.2.9 => 8.2.9
@storybook/web-components-vite: ^8.2.9 => 8.2.9
eslint-plugin-storybook: ^0.8.0 => 0.8.0
storybook: ^8.2.9 => 8.2.9
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,490,184,385 | create-react-app | Create react app | If you have a general question about Create React App or about building an app with Create React App we encourage you to post in GitHub Discussions instead of this issue tracker. The maintainers and other community members can provide help and answer your questions there: https://github.com/facebook/create-react-app/discussions
If you're looking for general information on using React, the React docs have a list of resources: https://reactjs.org/community/support.html
If you've discovered a bug or would like to propose a change please use one of the other issue templates.
Thanks!
| needs triage | low | Critical |
2,490,184,693 | pytorch | VMAP over GRU: Batching rule not implemented for aten::gru.input | Hello everyone,
I need to implement a VMAP over a complex function that at some point calls a standard [torch.GRU](https://pytorch.org/docs/stable/generated/torch.nn.GRU.html).
However, when the VMAP function is called, a "RuntimeError: Batching rule not implemented" error is raised (associated with the internal _VF.gru call), as shown in the snippet below. The error seems very similar to the one raised and solved in [functorch issue 1089](https://github.com/pytorch/functorch/issues/1089), but for some reason the fix is not working for me.
Can you help me understand if I'm doing something wrong or if there is indeed still something that needs to be fixed?
This is critical for a project under development, I would really appreciate your help.
Thank you in advance.
Bernardo
----
**VERSIONS:**
python --version => Python 3.10.12
torch.__version__ => 2.4.0+cu121
**CODE TO REPRODUCE**
```
import torch
# Set dimensions
input_size = 10
hidden_size = 2
num_layers=1
sequence_length = 5
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Set GRU
rnn = torch.nn.GRU(input_size, hidden_size, num_layers)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Set unbatched input
input = torch.randn(sequence_length, input_size)
h0 = torch.zeros(num_layers, hidden_size)
# Call GRU
output, _ = rnn(input, h0)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Set batched input for VMAP (along first dimension)
batched_input = input.unsqueeze(0).repeat(3, 1, 1)
# Set GRU function
def my_function(input):
    h0 = torch.zeros(num_layers, hidden_size)
    output, _ = rnn(input, h0)
    return output
# Set VMAP GRU
vmap_gru = torch.vmap(my_function)
# Call VMAP GRU
vmap_output = vmap_gru(batched_input)
```
**ERROR:**
```
RuntimeError Traceback (most recent call last)
[<ipython-input-9-dfebfb7b7a06>](https://localhost:8080/#) in <cell line: 26>()
24 vmap_gru = torch.vmap(my_function)
25 # Call VMAP GRU
---> 26 vmap_output = vmap_gru(batched_input)
7 frames
[/usr/local/lib/python3.10/dist-packages/torch/_functorch/apis.py](https://localhost:8080/#) in wrapped(*args, **kwargs)
199
200 def wrapped(*args, **kwargs):
--> 201 return vmap_impl(
202 func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs
203 )
[/usr/local/lib/python3.10/dist-packages/torch/_functorch/vmap.py](https://localhost:8080/#) in vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
329
330 # If chunk_size is not specified.
--> 331 return _flat_vmap(
332 func,
333 batch_size,
[/usr/local/lib/python3.10/dist-packages/torch/_functorch/vmap.py](https://localhost:8080/#) in fn(*args, **kwargs)
46 def fn(*args, **kwargs):
47 with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 48 return f(*args, **kwargs)
49
50 return fn
[/usr/local/lib/python3.10/dist-packages/torch/_functorch/vmap.py](https://localhost:8080/#) in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs)
478 flat_in_dims, flat_args, vmap_level, args_spec
479 )
--> 480 batched_outputs = func(*batched_inputs, **kwargs)
481 return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
482
[<ipython-input-9-dfebfb7b7a06>](https://localhost:8080/#) in my_function(input)
19 def my_function(input):
20 h0 = torch.zeros(num_layers, hidden_size)
---> 21 output, _ = rnn(input, h0)
22 return output
23 # Set VMAP GRU
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
1554
1555 def _call_impl(self, *args, **kwargs):
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1563
1564 try:
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/rnn.py](https://localhost:8080/#) in forward(self, input, hx)
1137 self.check_forward_args(input, hx, batch_sizes)
1138 if batch_sizes is None:
-> 1139 result = _VF.gru(input, hx, self._flat_weights, self.bias, self.num_layers,
1140 self.dropout, self.training, self.bidirectional, self.batch_first)
1141 else:
RuntimeError: Batching rule not implemented for aten::gru.input. We could not generate a fallback.
```
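As a possible interim workaround (a sketch, not an official fix - it assumes the vmapped function maps directly onto the GRU's batch semantics), the vmap batch dimension can be folded into the GRU's own batch dimension, which `torch.nn.GRU` supports natively:

```python
import torch

input_size, hidden_size, num_layers, sequence_length, batch = 10, 2, 1, 5, 3
rnn = torch.nn.GRU(input_size, hidden_size, num_layers)

# vmap-style batched input: (batch, seq, input_size)
batched_input = torch.randn(batch, sequence_length, input_size)

# Fold the vmap dimension into the GRU's batch dimension.
# With batch_first=False (the default), GRU expects (seq, batch, input_size).
gru_input = batched_input.permute(1, 0, 2)
h0 = torch.zeros(num_layers, batch, hidden_size)
output, _ = rnn(gru_input, h0)              # (seq, batch, hidden_size)
vmap_like_output = output.permute(1, 0, 2)  # back to (batch, seq, hidden_size)
```

This sidesteps `torch.vmap` entirely, so it only applies when the surrounding function can be expressed in terms of the GRU's native batching.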
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0+cu121
[conda] Could not collect
cc @zou3519 @Chillee @samdow @kshitij12345 | triaged,module: vmap,module: functorch | low | Critical |
2,490,192,438 | pytorch | torch.angle cuda version not implemented for bfloat16 and half (float16) | ### 🐛 Describe the bug
Calling torch.angle with a tensor of dtype bfloat16 or half (float16) produces errors like this:
```Shell
Traceback (most recent call last):
File "/tmp/bug_angle_f16_bf16.py", line 12, in <module>
out4 = torch.angle(input_2.cuda())
RuntimeError: "angle_cuda" not implemented for 'Half'
```
Code: [gist](https://gist.github.com/jiren-the-gray/5b17ef1675ba26edc31eedd72cf2adab) [colab](https://colab.research.google.com/drive/10DWHz3XdyvVfqAlfm7HqwaTlbjOPn4S8?usp=sharing)
I also saw a [discussion](https://discuss.pytorch.org/t/current-cuda-device-does-not-support-bfloat16-please-switch-dtype-to-float16/201564) indicating that bfloat16 is supported on Ampere or newer. But I ran it on an RTX 4090. Here's the output of my nvidia-smi:
```Shell
Tue Aug 27 15:40:58 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02 Driver Version: 555.42.02 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 Off | Off |
| 0% 42C P8 28W / 450W | 146MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1884 G /usr/lib/xorg/Xorg 116MiB |
| 0 N/A N/A 1958 G /usr/bin/gnome-shell 13MiB |
+-----------------------------------------------------------------------------------------+
```
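Until the half/bfloat16 kernels are implemented, a possible workaround (my own sketch, not an official API) is to upcast to float32, where `torch.angle` has kernels on both CPU and CUDA, and cast the result back:

```python
import torch

def angle_compat(x: torch.Tensor) -> torch.Tensor:
    # Upcast reduced-precision inputs to float32, where torch.angle is
    # implemented, then cast the result back to the original dtype.
    if x.dtype in (torch.float16, torch.bfloat16):
        return torch.angle(x.float()).to(x.dtype)
    return torch.angle(x)

x = torch.tensor([-1.0, 1.0], dtype=torch.float16)
print(angle_compat(x))  # angle is pi for negative reals, 0 for non-negative
```

Note that this costs an extra copy, and the intermediate computation happens in float32 precision.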
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 3400.000
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim | module: cuda,triaged,enhancement | low | Critical |
2,490,211,256 | TypeScript | autoImportFileExcludePatterns should have more nuance than just true/false | ### 🔍 Search Terms
autoImportFileExcludePatterns
auto import
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
A big point of usefulness for me regarding the `typescript.preferences.autoImportFileExcludePatterns` setting is that I can have a subdirectory with internal implementation details and avoid inadvertently having other parts of the application import those internal details when they contain exports with the same or a similar name to one I'm trying to auto import.
Problems:
- If I break apart my internal implementation into various internal files, the setting also prevents auto importing from working even within code inside the excluded directory. Essentially it means I miss out on the benefit of auto importing when working on the internal implementation itself.
- The externally-facing export that utilises the internal functionality and is essentially the main facade exposing that functionality -- the one that I _want_ the rest of the project to be able to auto import -- cannot live in the same directory as its internals. This is less of an issue than the first point, but it does reduce developer freedom with respect to choices regarding file and folder organisation.
My suggestion is that the current design, which looks like this:
```json
{
"typescript.preferences.autoImportFileExcludePatterns": [
"**/*.internal.ts",
"**/*.internal/**",
"**/internal/**",
],
}
```
Could be improved to support object entries in addition to file path strings. I'd suggest something like this:
```jsonc
{
"typescript.preferences.autoImportFileExcludePatterns": [
"node_modules/**/*",
// Use an object to define an auto import scope
{
// Exclusion patterns that are part of this scope:
"match": [
"**/*.internal.ts",
"**/*.internal/**",
"**/internal/**"
],
// Among exclusion patterns in this scope, the following should be filtered out of the exclusion list:
"exceptions": [
"**/index.ts",
"**/*.public.ts"
],
// If false, the exclusion patterns defined in this scope will not be active for files excluded from auto-import
// because of this scope. In other words, when editing the actual files that this scope excludes from
// being auto imported elsewhere, the excluded files themselves will still be able to auto import each other.
"propagateExclusionScopeInternally": false
},
"other/patterns/**/*",
"etc/*"
]
}
```
### 📃 Motivating Example
Essentially the current design of the `autoImportFileExcludePatterns` feature is too much of a blunt instrument. Exclusions are not just useful for dead code and things that nobody should ever want to accidentally import, but also for preventing auto imports within a project from grabbing internal implementation details from subsystems implemented within that project's codebase. A project might have many internal subsystems, each with its own folder containing internal implementation details. While working on those implementation details, you still want auto imports to work among files in that scope -- it's just that we don't want files _outside_ that scope to be able to auto import them. `autoImportFileExcludePatterns` gives us a half measure in terms of the second part of that equation, but it makes working on the internal implementation details more cumbersome as we have to manually type in any import statements for files within the scope.
### 💻 Use Cases
1. What do you want to use this for?
Editing. This is a developer ergonomics issue.
2. What shortcomings exist with current approaches?
Already covered.
3. What workarounds are you using in the meantime?
Accepting increased editing friction when using `autoImportFileExcludePatterns`. | Suggestion,Awaiting More Feedback,Domain: Auto-import | low | Minor |
2,490,212,790 | vscode | No events fired for a restored terminal | Testing #226655
1. create an extension contributed terminal
2. run commands, see events fired
3. reload the window
4. ✅ it persists 🐛 no events are fired | bug,terminal-shell-integration | low | Minor |
2,490,273,982 | TypeScript | Multi-line top-level `await` causes duplicate declaration error | ### 🔎 Search Terms
typescript top-level await export class "duplicate identifier"
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _await, identifiers, modules_
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.0-dev.20240827#code/AYKAFAtg9gJgrgGwKYgJBgGZwHYGMAEYSAHgA5QBOALvgEQCWAbACwB05pu2VtAlIaQCGFQRHxNm-MBSQBnRDXoBmAEy8Q+TfgRRcghKwDmSGgAYNWieyiduFzcpWsA7iNIB9Cb3WgA9L-wAN3pBfAALKipSWQAuf2ckACNBWVkkCESEAE8jeiowuETWeihfZ0FEql8YdNLyqhVy2QhfEFwobFkaJrEAXkJBcry0AHUkgEFU9Myc+k6qQW4QqiQAZSoZUTnDMDRUDBNcMLBaGEEFmMFSUgR6PSoS7DKUiABuZLSWABpxgHEAMQAXokAIoAUXGkPGACFoYZ-oZxgANZjQjDOACyAGFxiDxgAJAAihjBAGlBAAVACapgAMthoYFcL8AHJwGBQrGGQywxGI-E4LHOcak6G-ACsYUSIwAqjjnBDIb1enwvnsAN4AX28rxAbQ6XXE80WuCQ+H6PWKxrwSF1JHI1Hw7Xm+A47RwNH6cy6JqQrHtlCosgA2gByKxu7ihgC6+BShECMXwiXohjmVH4vQAfPhsHAMkgKLqQAHHbgEClZPhoXlViZ8Or7PgAMSGChQZxgXhakCavX+fDQeDIfA4doQCBIbjuSfuKhQdwYejEBuaoA
### 💻 Code
```ts
`
(module
(func (export "i64.popcnt") (param i64) (result i32)
local.get 0
i64.popcnt
i32.wrap_i64))
`
// via https://webassembly.github.io/wabt/demo/wat2wasm/
const wasm = (await
WebAssembly.instantiateStreaming(
fetch("data:application/wasm;base64,AGFzbQEAAAABBgFgAX4BfwMCAQAHDgEKaTY0LnBvcGNudAAACggBBgAgAHunCwAKBG5hbWUCAwEAAA=="),
{}));
const instance = wasm.instance;
export const popcount = instance.exports['i64.popcnt'] as (v: bigint) => number;
export class BitSet {
#grow(){}
}
// module uncomment_me_to_fix {}
```
### 🙁 Actual behavior
Line 17 (`export class BitSet`) reports an error: Duplicate identifier 'BitSet'.(2300)
As far as I can see there's no duplicate identifier; suppressing that line with a `@ts-expect-error` produces a strange result: the exported BitSet becomes unusable when imported into a different module, whereupon Typescript reports:
> Type 'import("./lib/bitset").BitSet' is not assignable to type 'import("./lib/bitset").BitSet'. Two different types with this name exist, but they are unrelated.
> Property '#grow' in type 'BitSet' refers to a different member that cannot be accessed from within type 'BitSet'.ts(2719)
Uncommenting the empty module definition at the bottom of the file appears to collapse the superimposed wave function of `BitSet`, and renders the type singular again.
### 🙂 Expected behavior
To see neither error, whether or not there's a `module` in the same file.
### Additional information about the issue
Hopefully, it's clear enough from the example what I'm hoping to do here with the top-level await; I deleted all the method bodies for brevity, so you'll have to trust me when I say `popcount` is a very useful primitive to have.
I did spend a fair bit of time looking for suggestions on whether I'm holding the tool wrong with the `await ....; export class ...` sequencing, but as far as I can tell that is how it's intended to be used. The fact that an unrelated (to my eye, anyway) expression later on in the file changes the compiler's report is what finally moved me to file an issue against `tsc` here.
Thanks for all your work on Typescript! | Bug,Help Wanted | low | Critical |
2,490,289,457 | godot | Entering Full Screen in macOS makes editor laggy and buggy | ### Tested versions
- 4.3 stable
### System information
Godot v4.3.stable - macOS 14.6.1 - Vulkan (Forward+) - integrated Apple M2 Pro - Apple M2 Pro (10 Threads)
### Issue description
When using Godot in full screen mode, in many places the editor becomes slow to react.
E.g. when only having a single Collision2D node, it can take 0.5-3s for the shapes in the inspector (e.g. New RectangleShape2D) to appear.
Another issue I noticed is that the hints for the buttons at the top (e.g. "Use Smart Snap (Shift + ...") don't appear at all anymore, no matter how long I wait for them to appear.
Both issues immediately disappear when exiting Full Screen. Since they seem related, I decided to put them into one bug report, even though one behavior at least works with some delay while the other doesn't work at all.
### Steps to reproduce
Open a new project. Click the green circle in the top left to enter Full Screen mode. Notice the issue described above. After exiting Full Screen, the issue is gone.
### Minimal reproduction project (MRP)
Happens in empty new project, too. | bug,platform:macos,topic:editor,performance | low | Critical |
2,490,304,963 | pytorch | torch.func.grad uses more memory than expected | ### 🐛 Describe the bug
Repro:
```python
import torch
N_REPEAT = 10
USE_FUNCTORCH = True
def my_grad(fn):
    def wrapper(x):
        out = fn(x)
        return torch.autograd.grad(out, inputs=(x,))
    return wrapper

if USE_FUNCTORCH:
    my_grad = torch.func.grad
def h(x):
    for i in range(N_REPEAT):
        x = x.sin().cos()
    return x.sum()
def get_peak_memory(fn):
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()
    fn()
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated()
torch.cuda.memory._record_memory_history(max_entries=100000)
a = torch.rand(1024, 1024, requires_grad=(not USE_FUNCTORCH), device="cuda")
mem = get_peak_memory(lambda: my_grad(h)(a))
torch.cuda.memory._dump_snapshot("./memory_snapshot.pickle")
```
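For reference, the `retain_graph=True` variant (the third snapshot below) is the same manual wrapper with only the flag passed to `torch.autograd.grad` changed:

```python
import torch

def my_grad_retain(fn):
    def wrapper(x):
        out = fn(x)
        # Same wrapper as above, except the graph is kept alive after the
        # backward pass instead of being freed.
        return torch.autograd.grad(out, inputs=(x,), retain_graph=True)
    return wrapper

x = torch.ones(4, requires_grad=True)
(grad,) = my_grad_retain(lambda t: t.sin().sum())(x)
```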
With functorch:
<img width="1572" alt="image" src="https://github.com/user-attachments/assets/c3eeeffd-07c1-4a89-9fb1-d9e50b1d18da">
Without:
<img width="1596" alt="image" src="https://github.com/user-attachments/assets/efb95997-327b-4d59-acf8-924b8411d659">
without (and with retain_graph=True):
<img width="1581" alt="image" src="https://github.com/user-attachments/assets/8fe8a7bc-3ca4-4083-b639-60a62dc62333">
### Versions
main
cc @ezyang @albanD @gqchen @pearu @nikitaved @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345 | module: autograd,module: memory usage,triaged,module: functorch | low | Critical |
2,490,313,159 | flutter | [google_sign_in] Switch Android to Credential Manager | ### Use case
[Google Sign-In for Android is deprecated and will be removed from the Google Play Services Auth SDK. (com.google.android.gms:play-services-auth) in 2025](https://developer.android.com/identity/sign-in/legacy-gsi-migration)
### Proposal
There's already a community [package](https://pub.dev/packages/credential_manager), but having an official one would be better, since it's such a core concern.
2,490,320,021 | flutter | [tool] WebSocketException: WebSocketException: Connection to 'http://localhost:51100/devtools/page/09767C13106A4241DDBA6CD3DBF5ABE5#' was not upgraded to websocket | On 3.24.1: reported 25 times by 18 clients
From `flutter run -d chrome --machine`
```
WebSocketException: WebSocketException: Connection to 'http://localhost:51100/devtools/page/09767C13106A4241DDBA6CD3DBF5ABE5#' was not upgraded to websocket
at _WebSocketImpl.connect(websocket_impl.dart:1011)
at WebSocket.connect(websocket.dart:320)
at WipConnection.connect(webkit_inspection_protocol.dart:231)
at ChromeTab.connect(webkit_inspection_protocol.dart:184)
at Chromium.close(chrome.dart:519)
at <asynchronous gap>(async)
at ChromiumDevice.stopApp(web_device.dart:164)
at <asynchronous gap>(async)
at ResidentWebRunner._cleanup(resident_web_runner.dart:203)
at <asynchronous gap>(async)
at ResidentWebRunner.cleanupAtFinish(resident_web_runner.dart:191)
at <asynchronous gap>(async)
at ResidentRunner.waitForAppToFinish(resident_runner.dart:1482)
at <asynchronous gap>(async)
at RunCommand.runCommand(run.dart:788)
at <asynchronous gap>(async)
at FlutterCommand.run.<anonymous closure>(flutter_command.dart:1408)
at <asynchronous gap>(async)
at AppContext.run.<anonymous closure>(context.dart:153)
at <asynchronous gap>(async)
at CommandRunner.runCommand(command_runner.dart:212)
at <asynchronous gap>(async)
at FlutterCommandRunner.runCommand.<anonymous closure>(flutter_command_runner.dart:420)
at <asynchronous gap>(async)
at AppContext.run.<anonymous closure>(context.dart:153)
at <asynchronous gap>(async)
at FlutterCommandRunner.runCommand(flutter_command_runner.dart:364)
at <asynchronous gap>(async)
at run.<anonymous closure>.<anonymous closure>(runner.dart:130)
at <asynchronous gap>(async)
at AppContext.run.<anonymous closure>(context.dart:153)
at <asynchronous gap>(async)
at main(executable.dart:93)
at <asynchronous gap>(async)
``` | c: crash,P2,team-tool,triaged-tool | low | Minor |
2,490,325,717 | flutter | [shared_preferences] Access to FlutterSharedPreferences data store instance | ### What package does this bug report belong to?
shared_preferences
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
_No response_
### Steps to reproduce
Create a `DataStore` instance pointing to the same file as the SharedPreferences plugin:
```kotlin
val Context.dataStore: DataStore<Preferences> by preferencesDataStore(name = "FlutterSharedPreferences")
```
### Expected results
Accessing data from `FlutterSharedPreferences` in native code would be beneficial. One use case is the native implementation of App Widgets, where a common approach is to communicate via DataStore/SharedPreferences.
Changing the visibility of [`Context.sharedPreferencesDataStore`](https://github.com/flutter/packages/blob/bcb09dbc121e5f886a37cf2808c0d95112276fcd/packages/shared_preferences/shared_preferences_android/android/src/main/kotlin/io/flutter/plugins/sharedpreferences/SharedPreferencesPlugin.kt#L34) to public could solve the problem, but I understand this might not be the ideal solution.
Are there any other options available?
### Actual results
According to the [documentation](https://developer.android.com/topic/libraries/architecture/datastore#correct_usage), we should not create multiple instances for the same file, as this can lead to the following error:
```
There are multiple DataStores active for the same file: /data/user/0/com.example/files/datastore/FlutterSharedPreferences.preferences_pb. You should either maintain your DataStore as a singleton or confirm that there is no two DataStore's active on the same file (by confirming that the scope is cancelled).
```
Currently, there is no way to access a singleton instance of the DataStore.
### Code sample
<details open><summary>Code sample</summary>
The native app widget example is provided to illustrate a potential use case, which, of course, leads to an error due to the use of the same file instance.
```kotlin
package app.getwatermelon.mobile

import android.content.Context
import androidx.compose.runtime.Composable
import androidx.datastore.core.DataStore
import androidx.datastore.preferences.core.Preferences
import androidx.datastore.preferences.core.longPreferencesKey
import androidx.datastore.preferences.preferencesDataStore
import androidx.glance.GlanceId
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.runBlocking

val Context.dataStore: DataStore<Preferences> by preferencesDataStore(name = "FlutterSharedPreferences")

class ExampleAppWidget : GlanceAppWidget() {
    // Key under which the value is written on the Flutter side.
    val valueKey = longPreferencesKey("value_key")

    override suspend fun provideGlance(context: Context, id: GlanceId) {
        val preferences = runBlocking { context.dataStore.data.first() }
        val storedValue = preferences[valueKey] ?: 0

        provideContent {
            WidgetContent()
        }
    }

    @Composable
    private fun WidgetContent() {
        Text("Hello world")
    }
}
```
</details>
### Screenshots or Videos
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.5 23F79 darwin-arm64, locale en-PL)
• Flutter version 3.24.1 on channel stable at /Users/lawinski/Develop/sdk/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (7 days ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/lawinski/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] Android Studio (version 2024.1)
• Android Studio at /Users/lawinski/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Community Edition (version 2023.1)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
```
</details>
| platform-android,p: shared_preferences,package,c: proposal,P3,team-android,triaged-android | low | Critical |
2,490,325,904 | vscode | Order of Compare Selected files by mouse |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.93.0 (Universal)
- OS Version: macOS Version: 12.7.6 (21H1320)
Steps to Reproduce:
When we select two files in any of these three places
and run the Compare Selected command to compare them,
which file appears on the left of the diff window? Is it random?
1. Folders in the Explorer view:
The first-selected file appears on the left in the diff window,
no matter which of the two files the Compare Selected command is executed on.
This has been the behavior for a long time.
2. Open Editors in the Explorer view:
The file that appears on the left of the diff window is the one
on which the Compare Selected command is executed.
3. Tabs:
Same as item 2 (Open Editors).
| file-explorer,polish,open-editors | low | Critical |
2,490,354,631 | vscode | calling `terminal.sendText` in `onDidEndTerminalShellExecution` causes terminal to close, process to exit with code 127 | Testing #226655
1. use the extension sample for `terminal-shell-integration`
2. add the following:
```
vscode.window.onDidEndTerminalShellExecution(e => e.terminal.sendText('bye'));
```
3. create a terminal
4. run a command
5. 🐛 see the terminal close
Trace logs:
[logs.txt](https://github.com/user-attachments/files/16769595/logs.txt)
| bug,freeze-slow-crash-leak,terminal-process | low | Minor |
2,490,378,992 | ollama | Prebuilt `ollama-linux-amd64.tgz` without cuda libs, please? | I occasionally update ollama on a linux box by downloading URLs like `https://github.com/ollama/ollama/releases/download/v0.3.7-rc6/ollama-linux-amd64.tgz` and extracting/overwriting files into a local directory (not into `/usr` as a root mind you, just into a local directory as a non-privileged user; that is how I prefer to use it).
I have necessary cuda libs installed in the system.
I don't care to use the libs distributed with ollama to begin with (and if `bin/ollama` defaults to searching for libs in `../lib` first, I don't love that, but that's fine).
But I certainly don't care to download the same 1GB of libs every time I update.
(I wonder how many users are like me).
**I can haz a version of `linux-amd64` without cuda libs included in https://github.com/ollama/ollama/releases prebuilt assets?**
...or should I instead just `git pull` and build the binary from source whenever I want to update (which would be fine with me)? What would you guys recommend? | feature request,linux | low | Major |
2,490,395,935 | vscode | `onDidChangeTerminalShellIntegration` fired twice on terminal creation without `reason` property | Testing #226655
Set a breakpoint in the `terminal-shell-integration` sample here:
```
vscode.window.onDidChangeTerminalShellIntegration(e => {
// breakpoint
});
```
See that it's hit twice. I'd expect it to only be hit once. I read the documentation, which indicates this is fired when any property changes. I wonder if we should have a `reason` on the event to make this clear?
https://github.com/user-attachments/assets/a171f863-bd54-4b5f-9bcb-ebb30b57a07a
| bug,api,terminal-shell-integration | low | Minor |
2,490,419,933 | ant-design | Can we globally set a custom Link component? Otherwise the `a` tags used inside many components are all the defaults. | ### What problem does this feature solve?
Take `nextjs` and its `next/link` component as an example: **suppose a higher-order component has been built on top of `next/link` that adds some logic**.
In that case, antd components such as `Anchor`, `Typography.Link`, `Breadcrumb`, and `Pagination` (those should be the four that use links) can no longer use this higher-order link component.
Since my higher-order component still needs antd's link styles and behavior, I have to nest things as in the pseudocode below, and that is before the component adds any logic of its own.
```
import Link from 'next/link';
import { Typography } from 'antd';

const MyLink = ({ href, children }) => (
  <Link href={href} passHref legacyBehavior>
    <Typography.Link>{children}</Typography.Link>
  </Link>
);
```
What I'm hoping for is the ability to register a custom component through `ConfigProvider`, and it would be even better if I could also specify whether it inherits antd's link styles and behavior.
### What does the proposed API look like?
For example:
```
<ConfigProvider
  typography={{
    Link: {
      inherit: true, // whether to inherit Typography.Link's styles and behavior, or whichever styles apply in context; e.g. the `a` inside the Menu component seems to be a plain HTML `a` tag (I haven't verified), in which case it should inherit Menu's link styles
      render: (children) => <MyLink>{children}</MyLink>, // render the custom Link component
    }
  }}>
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🗣 Discussion,💡 Feature Request,Inactive | low | Minor |
2,490,430,700 | pytorch | Could not guard on data-dependent expression | @anijain2305 I tried this call (see complete relevant code below)
```
pre_autograd_aten_dialect = torch.export.export(model, args=(x, x_dict, device), strict=True)
```
removing the `device` parameter as I do not use it (one of the changes @angelayi had me make a few months ago).
Execution failed again. The error is very different (at least to my uneducated eye).
You can find the relevant portion of the code, and the complete error log below.
If you have time, please take a look and let me know what you think.
Thanks
CODE
=====
```
x, y, x_dict = send_to_device(input_data, device, config)
pytree.register_pytree_node(edict,
flatten_fn=_dict_flatten,
unflatten_fn=_dict_unflatten,
serialized_type_name="EasyDict",
flatten_with_keys_fn=_dict_flatten_with_keys
)
pre_autograd_aten_dialect = torch.export.export(model, args=(x, x_dict), strict=False)
aten_dialect: ExportedProgram = export(pre_autograd_aten_dialect, (x, x_dict), strict=False)
edge_program: EdgeProgramManager = to_edge(aten_dialect)
to_be_lowered_module = edge_program.exported_program()
from executorch.exir.backend.backend_api import LoweredBackendModule, to_backend
lowered_module = edge_program.to_backend(XnnpackPartitioner())
print(" - train_minimum - Lowering the Whole Module - lowered_module - ", lowered_module)
save_path = "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/LocationPrediction/loweredModels/tpt_delegate.pte"
with open(save_path, "wb") as f:
f.write(lowered_module.to_executorch().buffer)
```
ERROR LOG
=========
```
I0827 16:24:51.507875 140126416844096 torch/fx/experimental/symbolic_shapes.py:3317] create_unbacked_symbool u0 [0, 1] (_subclasses/fake_impls.py:381 in local_scalar_dense)
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] failed while running evaluate_expr(*(Eq(u0, 1), None), **{'fx_node': None})
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] Traceback (most recent call last):
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 245, in wrapper
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] return fn(*args, **kwargs)
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5205, in evaluate_expr
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] raise self._make_data_dependent_error(
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281]
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] Potential framework code culprit (scroll up for full backtrace):
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 971, in _detect_is_causal_mask
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] make_causal = bool((mask == causal_comparison).all())
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281]
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] For more information, run with TORCH_LOGS="dynamic"
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281]
E0827 16:24:51.554366 140126416844096 torch/fx/experimental/recording.py:281] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
Traceback (most recent call last):
File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/LocationPredictionContextQ/main.py", line 68, in <module>
res_single = single_run(train_loader, val_loader, test_loader, config, device, log_dir)
File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/LocationPredictionContextQ/main.py", line 23, in single_run
model, perf = get_trainedNets(config, model, train_loader, val_loader, device, log_dir)
File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/LocationPredictionContextQ/utils/utils.py", line 47, in get_trainedNets
best_model, performance = trainNet(config, model, train_loader, val_loader, device, log_dir=log_dir)
File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/LocationPredictionContextQ/utils/train.py", line 418, in trainNet
pre_autograd_aten_dialect = torch.export.export(model, args=(x, x_dict), strict=False)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/__init__.py", line 174, in export
return _export(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 945, in wrapper
raise e
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 928, in wrapper
ep = fn(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/exported_program.py", line 89, in wrapper
return fn(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 1455, in _export
aten_export_artifact = export_func(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 1317, in _non_strict_export
aten_export_artifact = _export_to_aten_ir(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 583, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 1268, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1350, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 562, in create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 163, in inner
flat_f_outs = f(*flat_f_args)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 178, in flat_fn
tree_out = fn(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 748, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/export/_trace.py", line 1255, in forward
tree_out = self._export_root(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/LocationPredictionContextQ/models/MHSA.py", line 48, in forward
out = self.encoder(emb, mask=src_mask, src_key_padding_mask=src_padding_mask)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 413, in forward
is_causal = _detect_is_causal_mask(mask, is_causal, seq_len)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 971, in _detect_is_causal_mask
make_causal = bool((mask == causal_comparison).all())
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 414, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 245, in wrapper
return fn(*args, **kwargs)
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5205, in evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
Potential framework code culprit (scroll up for full backtrace):
File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 971, in _detect_is_causal_mask
make_causal = bool((mask == causal_comparison).all())
```
_Originally posted by @adonnini in https://github.com/pytorch/pytorch/issues/120219#issuecomment-2313454912_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @bhosmer @cpuhrsch @erichan1 @drisspg | module: nn,triaged,oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,490,435,751 | rust | Method probe should consider where clauses on method | Given:
```rust
use core::ops::Deref;
struct W<T>(T);
struct X;
impl<T> Deref for W<T> {
type Target = X;
fn deref(&self) -> &Self::Target { &X }
}
trait A {}
trait B {}
impl<T: A> W<T> {
fn a(&self) {} // EXAMPLE A
}
impl<T> W<T> {
fn b(&self) where T: B {} // EXAMPLE B
}
impl X {
fn a(&self) {}
fn b(&self) {}
}
fn main() {
let w = W(());
w.a(); // Works.
w.b(); // Doesn't work.
}
```
I expected this code to work. Whether I place the where clause on the impl block (like in example A) or on the method (like in example B) should not matter.
Method probing should assemble the method's "own" where clauses so we can use them. This would've prevented the regression in https://github.com/rust-lang/rust/issues/129601, since #129449 rearranged some where clause bounds for readability.
---
Let's not actually fix this until the new solver has landed, since it's likely to cause spurious new overflows in practice, which are fatal. In the new solver, it should be fine 👍
We could *technically* support this in the old solver, if we were to filter out any predicates that mention the method's generics. But this seems to be a hack that I'd need convincing is worthwhile rather than waiting to do it the "right" way...
There's also theoretically more places for incompleteness to guide inference on the args, but I expect that to not be an issue TBH, since we already (afaict) process obligations before/while doing argument checking. | C-enhancement,T-types | low | Minor |
2,490,462,969 | deno | Horrible performance with some npm packages (but fine on other runtimes)? | I'm trying to use unified/remark/rehype to render markdown and the performance on Deno specifically is horrible.
As demonstrated below, it's 17.5s of startup time on Deno, while Bun only takes 1.2s and Node (not using TS) 2.3s.
The 8x-17x time factor between deno and other runtimes seems really abnormal.
```diff
vscode ➜ /workspaces/libs/markdown $ time bun test.ts
ok
+ real 0m1.210s
user 0m0.119s
sys 0m0.172s
vscode ➜ /workspaces/libs/markdown $ time deno test.ts
ok
- real 0m17.546s
user 0m0.126s
sys 0m0.646s
vscode ➜ /workspaces/libs/markdown $ time node --experimental-modules test.mjs
ok
! real 0m2.342s
user 0m0.257s
sys 0m0.124s
```
Note that running the same code through https://esm.sh imports yields extremely good results (even when deps are not cached):
```diff
vscode ➜ /workspaces/libs/markdown $ time deno run esm_test.ts
ok
+ real 0m1.326s
user 0m0.138s
sys 0m0.016s
vscode ➜ /workspaces/libs/markdown $ time deno run esm_test.ts
ok
+ real 0m0.057s
user 0m0.038s
sys 0m0.029s
```
Version:
```
vscode ➜ /workspaces/libs/markdown $ deno --version
deno 1.46.1 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.2-rusty
typescript 5.5.2
```
Reproduction:
<details>
<summary><code>test.ts</code></summary>
```ts
import { unified, type Processor as _Processor } from "unified"
import remarkRehype from "remark-rehype"
import remarkParse from "remark-parse"
import rehypeRaw from "rehype-raw"
import rehypeStringify from "rehype-stringify"
// Prevent tree-shaking
if ((unified && remarkRehype && remarkParse && rehypeRaw && rehypeStringify)) {
console.log("ok")
}
```
</details>
<details>
<summary><code>deno.jsonc</code></summary>
```json
{
"imports": {
"unified":"npm:unified@11",
"rehype-raw":"npm:rehype-raw@7",
"rehype-stringify":"npm:rehype-stringify@10",
"remark-parse":"npm:remark-parse@11",
"remark-rehype":"npm:remark-rehype@11",
},
}
```
</details>
| perf,windows | low | Major |
2,490,462,969 | vscode | Should we add a docs link to the `.github/dependabot.yml`? | Testing #226686
I wonder if it'd be helpful / logical to add an (i) docs link here:

Like what we do for Templates and Features:

| feature-request,containers | low | Minor |
2,490,518,352 | deno | Cleanup after DENO_FUTURE is enabled by default | After this PR lands: https://github.com/denoland/deno/pull/25213 we need to do following cleanup
- [x] make stabilized APIs handled in `99_main.js` by default; currently it's FFI, WebGPU and FS APIs (see `unstable_features()` in `cli/args/mod.rs`); make sure their type declarations are in the stable namespace (@bartlomieju)
- [x] remove deprecated APIs one-by-one with their associated ops and type declarations (@iuioiua) #22079
- [x] remove ignored `unstable_` tests in `tests/integration/run_tests.rs` with associated files (@bartlomieju)
- [x] remove `tests/integration/js_unit_tests_future.rs` (@bartlomieju)
- [x] remove `if DENO_FUTURE` conditionals one-by-one (@bartlomieju)
- [x] reenable lockfile tests (@dsherret)
- `specs::lockfile::only_package_json`
- `specs::lockfile::frozen_lockfile::non_analyzable_dynamic_jsr`
- `specs::lockfile::frozen_lockfile::non_analyzable_dynamic_http`
- `specs::lockfile::frozen_lockfile::error_with_new_jsr_dep`
- `specs::lockfile::frozen_lockfile::error_with_new_npm_dep`
- `specs::lockfile::frozen_lockfile::errors_if_creates_lockfile`
- `specs::lockfile::frozen_lockfile::non_analyzable_dynamic_npm`
- `specs::lockfile::frozen_lockfile::lockfile_config`
- [x] decide if these tests should be updated or removed
- [x] `specs::cache::package_json` (@satyarohith)
- [x] `specs::run::no_deno_json::auto_discovered` (@satyarohith)
- [x] `specs::run::no_deno_json::auto_discovered_arg` (@satyarohith)
- [x] `specs::run::package_json::invalid_value` (@satyarohith)
- [x] `specs::run::workspaces::explicit_import_map`
- [x] `task::task_package_json_node_modules_dir_false` (@satyarohith)
- [x] tests that need to be rewritten to spec tests - also duplicate them to test old and new behavior
- [x] `specs::publish::npm_workspace_jsr_pkg_with_npm_dep::bare_specifier`
- [x] `specs::publish::npm_workspace_jsr_pkg_with_npm_dep::dep_and_workspace_dep`
- [x] `specs::npm::workspace_basic::no_exports_sub_path_not_exists`
- [x] `specs::npm::workspace_basic::exports_sub_path_not_exists`
- [x] `specs::npm::workspace_sub_deno_json::member_with_deno_json`
- [x] `specs::npm::workspace_sub_deno_json::member`
- [x] `cache::lock_deno_json_package_json_deps`
- [x] `check::package_json_basic`
- [x] `check::package_json_fail_check`
- [x] `check::package_json_with_deno_json`
- [x] `info::package_json_basic`
- [x] `test::package_json_basic`
- [x] `run::package_json_auto_discovered_for_npm_binary`
- [x] `run::package_json_with_deno_json`
- [x] `task::task_both_package_json_selected`
- [x] `npm::node_modules_import_run` (@satyarohith)
- [x] `npm::node_modules_import_check` (@satyarohith)
- [ ] `npm::reload_info_not_found_cache_but_exists_remote`
- [x] `npm::local_dir_resolves_symlinks` (@satyarohith)
- [ ] possibly actual bugs:
- [ ] `compile::compile_npm_specifiers`
- [ ] `task::task_package_json_npm_bin`
- [ ] `task::task_npx_on_own` (hangs)
- [x] remove import assertion support for TS files and remove `specs::run::ts_import_assertions`
- [x] LSP tests that need to be fixed
- <s>`lsp::lsp_node_modules_dir`</s>
- <s>`lsp::lsp_npm_workspace`</s>
- [x] WPT
- `"/html/semantics/scripting-1/the-script-element/module/dynamic-import/microtasks/css-import-in-worker.any.worker.html - import() should not drain the microtask queue if it fails because of the 'type: css' assertion in a worker"`
- `"/html/semantics/scripting-1/the-script-element/module/dynamic-import/microtasks/with-import-assertions.any.html - import() should not drain the microtask queue if it fails while validating the 'type' assertion"`
- `"/html/semantics/scripting-1/the-script-element/module/dynamic-import/microtasks/with-import-assertions.any.worker.html - import() should not drain the microtask queue if it fails while validating the 'type' assertion"` | refactor | low | Critical |
2,490,518,612 | go | all, x/build/cmd/relui: automate go directive maintenance in golang.org/x repositories | ## Abstract
The value of the `go` directive in golang.org/x repositories is automatically maintained to be at least 1.(N-1).0, where Go 1.N is the most recent major Go release, and Go 1.(N-1) is the previous major Go release.
## Background
In the beginning, there was the GOPATH mode and versions of dependencies of golang.org/x repositories weren't explicitly tracked. Go 1.11 introduced the module mode, and over time it became the default mode. All golang.org/x repositories had an initial go.mod file checked in, and that file was maintained manually. This meant that a bug fix or a new feature in one golang.org/x repository didn't benefit another golang.org/x repository until someone chose to manually update that dependency. It also meant that eventual updates sometimes jumped many versions at once to catch up. This was resolved in 2022, when an automated monthly relui workflow began to create tags and update golang.org/x dependencies across all golang.org/x repositories (issue #48523).
At this point there are upwards of 35 [golang.org/x](https://golang.org/x) repositories. Owners of each repository update the "go" directive manually, ad-hoc, so golang.org/x repositories may receive different levels of "go" directive maintenance. For example, owners of the golang.org/x/mod module wished to use the new-to-Go-1.22 `go/version` package as soon as Go 1.23 came out, and so its "go" directive was recently updated to "1.22.0". On the other hand, golang.org/x/image hasn't been updated in a while, and its "go" directive is currently still at "1.18", which itself was an upgrade from "1.12" in [CL 526895](https://go.dev/cl/526895) as part of bringing all golang.org/x repos to use at minimum Go 1.18 language version (issue #60268).
Leaving go directive maintenance to be done entirely manually creates the possibility of some repositories staying on an older Go language version longer. When there's enough of a need to finally upgrade it to a recent Go language version, this requires a change across multiple major Go releases at once, which can be harder to review. Having continuous, smaller incremental upgrades requires creating many CLs for all of golang.org/x repositories every 6 months, which is toilsome if always done manually.
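The version floor the proposal maintains is mechanical; a sketch (the function name and parsing are illustrative assumptions, not relui code):

```python
def minimum_go_directive(latest_release: str) -> str:
    """Given the latest major Go release (e.g. "1.23"), return the
    minimum `go` directive the proposal would maintain: 1.(N-1).0."""
    major, minor = latest_release.split(".")[:2]
    if major != "1":
        raise ValueError("expected a 1.N release string")
    return f"1.{int(minor) - 1}.0"

# While Go 1.23 is the most recent major release, x/ repos would be
# kept at a `go` directive of at least 1.22.0.
print(minimum_go_directive("1.23"))  # → 1.22.0
```

An automated workflow would apply this floor across all repositories on each major release, replacing today's ad-hoc per-repository bumps.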
## Design
Design document at [go.dev/design/69095-x-repo-continuous-go](https://go.googlesource.com/proposal/+/HEAD/design/69095-x-repo-continuous-go.md).
CC @golang/release. | Builders,Proposal,Proposal-Accepted | medium | Critical |
2,490,524,792 | TypeScript | Design Meeting Notes, 8/27/2024 |
# Parameterizing `TypedArray`s
https://github.com/microsoft/TypeScript/pull/58573
* We didn't get the chance to add the es2024 target
* `ArrayBuffer` got new members that `SharedArrayBuffer` does not have.
* Previously, `SharedArrayBuffer` just had two members apart from `byteLength` and `slice`, making them interchangeable.
* es2024 has new members for each of these.
* Proposed changes make them no longer interchangeable.
* `ArrayBufferLike` is the best type to describe both.
* Why?
* The WebCrypto APIs only allow `ArrayBuffer`, and not `SharedArrayBuffer`,
* e.g. `crypto.subtle.digest`
* Also, `ArrayBuffer` is not a transferable object.
* Makes it hard when you try to get the underlying buffer via `someUint8Array.buffer`.
* What is the underlying idea of how these compose?
* `ArrayBuffer` is a non-indexable span of memory. You use an "ArrayBuffer view" to access the memory.
* They're not thread-safe. They're only meant to be read/written from within a single thread. If you want to share memory, you either copy the memory or transfer it entirely.
* `SharedArrayBuffer` looks like an `ArrayBuffer` but operates over memory in a shared global heap and has unordered but sequentially consistent writes.
* So the idea is to parameterize each of these views over the underlying buffer type.
```ts
interface Uint8Array<Buffer extends ArrayBufferLike = ArrayBufferLike> {
    // ...
    readonly buffer: Buffer;

    // Most methods and constructors return a view with a local-only ArrayBuffer
    new (length: number): Uint8Array<ArrayBuffer>;

    // (not this one)
    new <T extends ArrayBufferLike>(buffer: T, byteOffset?: number, length?: number): Uint8Array<T>;

    new (array: ArrayLike<number> | ArrayBuffer): Uint8Array<ArrayBuffer>;

    // ...
    filter(predicate: /*...*/): Uint8Array<ArrayBuffer>;
}
```
* **Note the above code is roughly transcribed, don't look at this as precise.**
* Problems:
* `Buffer` subtypes `Uint8Array`.
* We say if you extend a base type, that base type has to have a consistent construct signature.
* Would have to make `Buffer` generic - and to do that, we would have to start using `typesVersions` because the old `Uint8Array` isn't generic.
* Workaround: just change the returned type to `Buffer & WithArrayBufferLike<...>` in the return types of `slice` and `subarray`.
* Why not just forward-declare `Uint8Array` as generic with an optional type parameter?
* Also, return `this` in some cases.
* What if we fixed up stuff like `crypto.subtle.digest` etc. to accept `SharedArrayBuffer` even though they don't take those?
* Fixes the DOM, but doesn't fix everything.
* Could say the underlying default should be `ArrayBuffer`, not `ArrayBufferLike`.
* We created `ArrayBufferLike` and traditionally these have never had a noticeable difference.
# File Extension Rewriting, `--experimental-transform-types`/`--experimental-strip-types`, and Multi-Project Builds
https://github.com/microsoft/TypeScript/pull/59767
* Last week, we discussed rewriting relative file extensions. Had concerns, mainly around monorepo-style codebases.
* In the meantime, we have a prototype PR.
* Sample project
```ts
// packages/lib/src/math.ts
export function add(a: number, b: number) {
    return a + b;
}

// packages/lib/src/main.ts
export * from "./math.ts";

// packages/app/src/main.ts
import { add } from "@typescript-node/lib";

console.log(add(1, 2));
```
* By default this doesn't work, but...
```json5
{
  // ...
  "exports": {
    ".": {
      "typescript": "./src/main.ts",
      "import": "./dist/main.js",
    }
  }
}
```
* Works when you run with `node --conditions typescript`.
* Almost right, but it's not safe to publish TypeScript - if this `exports` map was published to npm and run with `node --conditions typescript`, resolution would fail within the published package.
* One way is to erase here - but no built-in tooling to do this.
* @colinhacks suggested namespacing on a per-package basis for publishing.
```json5
{
  // ...
  "exports": {
    ".": {
      "@my-special-namespace/source": "./src/main.ts",
      "import": "./dist/main.js",
    }
  }
}
```
* Can also erase these, but not sure what tools do that.
* `moduleSuffixes`
* Nothing special needed there, but you can't really take advantage of extension rewriting in certain circumstances.
* You can't name something `foo.ts.android.ts`, but you also can't write `foo.ts.ts` anyway.
* Probably will be very rare - this is mainly for React Native, and frankly really unhinged to do this.
* Now what if projects don't take advantage of workspaces and just do a direct relative import?
* `import { add as _add } from "../../lib/src/main.ts";`
* Won't work if `outDir` is `dist` because it needs to be rewritten to `../../lib/dist/main.ts`.
* It just doesn't work in some circumstances and we can give an error there.
* You still *can* use relative imports - everything just needs to end up in the same output folder. This is, for example, how TypeScript's build works! So even we could do this.
* For clarity: relative imports work for the following...
```
root/
├── src/
│ ├── projA/
│ ├── projB/
│ └── projC/
└── dist/
├── projA/
├── projB/
└── projC/
```
but relative imports *do not* work for the following.
```
projects/
├── projA/
│ ├── src/
│ └── dist/
├── projB/
│ ├── src/
│ └── dist/
└── projC/
├── src/
└── dist/
```
* Sample PR to arethetypeswrong that makes everything work with `--experimental-transform-types`: https://github.com/arethetypeswrong/arethetypeswrong.github.io/pull/194
* Notable interesting details:
* Tests that can work against both the TS source and JS output! One just passes a specific `--conditions`.
* Thought you needed tsconfig custom `conditions`, but you don't. TypeScript's project references are smart enough to map output files to input files.
* Did a regex replace on relative paths and got one wrong, turning `#internal/getProbableExports.js` into `#internal/getProbableExports`.
* Reinforced the need for good errors.
* What would all this look like without project references?
* Like one big `tsconfig.json`?
* Not necessarily.
* *Probably* works, just need to break things apart by packages and can't use relative paths.
* Should Node automatically have a condition?
* Interesting long-term, but maybe it's good for people to have a specific level of control.
* Boilerplate-y to have to write `--conditions @my-namespace/source` and `@my-namespace/source` throughout `exports`/`imports`, but probably worth the control.
* It's not just boilerplate though, it's more about not having all these conditions published, and not exposing this to users. Really would be ideal if these conditions could be automatically erased before publishing.
* otherwise, we feel good about this. It's not 0-config throughout, but it feels like there is a story between the "I can start a Node server fast" and "I can break my projects apart into multiple pieces"/"I want to publish stuff to npm" that we feel good about.
| Design Notes | low | Critical |
2,490,533,487 | flutter | [Pigeon] Add Equatable conformance for Swift classes | In Swift, a class has to conform to [Equatable](https://developer.apple.com/documentation/swift/equatable) to allow comparisons with the `==` operator. This is particularly useful for me when writing tests. When using Obj-C, it's handy to use the generated `toList` method to compare stuff, but that no longer works with swift as you can't compare `[Any?]` types.
Adding equatable conformance just requires overriding the `==` method like so
```swift
static func == (lhs: FooType, rhs: FooType) -> Bool {
    return lhs.var1 == rhs.var1
        && lhs.var2 == rhs.var2
    // and so on
}
```
I've been using extensions, but it would be nice if this was something part of the generated class already, since it's kind of tedious having to manually write that for every single class. | c: new feature,package,c: proposal,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
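For comparison, the same field-wise pattern in Python (purely illustrative, since the request is about Pigeon's generated Swift; `FooType` and its fields mirror the snippet above and are not real generated code):

```python
class FooType:
    """Illustrative analogue of a Pigeon-generated data class."""
    def __init__(self, var1, var2):
        self.var1 = var1
        self.var2 = var2

    # Field-wise equality, the Python counterpart of overriding `==`.
    def __eq__(self, other):
        if not isinstance(other, FooType):
            return NotImplemented
        return self.var1 == other.var1 and self.var2 == other.var2

assert FooType(1, "a") == FooType(1, "a")
assert FooType(1, "a") != FooType(2, "a")
```

The point is the same in both languages: without the override, comparison falls back to identity (or, in Swift, is simply unavailable).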
2,490,534,369 | flutter | kTouchSlop too low and not configurable (RE: kTouchSlop const needs to change from 18 to 64 for touches to feel responsive #137962) | ### Steps to reproduce
This is a follow up on the unresolved-but-closed issue: https://github.com/flutter/flutter/issues/137962
I'm experiencing the same issue implementing a touch-screen keyboard: the 18 (logical) px `kTouchSlop` constant causes typing at a reasonable (not fast) speed to be impractical. This is on a relatively "full size" touch keyboard with keys about 100 (L)px square.
`TapGestureRecognizer` (and also `LongPressGestureRecognizer`) do not allow configuration of a touch slop value. Other gestures (e.g. pan) do support enclosing a `GestureDetector` in a `MediaQuery` with a custom `DeviceGestureSettings` value specifying touch slop.
`TapGestureRecognizer` and `LongPressGestureRecognizer` both extend `PrimaryPointerGestureRecognizer`. `PrimaryPointerGestureRecognizer` does allow configuration of touch slop, but the extending Tap/LongPress recognizers don't expose it for configuration. I've had success creating my own gesture recognizer based on `PrimaryPointerGestureRecognizer`, differing only from `TapGestureRecognizer` in its ability to set touch slop.
In my case, when the slop is upgraded to 64px, the keyboard can be typed on at reasonable/fast speeds. When typing quickly, a user's touch will be less precise and drag more across keys.
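For illustration, this is the slop check a tap recognizer effectively performs; a hedged sketch (the function name and tuple arguments are assumptions, not Flutter's actual implementation):

```python
import math

def tap_cancelled(down, current, slop=18.0):
    """A tap recognizer rejects the gesture once the pointer drifts
    more than `slop` logical pixels from the touch-down point."""
    dx = current[0] - down[0]
    dy = current[1] - down[1]
    return math.hypot(dx, dy) > slop

# A 30 px drift while typing quickly: rejected at the default 18 px
# slop, accepted with the proposed 64 px slop.
print(tap_cancelled((0, 0), (30, 0)))            # → True
print(tap_cancelled((0, 0), (30, 0), slop=64))   # → False
```

With the default 18 px slop a fast typist's drift cancels the tap; raising the slop (as the custom recognizer does) keeps it.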
Even outside the keyboard, it appears the entire application could benefit from a significantly larger touch slop. With 18px it requires fairly precise taps.
### Expected results
Touch slop should be configurable on a per-application, per gesture-detector, or per-gesture-recognizer basis. `TapGestureRecognizer` and `LongPressGestureRecognizer` should support `MediaQuery`-based `DeviceGestureSettings`.
### Actual results
Touch slop is hardcoded as `kTouchSlop` constant and cannot be modified. `DeviceGestureSettings` do not work on tap/longpress gestures.
### Code sample
```dart
child: MediaQuery(
  data: MediaQuery.of(context).copyWith(
    gestureSettings: const DeviceGestureSettings(
      touchSlop: 64,
    ),
  ),
  child: GestureDetector(
    onTap: widget.onTap,
    // ...
  ),
),
```
### Screenshots or Video
N/A
### Logs
N/A
### Flutter Doctor output
N/A | framework,f: gestures,c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,490,555,129 | vscode | Change rendered text in readonly markdown code to not include `or press enter to edit` | Open a diff viewer for notebooks with an empty md output.
The following is displayed

Given that the content is readonly, we should not include the text `double-click or press enter to edit` | bug,papercut :drop_of_blood:,notebook-markdown,notebook-output,notebook-diff | low | Minor |
2,490,592,099 | vscode | Second layer of filtering after returning from `FileSearchProviderNew`'s `provideFileSearchResults(...)` | Testing #226668
I suspect here that I am just not too well-versed in how these APIs work and this is as-expected, but thought i'd ask here anyway :)
For this example I have hardcoded a single file to always return from `provideFileSearchResults(...)`
```ts
class mySearchProvider implements vscode.FileSearchProviderNew {
    provideFileSearchResults(pattern: string, options: vscode.FileSearchProviderOptions, token: vscode.CancellationToken): vscode.ProviderResult<vscode.Uri[]> {
        return [
            vscode.Uri.parse('memfs:/josh.txt'),
        ];
    }
}
```
As expected, when I search for any substring of `josh` I see it pop up in the quick open search window

As expected, I've made this utterly useless since it won't actually show me the files in my project. But in this case, I still expected to see `josh.txt` since it was hardcoded (as shown above).

Again, likely a misunderstanding on my part of how the API works. I thought this API was the "source of truth" for providing search results, yet my hardcoded value is being filtered by something outside of my (apparent) control.
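A sketch of the kind of client-side subsequence filter that would explain this behavior (illustrative only; I haven't confirmed this is VS Code's actual scorer):

```python
def matches_query(filename: str, query: str) -> bool:
    """Quick open appears to apply its own fuzzy filter on top of
    provider results: the query must occur as a subsequence of the
    filename. Illustrative sketch, not VS Code source."""
    it = iter(filename.lower())
    return all(ch in it for ch in query.lower())

results = ["josh.txt"]
print([r for r in results if matches_query(r, "js")])   # → ['josh.txt']
print([r for r in results if matches_query(r, "xyz")])  # → []
```

Under a filter like this, the hardcoded `josh.txt` would only survive when the typed query fuzzy-matches it, which matches what the screenshots show.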
| search,polish,search-api | low | Minor |
2,490,662,392 | pytorch | LSTM inference c++ threads block on DropoutState | ### 🐛 Describe the bug
I'm doing RL with multiple C++ threads on a single GPU.
I find that all threads block in LSTM forward, in `get_dropout_state`, on a static variable:
RNN.cpp
```cpp
_cudnn_impl() {
  ...
  auto& dropout_state = get_dropout_state(dropout_p, train, input.options());
  std::unique_lock<DropoutState> lock{dropout_state};
  ...
}

get_dropout_state() {
  ...
  static std::vector<DropoutState> dropout_state_cache{
      static_cast<size_t>(cuda::getNumGPUs())};
  ...
}
```
And the comment on `DropoutState` says:
// Every time we use a dropout state, we need to synchronize with its event,
// to make sure all previous uses finish running before this one starts. Once
// we’re done, we record the event to allow others to synchronize with this
// kernel. Those events are really needed only for inter-stream sync on a
// single GPU. I doubt anyone will want to run cuDNN RNNs in parallel on a
// single GPU, so they should end up being complete no-ops.
In fact, LSTM forward doesn't need a `DropoutState`, but it still takes the mutex lock on it.
Is it possible to bypass the mutex lock in this situation?
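The bypass being asked for amounts to a guard like the following; a hypothetical sketch in Python (not PyTorch's C++, and the function name is an assumption):

```python
def needs_dropout_state_lock(dropout_p: float, train: bool) -> bool:
    """cuDNN only consumes the dropout state when dropout can actually
    fire, i.e. p > 0 during training. Inference (train=False) or
    p == 0 could skip the shared mutex entirely."""
    return train and dropout_p > 0.0

assert not needs_dropout_state_lock(0.5, train=False)  # inference: no lock
assert not needs_dropout_state_lock(0.0, train=True)   # p == 0: no lock
assert needs_dropout_state_lock(0.5, train=True)       # real dropout: lock
```

Concurrent inference threads would then never contend on the per-GPU `DropoutState`, which is exactly the case the source comment assumed would not happen.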
### Versions
v2.4.0
cc @jbschlosser @mikaylagawarecki | module: cpp,module: rnn,triaged,module: multithreading | low | Critical |
2,490,713,418 | pytorch | nn.Module.to(memory_format= channels_last format) failed if containing 5D parameters | ### 🐛 Describe the bug
The following code may fail:
```python
import torch
from torch import nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.p = nn.Parameter(torch.zeros((1, 8, 1, 1, 256)))

a = A().to(memory_format=torch.channels_last)
```
, which is due to the following code in `nn.Module`:
```python
def convert(t):
    if convert_to_format is not None and t.dim() in (4, 5):
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
                    non_blocking, memory_format=convert_to_format)
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
```
Should it do nothing, or just throw a warning, if any parameter cannot be converted to `channels_last` or `channels_last_3d`?
Or shall we convert the tensors in different ways, like this?
```python
def convert(t):
    if ((convert_to_format == torch.channels_last and t.dim() == 4)
            or (convert_to_format == torch.channels_last_3d and t.dim() == 5)):
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
                    non_blocking, memory_format=convert_to_format)
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
```
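The rank/format mapping that the second proposal encodes can be checked in isolation; a torch-free illustrative sketch (string tokens stand in for the `torch` memory-format objects):

```python
CHANNELS_LAST = "channels_last"        # expects 4-D tensors
CHANNELS_LAST_3D = "channels_last_3d"  # expects 5-D tensors

def should_convert(memory_format: str, ndim: int) -> bool:
    """Only convert a parameter when its rank matches the requested
    format, leaving e.g. a 5-D parameter untouched under
    channels_last. Illustrative, not torch code."""
    return (memory_format == CHANNELS_LAST and ndim == 4) or (
        memory_format == CHANNELS_LAST_3D and ndim == 5
    )

# The failing case from the repro: a 5-D parameter with channels_last
# would simply be skipped instead of raising.
assert not should_convert(CHANNELS_LAST, 5)
assert should_convert(CHANNELS_LAST, 4)
assert should_convert(CHANNELS_LAST_3D, 5)
```

This is the behavioral difference between the current `t.dim() in (4, 5)` check and the proposed per-format check.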
### Versions
/usr/local/lib/python3.10/runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Collecting environment information...
PyTorch version: 2.4.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.13 (main, Apr 26 2024, 04:45:52) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-4.15.0-189-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 525.78.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
Stepping: 7
CPU MHz: 1504.649
BogoMIPS: 4600.00
Virtualization: VT-x
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 32 MiB
L3 cache: 44 MiB
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] open-clip-torch==2.24.0
[pip3] torch==2.4.0+cu118
[pip3] torchlaunch==1.0
[pip3] torchvision==0.19.0+cu118
[pip3] triton==3.0.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: convolution,triaged | low | Critical |
2,490,734,924 | godot | AudioStreamPlayer, useless checkbox | ### Tested versions
4.3.stable
### System information
windows 10
### Issue description
The first checkbox is useless, the second one is the one that works.

At first i clicked the first checkbox but later down the line realized that the song didn't loop when i thought i clicked a loop feature
### Steps to reproduce
Clicking the looping checkbox turns on the looping checkbox, and clicking "On" turns on that checkbox.
If both are on and you click looping, it turns both off.
### Minimal reproduction project (MRP)
[audio.zip](https://github.com/user-attachments/files/16775544/audio.zip)
| discussion,topic:editor,topic:audio | low | Minor |
2,490,744,298 | godot | Can't assign a node not in the tree to `SceneTree.current_scene` | ### Tested versions
Reproducible in Godot v4.3.stable.mono.official [77dcf97d8] - Not tested in other builds
### System information
Windows 10 - Godot v4.3.stable.mono.official [77dcf97d8]
### Issue description
Method `SceneTree::set_current_scene()` won't allow setting `current_scene` to a node that is not in the tree.
The assertion occurs here:
https://github.com/godotengine/godot/blob/8e80c1070420cf7f9fd9ffcefe9a12f05cfcbb64/scene/main/scene_tree.cpp#L1457
This behavior makes it impossible to implement a custom scene manager that changes the scene manually and properly preserves the behavior of `SceneTree.change_scene_to_file()` and `SceneTree.change_scene_to_packed()` regarding `SceneTree.current_scene`. That is because, when manually changing the scene, the node intended to be the next `current_scene` can't be assigned to `SceneTree.current_scene` until after it is added to the scene; but that's what `SceneTree.change_scene_to_file()` and `SceneTree.change_scene_to_packed()` do. (as seen below)
https://github.com/godotengine/godot/blob/8e80c1070420cf7f9fd9ffcefe9a12f05cfcbb64/scene/main/scene_tree.cpp#L1470
A scene manager made in GDScript or C# can only change `SceneTree.current_scene` after the node is added to the scene. That means `SceneTree.current_scene` will be `null` during `_ready()` and `_enter_tree()`, whereas you *can* access `SceneTree.current_scene` from those callbacks when the scene is changed using the native methods.
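The ordering constraint can be modeled in a few lines; an illustrative Python sketch of the assertion, not engine code:

```python
class FakeSceneTree:
    """Minimal model of SceneTree: the setter mirrors the C++ assertion
    that the node must already be a child of root."""
    def __init__(self):
        self.root_children = []
        self.current_scene = None

    def set_current_scene(self, node):
        if node is not None and node not in self.root_children:
            raise RuntimeError(
                'Condition "p_scene && p_scene->get_parent() != root" is true.')
        self.current_scene = node

tree = FakeSceneTree()
scene_b = object()
try:
    tree.set_current_scene(scene_b)   # before add_child: rejected
except RuntimeError:
    pass
tree.root_children.append(scene_b)    # add_child first...
tree.set_current_scene(scene_b)       # ...then the assignment succeeds,
                                      # but by now _ready() has already run.
```

A custom scene manager is forced into the second ordering, which is the opposite of what `change_scene_to_packed()` does internally.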
### Steps to reproduce
Example:
```csharp
using Godot;

public partial class ManualSceneManager : Node
{
    public void ChangeToSceneB()
    {
        // Remove & free current scene
        Node currentScene = GetTree().CurrentScene;
        currentScene.GetParent().RemoveChild(currentScene);
        currentScene.Free();

        // Load SceneB
        Node sceneB = ResourceLoader.Load<PackedScene>("res://scene_b.tscn").Instantiate();

        // Change to SceneB
        GetTree().CurrentScene = sceneB; // Error: Condition "p_scene && p_scene->get_parent() != root" is true.
                                         // <C++ Source> scene/main/scene_tree.cpp:1399 @ set_current_scene()
        GetTree().Root.AddChild(sceneB);
        GetTree().CurrentScene = sceneB; // Here, the assignment is allowed without errors, but SceneB._EnterTree()
                                         // and SceneB._Ready() have already been called while
                                         // GetTree().CurrentScene was null.
    }
}
```
### Minimal reproduction project (MRP)
I uploaded a project for this issue, but it's basically just the example code above, nothing special about it.
Link: https://github.com/leonardoraele/mrp_current_scene | discussion,topic:core | low | Critical |
2,490,842,761 | pytorch | [DTensor] use P2P for complicated transformation when redistributing tensor | ### 🚀 The feature, motivation and pitch
# Motivation
For complicated `DTensor` redistribution (e.g. `[S(0), S(1)] -> [S(1), S(0)]`), it's likely that only GPU1 and GPU2 need to communicate (when tensor and mesh are both square) and can be achieved by P2P operations.
The current implementation only applies rule-based redistribution; for the above case, it does the following:
1. `S(1)` -> `R` on mesh dim 1
2. `S(0)` -> `S(1)` on mesh dim 0
3. `R` -> `S(0)` on mesh dim 1
Instead, P2P does:
1. rank1 sends local_tensor to rank2
2. rank2 sends local_tensor to rank1
And they can be done concurrently since there is no data dependency. This helps to optimize both communication volume and intermediate tensor buffer size.
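The claim that only a pair of ranks needs to talk can be sketched concretely (illustrative Python, not DTensor's implementation): for `[S(0), S(1)] -> [S(1), S(0)]` on a square 2-D mesh, rank `(i, j)` holds tensor block `(i, j)` and needs block `(j, i)`, which is owned by rank `(j, i)`.

```python
from itertools import product

def p2p_exchanges(mesh_dim0: int, mesh_dim1: int):
    """Return the (receiver, sender) rank pairs that actually need to
    communicate for [S(0), S(1)] -> [S(1), S(0)] on a square 2-D mesh.
    Sketch of the motivating example only."""
    def rank(i, j):
        return i * mesh_dim1 + j
    pairs = []
    for i, j in product(range(mesh_dim0), range(mesh_dim1)):
        if i != j:  # diagonal ranks already hold the right block
            pairs.append((rank(i, j), rank(j, i)))
    return pairs

# On a 2x2 mesh only ranks 1 and 2 exchange; ranks 0 and 3 do nothing.
print(p2p_exchanges(2, 2))  # → [(1, 2), (2, 1)]
```

The rule-based path, by contrast, makes every rank participate in three collectives for the same result.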
# Experiment
As discussed in #132751, one major concern is that this method cannot utilize comm collectives and might suffer when communicating between 2 nodes. After conducting simple experiments, I believe it still benefits the communication time considering the reduced communication volume.
## Setup
The above case was conducted with a `4*4` mesh (2 nodes, 8 GPUs each, NV8 fully connected and InfiniBand is used), `rule` refers to `main` implementation, and `p2p` refers to the above method. The 2-d square tensor size increases along the x-axis, and execution time is recorded along the y-axis.
## Result
### `[S(0), S(1)] -> [S(1), S(0)]`

### `[S(0), S(0)] -> [S(1), S(1)]`

# Implementation: Doing P2P and rule-based in a hybrid way
[benchmark file](https://github.com/botbw/pytorch/blob/main/bench1.py)
I do observe P2P suffers in some cases, especially when the redistribution can be done using a single collective. Thus I implemented a draft redistribute function such that it utilizes collectives whenever possible, and uses P2P to handle the rest.
I roughly tested the implementation with different mesh settings: `2*2`, `4*2`, `8*2`, `4*4` (1 node for the first 2 settings and 2 nodes for the last 2), the microbenchmark was done with `(32, 8192, 4096)` tensor redistributing with any placement combination from [R, S(0), S(1), S(2)] (4 ** 4 in total). The performance is as follows (green dots indicate that this implementation reduces communication time):




And the hybrid implementation doesn't hurt the performance we got from *Experiment* section:
### `[S(0), S(1)] -> [S(1), S(0)]`

### `[S(0), S(0)] -> [S(1), S(1)]`

# Other
I used additional buffers when using P2P for easier implementation so I didn't test on memory optimization. If you guys think P2P makes sense considering the experiment above, do let me know and I'm happy to work on this.
You can find the above experiment code and draft implementation in [this fork](https://github.com/botbw/pytorch/), the p2p implementation passed [test_redistribute_p2p.py](https://github.com/botbw/pytorch/blob/main/test/distributed/_tensor/test_redistribute_p2p.py), which is modified from `test_redistribute.py`
cc: @wanchaol
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Major |
2,490,850,049 | PowerToys | An option to revert to the previous UI for Advanced Paste | ### Description of the new feature / enhancement
Reverting to the previous UI will be much more efficient and keyboard-friendly.
### Scenario when this would be used?
All the time. Having to grab the mouse to access history is a terrible click-tax.
### Supporting information
Please 🙏 | Needs-Triage | low | Minor |
2,490,854,464 | stable-diffusion-webui | [Bug]: Including reverse proxy like this is a huge security liability + bugs | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Began debugging why I was able to access service from an internal ip (docker net) but not via my own traefik proxy, and then realized I could also access it from a *.gradio.live url given that the docker template I was using had a share flag included in the entrypoint.
### Steps to reproduce the problem
Start webui.sh without understanding the security risks of opening a connection to an arbitrary url.
### What should have happened?
Do not package a reverse proxy server with this application and then expect people to install it on their home PCs. It's a liability, regardless of your intention. Even inside of a container ecosystem, this is a security risk given that the user didn't configure it themselves. This is potentially a vector for many now insecure devices, and this is not some novel concept.
There are plenty of exploits posted around the web for gradio.live specifically, but allowing users to unknowingly add a `--share` without likely understanding that implication is extremely careless at best. Consider removing the ability to proxy gradio altogether. Consider including a readme with instructions on how to properly set up a secure reverse proxy themselves using nginx + best practices.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Ubuntu Server x86_64 (or any other operating system)
### Console logs
```Shell
None.
```
### Additional information
_No response_ | bug-report | low | Critical |
2,490,854,575 | go | net/http: using io.Copy to copy a file to http.ResponseWriter does not use sendfile | ### Go version
go version go1.22.6 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/root/.cache/go-build'
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/root/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/root/go'
GOPRIVATE=''
GOPROXY='https://*********'
GOROOT='/root/server/go'
GOSUMDB='******************'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/root/server/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='0'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/data/tmp/go-build1438424244=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Implement a Handler for http.Server and try to stream a file:
```go
func (handler Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    file, _ := os.Open("file")
    io.Copy(w, file)
}
```
### What did you see happen?
`io.Copy` does not use `sendfile()` when copying from an `*os.File` to an `http.ResponseWriter`:
1. `io.Copy` first tries to use `WriterTo` from `*os.File`, but `*http.response` does not implement the syscall.Conn interface;
2. it falls back to `genericWriteTo` in `WriteTo` of `*os.File`;
3. the file gets wrapped in `os.fileWithoutWriteTo` (file.go:269)
4. another call to `io.Copy` tries to use `*http.response.ReadFrom()`, which calls `sendfile()`
5. which tries to use `TCPConn.ReadFrom()`
6. src is not an `*os.File` but an `os.fileWithoutWriteTo` wrapper (sendfile_linux.go:20)
7. it falls back to `genericReadFrom()`, which does all the copying in user space (tcpsock_posix.go:54)
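The failure in steps 3 through 7 boils down to a type-assertion miss; the shape of it, sketched in Python (illustrative, not Go source; the class names mirror the Go types):

```python
class OsFile:
    """Stands in for *os.File, the only type the sendfile fast path accepts."""

class FileWithoutWriteTo:
    """Analogue of os.fileWithoutWriteTo: wraps the file to hide WriteTo.
    Composition, not subclassing, so the type check below no longer
    sees an OsFile."""
    def __init__(self, inner):
        self.inner = inner

def can_use_sendfile(src) -> bool:
    # Mirrors the `src.(*os.File)` type assertion on the sendfile path.
    return isinstance(src, OsFile)

f = OsFile()
assert can_use_sendfile(f)                          # direct file: fast path
assert not can_use_sendfile(FileWithoutWriteTo(f))  # wrapper defeats it
```

Once the wrapper hides the concrete file type, every later fast-path check fails and the copy degrades to a read/write loop.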
### What did you expect to see?
calling `sendfile()` in either `*os.File.WriteTo` or `*http.response.ReadFrom`;
2,490,918,230 | vscode | Features required from Multi-diff editor for Notebook diffing | Feature requests for multi file editor widget:
* [ ] Ability to configure an editor as editable (the right-hand editor)
* Currently cell text models are virtual documents (and they are readonly `TextResourceEditorModel`)
* Unless we make some changes to have another instance of `BaseTextEditorModel` that's read-write (based on whether the notebook document is read-write)
* [ ] Custom context for toolbar actions (currently hardcoded to pass uri) https://github.com/microsoft/vscode/issues/204074
* [ ] Custom collapsible sections (with a hierarchy)
* Custom borders to clearly distinguish between different cells
* [ ] Toggle white space differences per item (or resource type)
* [ ] Change progress label from `No Changed Files` to something else (e.g. `Computing Notebook Diffs`)
* [ ] Ability to hide/collapse unchanged cells (like unchanged lines)
* [ ] Ability to navigate to an item (even if its currently not visible, required to navigate to next/previous changed cell)
* [ ] Custom editors (webview support in outputs) https://github.com/microsoft/vscode/issues/206062
* [ ] Diff view ruler
| feature-request,notebook-diff | low | Minor |
2,490,950,175 | react-native | Direct JSC debugging cannot be used in Mac simulator from 0.74.1 | ### Description
When I updated React Native to 0.74.1, I found that Safari DevTools (direct JSC debugging) cannot inspect the JSContext of the iPhone simulator on my Mac, which works correctly in 0.74.0.
I checked the change records and found this commit, it looks like JSC debugging has been banned in MacOS, but I really need it.

https://github.com/facebook/react-native/commit/0a4d97362f5a40cff62edce5200c3e7e8622d912
### Steps to reproduce
1. Init react native project without framework(react native >= 0.74.1), like https://reactnative.dev/docs/getting-started-without-a-framework
2. disable hermes
3. npm run ios(with iPhone simulator)
4. open Safari DevTools; no JSContext is available in the iPhone simulator
### React Native Version
0.74.1
### Affected Platforms
Runtime - iOS, Build - MacOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.2.1
CPU: (8) x64 Apple M3
Memory: 28.77 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.11.1
path: ~/.nvm/versions/node/v20.11.1/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v20.11.1/bin/yarn
npm:
version: 10.2.4
path: ~/.nvm/versions/node/v20.11.1/bin/npm
Watchman:
version: 2024.04.08.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.4
- iOS 17.4
- macOS 14.4
- tvOS 17.4
- visionOS 1.1
- watchOS 10.4
Android SDK: Not Found
IDEs:
Android Studio: 2023.3 AI-233.14808.21.2331.11709847
Xcode:
version: 15.3/15E204a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.74.1
wanted: 0.74.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: false
newArchEnabled: false
```
### Stacktrace or Logs
```text
no Stacktrace or logs
```
### Reproducer
https://github.com/yandadaFreedom/test-jsc-macOS
### Screenshots and Videos
_No response_ | Needs: Author Feedback,Needs: Repro,Newer Patch Available | low | Critical |
2,491,018,149 | ui | [bug]: Duplicate classNames(border) in checkbox, radio group | ### Describe the bug
If I install the Checkbox via the CLI, I encounter a duplicate border-color issue.
<img width="500" alt="image" src="https://github.com/user-attachments/assets/ac3fc068-328a-451a-aeaf-6cfdf31ce8aa">
[PR#1089](https://github.com/shadcn-ui/ui/pull/1089) said that this problem had been fixed... I found the relevant [comment](https://github.com/shadcn-ui/ui/issues/692#issuecomment-1605357192) on the original bug issue, and I confirmed that the bug was fixed for Input, Alert, TextArea, and Select. But I still encounter the same error when I install Checkbox or Radio Group.
`transform-css-vars.ts`'s `applyColorMapping` function (`border` => `border border-border`) may be causing the problem, but I don't know what `border-border` is... Does anyone know what `border-border` means? 😢
I created this issue because the original issue was closed even though the bug was not fully resolved.
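To illustrate the suspicion, here is a hypothetical Python re-implementation of such a mapping pass (`border` => `border border-border`). The function name and behavior are assumptions for illustration only — this is not the actual `transform-css-vars.ts` code — but it shows how a non-idempotent rewrite duplicates classes when it runs over sources that already contain the mapped form:

```python
def apply_color_mapping(class_list: str) -> str:
    """Expand the bare `border` utility class to `border border-border`.

    Hypothetical stand-in for the registry transform; note it is NOT
    idempotent: tokens it already emitted get expanded again on re-runs.
    """
    out = []
    for cls in class_list.split():
        if cls == "border":
            out.extend(["border", "border-border"])
        else:
            out.append(cls)
    return " ".join(out)
```

If the real transform behaves like this, running it again over already-transformed component templates would produce exactly the duplicated border classes seen in the screenshot.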
### Affected component/components
Checkbox, Radio Group
### How to reproduce
1. run `pnpm dlx shadcn-ui@latest add checkbox` (or `radio-group`) in your project.
2. go to ui/checkbox...ui/radio-group
3. you'll see the error.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Mac M1 Pro Max, Latest Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,491,084,509 | rust | rustc got SIGSEGV on cargo install sqlx-cli | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
cargo install sqlx-cli
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
(base) lothrop@helheim:~/src/casinobuddy.app$ rustc --version --verbose
rustc 1.82.0-nightly (91376f416 2024-08-12)
binary: rustc
commit-hash: 91376f416222a238227c84a848d168835ede2cc3
commit-date: 2024-08-12
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
### Error output
```
(base) lothrop@helheim:~/src/casinobuddy.app$ cargo install sqlx-cli
Updating crates.io index
Downloaded sqlx-cli v0.8.1
Downloaded 1 crate (100.3 KB) in 0.19s
Installing sqlx-cli v0.8.1
Updating crates.io index
Locking 244 packages to latest compatible versions
Adding addr2line v0.22.0 (latest: v0.24.1)
Adding bitflags v1.3.2 (latest: v2.6.0)
Adding clipboard-win v4.5.0 (latest: v5.4.0)
Adding core-foundation v0.9.4 (latest: v0.10.0)
Adding encode_unicode v0.3.6 (latest: v1.0.0)
Adding endian-type v0.1.2 (latest: v0.2.0)
Adding error-code v2.3.1 (latest: v3.2.0)
Adding fd-lock v3.0.13 (latest: v4.0.2)
Adding foreign-types v0.3.2 (latest: v0.5.0)
Adding foreign-types-shared v0.1.1 (latest: v0.3.1)
Adding generic-array v0.14.7 (latest: v1.1.0)
Adding gimli v0.29.0 (latest: v0.31.0)
Adding hermit-abi v0.3.9 (latest: v0.4.0)
Adding idna v0.5.0 (latest: v1.0.2)
Adding linux-raw-sys v0.4.14 (latest: v0.6.5)
Adding memoffset v0.6.5 (latest: v0.9.1)
Adding miniz_oxide v0.7.4 (latest: v0.8.0)
Adding nix v0.23.2 (latest: v0.29.0)
Adding redox_syscall v0.4.1 (latest: v0.5.3)
Adding rustyline v9.1.2 (latest: v14.0.0)
Adding str-buf v1.0.6 (latest: v3.0.3)
Adding wasi v0.11.0+wasi-snapshot-preview1 (latest: v0.13.2+wasi-0.2.1)
Adding windows-core v0.52.0 (latest: v0.58.0)
Adding windows-sys v0.48.0 (latest: v0.59.0)
Adding windows-sys v0.52.0 (latest: v0.59.0)
Adding windows-targets v0.48.5 (latest: v0.52.6)
Adding windows_aarch64_gnullvm v0.48.5 (latest: v0.52.6)
Adding windows_aarch64_msvc v0.48.5 (latest: v0.52.6)
Adding windows_i686_gnu v0.48.5 (latest: v0.52.6)
Adding windows_i686_msvc v0.48.5 (latest: v0.52.6)
Adding windows_x86_64_gnu v0.48.5 (latest: v0.52.6)
Adding windows_x86_64_gnullvm v0.48.5 (latest: v0.52.6)
Adding windows_x86_64_msvc v0.48.5 (latest: v0.52.6)
Downloaded clap_complete v4.5.24
Downloaded camino v1.1.9
Downloaded foreign-types v0.3.2
Downloaded foreign-types-shared v0.1.1
Downloaded openssl-macros v0.1.1
Downloaded memoffset v0.6.5
Downloaded fd-lock v3.0.13
Downloaded dirs-sys-next v0.1.2
Downloaded dirs-next v2.0.0
Downloaded filetime v0.2.25
Downloaded promptly v0.3.1
Downloaded clap v4.5.16
Downloaded native-tls v0.2.12
Downloaded console v0.15.8
Downloaded openssl-sys v0.9.103
Downloaded rustyline v9.1.2
Downloaded clap_builder v4.5.15
Downloaded nix v0.23.2
Downloaded openssl v0.10.66
Downloaded 19 crates (1.2 MB) in 0.58s
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
Compiling autocfg v1.3.0
Compiling libc v0.2.158
Compiling cfg-if v1.0.0
Compiling serde v1.0.209
Compiling version_check v0.9.5
Compiling typenum v1.17.0
Compiling const-oid v0.9.6
Compiling shlex v1.3.0
Compiling byteorder v1.5.0
Compiling pkg-config v0.3.30
Compiling vcpkg v0.2.15
Compiling scopeguard v1.2.0
Compiling libm v0.2.8
Compiling memchr v2.7.4
Compiling futures-core v0.3.30
Compiling once_cell v1.19.0
Compiling pin-project-lite v0.2.14
Compiling subtle v2.6.1
Compiling tinyvec_macros v0.1.1
Compiling futures-sink v0.3.30
Compiling crossbeam-utils v0.8.20
Compiling tinyvec v1.8.0
Compiling log v0.4.22
Compiling cc v1.1.15
Compiling parking_lot_core v0.9.10
Compiling futures-channel v0.3.30
Compiling futures-io v0.3.30
Compiling base64ct v1.6.0
Compiling futures-task v0.3.30
Compiling itoa v1.0.11
Compiling bytes v1.7.1
Compiling serde_json v1.0.127
Compiling openssl v0.10.66
Compiling generic-array v0.14.7
Compiling ahash v0.8.11
Compiling lock_api v0.4.12
Compiling num-traits v0.2.19
Compiling slab v0.4.9
Compiling percent-encoding v2.3.1
Compiling unicode-bidi v0.3.15
Compiling thiserror v1.0.63
Compiling pin-utils v0.1.0
Compiling foreign-types-shared v0.1.1
Compiling allocator-api2 v0.2.18
Compiling ryu v1.0.18
Compiling zeroize v1.8.1
Compiling foreign-types v0.3.2
Compiling pem-rfc7468 v0.7.0
Compiling form_urlencoded v1.2.1
Compiling utf8parse v0.2.2
Compiling quote v1.0.37
Compiling minimal-lexical v0.2.1
Compiling native-tls v0.2.12
Compiling spin v0.9.8
Compiling paste v1.0.15
Compiling cpufeatures v0.2.13
Compiling concurrent-queue v2.5.0
Compiling der v0.7.9
Compiling syn v2.0.76
Compiling tracing-core v0.1.32
Compiling nom v7.1.3
Compiling unicode_categories v0.1.1
Compiling parking v2.2.0
Compiling equivalent v1.0.1
Compiling openssl-probe v0.1.5
Compiling crc-catalog v2.4.0
Compiling unicode-normalization v0.1.23
Compiling event-listener v5.3.1
Compiling crc v3.2.1
Compiling lazy_static v1.5.0
Compiling crossbeam-queue v0.3.11
Compiling memoffset v0.6.5
Compiling rustix v0.38.35
Compiling hex v0.4.3
Compiling num-bigint-dig v0.8.4
Compiling anstyle-parse v0.2.5
Compiling unicode-properties v0.1.2
Compiling colorchoice v1.0.2
Compiling anstyle-query v1.1.1
Compiling anstyle v1.0.8
Compiling is_terminal_polyfill v1.70.1
Compiling linux-raw-sys v0.4.14
Compiling idna v0.5.0
Compiling semver v1.0.23
Compiling bitflags v1.3.2
Compiling clap_lex v0.7.2
Compiling num-integer v0.1.46
Compiling atoi v2.0.0
Compiling getrandom v0.2.15
Compiling mio v1.0.2
Compiling openssl-sys v0.9.103
Compiling socket2 v0.5.7
Compiling libsqlite3-sys v0.30.1
Compiling dirs-sys-next v0.1.2
Compiling anstream v0.6.15
Compiling rand_core v0.6.4
Compiling crypto-common v0.1.6
Compiling block-buffer v0.10.4
Compiling num-iter v0.1.45
Compiling stringprep v0.1.5
Compiling digest v0.10.7
Compiling base64 v0.22.1
Compiling camino v1.1.9
Compiling strsim v0.11.1
Compiling url v2.5.2
Compiling unicode-width v0.1.13
Compiling spki v0.7.3
Compiling dotenvy v0.15.7
Compiling heck v0.5.0
Compiling sha2 v0.10.8
Compiling hmac v0.12.1
Compiling pkcs8 v0.10.2
Compiling signature v2.2.0
Compiling md-5 v0.10.6
Compiling hkdf v0.12.4
Compiling whoami v1.5.1
Compiling endian-type v0.1.2
Compiling pkcs1 v0.7.5
Compiling sha1 v0.10.6
Compiling clap_builder v4.5.15
Compiling dirs-next v2.0.0
Compiling nix v0.23.2
Compiling flume v0.11.0
Compiling unicode-segmentation v1.11.0
Compiling home v0.5.9
Compiling anyhow v1.0.86
Compiling instant v0.1.13
Compiling sqlformat v0.2.4
Compiling iana-time-zone v0.1.60
Compiling console v0.15.8
Compiling filetime v0.2.25
Compiling chrono v0.4.38
Compiling glob v0.3.1
Compiling serde_derive v1.0.209
Compiling zerocopy-derive v0.7.35
Compiling tokio-macros v2.4.0
Compiling futures-macro v0.3.30
Compiling thiserror-impl v1.0.63
Compiling openssl-macros v0.1.1
Compiling tracing-attributes v0.1.27
Compiling clap_derive v4.5.13
Compiling async-trait v0.1.81
Compiling tokio v1.39.3
Compiling zerocopy v0.7.35
Compiling futures-util v0.3.30
Compiling tracing v0.1.40
Compiling ppv-lite86 v0.2.20
Compiling hashbrown v0.14.5
Compiling clap v4.5.16
Compiling clap_complete v4.5.24
Compiling rand_chacha v0.3.1
Compiling rand v0.8.5
Compiling hashlink v0.9.1
Compiling indexmap v2.4.0
Compiling futures-executor v0.3.30
Compiling futures v0.3.30
Compiling tokio-stream v0.1.15
Compiling backoff v0.4.0
error: rustc interrupted by SIGSEGV, printing backtrace
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(+0x3560663)[0x797a08960663]
/lib/x86_64-linux-gnu/libc.so.6(+0x45320)[0x797a05045320]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(_ZN4llvm15LowerDbgDeclareERNS_8FunctionE+0x11e)[0x797a03025da4]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(_ZN4llvm15InstCombinePass3runERNS_8FunctionERNS_15AnalysisManagerIS1_JEEE+0x80f)[0x797a03023c0f]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(+0x60233e7)[0x797a030233e7]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(_ZN4llvm11PassManagerINS_8FunctionENS_15AnalysisManagerIS1_JEEEJEE3runERS1_RS3_+0xb7e)[0x797a0301e398]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(_ZN4llvm27ModuleToFunctionPassAdaptor3runERNS_6ModuleERNS_15AnalysisManagerIS1_JEEE+0x374)[0x797a03015eb4]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(+0x6015b31)[0x797a03015b31]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.82.0-nightly(_ZN4llvm11PassManagerINS_6ModuleENS_15AnalysisManagerIS1_JEEEJEE3runERS1_RS3_+0x229)[0x797a03785b69]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(LLVMRustOptimize+0x83c)[0x797a0ae87e48]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(+0x5a83be4)[0x797a0ae83be4]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(+0x5a83717)[0x797a0ae83717]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(_RNvXs1_Cs1ZUgZgSYsyZ_18rustc_codegen_llvmNtB5_18LlvmCodegenBackendNtNtNtCsjBMixsTToLf_17rustc_codegen_ssa6traits5write19WriteBackendMethods13optimize_thin+0x61d)[0x797a0aceba33]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(+0x59b02b6)[0x797a0adb02b6]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(+0x59af8a1)[0x797a0adaf8a1]
/home/lothrop/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-9694e87df53074e6.so(+0x59aedab)[0x797a0adaedab]
/lib/x86_64-linux-gnu/libc.so.6(+0x9ca94)[0x797a0509ca94]
/lib/x86_64-linux-gnu/libc.so.6(+0x129c3c)[0x797a05129c3c]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
Couldn't recreate
```
</p>
</details>
| A-LLVM,I-ICE,T-compiler,C-bug,S-needs-repro | low | Critical |
2,491,098,739 | go | x/mobile,runtime: missing stack trace on Android crash | ### Go version
go version go1.22.6 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/brien/Library/Caches/go-build'
GOENV='/Users/brien/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/brien/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/brien/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/r3/v_3z60rx2cxg0s1r9tl9fbmw0000gn/T/go-build4289722356=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Create a go module and build it with gomobile. For example, the build command below will produce an AAR file that can be bundled into an Android app.
```
cd /path/to/my/package
gomobile bind \
-target=android/arm64 -androidapi 24 \
-javapkg my.package \
-trimpath \
-gcflags="-dwarf=true" \
-ldflags="-compressdwarf=false -B gobuildid" \
-o build/android/MyPackage.aar \
package/name
```
The built AAR can be bundled into an Android app with the following build.gradle lines:
```
dependencies {
compileOnly fileTree(dir: "/path/to/my/package/build/android", include: ['*-sources.jar'])
implementation fileTree(dir: "/path/to/my/package/build/android", include: ['*.aar', '*.jar'])
}
```
### What did you see happen?
Now if there is a crash from a goroutine inside the library, the following is printed to the Android logs. e.g. a nil pointer deference
```
2024-08-27 22:33:02.495 22477-22547 AndroidClassName my.package I init
2024-08-27 22:33:02.497 22477-0 Go my.package E panic: runtime error: invalid memory address or nil pointer dereference
--------- beginning of crash
2024-08-27 22:33:02.497 22477-0 Go my.package E [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x6f9cee6a0c]
2024-08-27 22:33:02.497 22477-22537 libc my.package A Fatal signal 6 (SIGABRT), code -6 (SI_TKILL) in tid 22537 (y.package), pid 22477 (y.package)
```
### What did you expect to see?
I would expect to see a full stack trace similar to what happens when a go binary crashes. This is much more useful for crash logging and debugging.
| help wanted,NeedsInvestigation,mobile,compiler/runtime | low | Critical |
2,491,149,463 | node | Adding TCP_FASTOPEN_CONNECT prevents compiling on older systems | ### Version
v22.7.0
### Platform
```text
Linux static1 3.19.4-1.g51ddeac-desktop #1 SMP PREEMPT Mon Apr 13 13:20:55 UTC 2015 (51ddeac) x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Attempt to compile nodejs on an older system that lacks support for TCP_FASTOPEN_CONNECT.
### How often does it reproduce? Is there a required condition?
All the time.
### What is the expected behavior? Why is that the expected behavior?
Be able to compile/build nodejs from source.
### What do you see instead?
`error: ‘TCP_FASTOPEN_CONNECT’ undeclared (first use in this function)` — this error appears multiple times.
### Additional information
Please implement an option to disable this feature if the target platform does not support it.
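As an illustration of the requested graceful fallback: the actual fix would be an `#ifdef TCP_FASTOPEN_CONNECT` guard in the C sources, but the same feature-detection pattern can be sketched in Python (a hypothetical sketch, not Node's build system), probing for the constant and degrading to a no-op when the platform lacks it:

```python
import socket

def tcp_fastopen_connect_opt():
    """Return the platform's TCP_FASTOPEN_CONNECT constant, or None when
    the headers/ABI don't define it, so callers can skip the feature."""
    return getattr(socket, "TCP_FASTOPEN_CONNECT", None)

def enable_fastopen_connect(sock):
    """Best-effort enable: silently becomes a no-op on old platforms."""
    opt = tcp_fastopen_connect_opt()
    if opt is None:
        return False  # feature unavailable: fall back to plain connect()
    sock.setsockopt(socket.IPPROTO_TCP, opt, 1)
    return True
```

The equivalent compile-time guard in Node would let the TCP_FASTOPEN_CONNECT code path vanish on kernels/headers that predate it instead of failing the build.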
Also, read this: https://blog.apnic.net/2021/07/05/tcp-fast-open-not-so-fast/ | cares | low | Critical |
2,491,213,595 | opencv | Parallel support for filter engine | ### Describe the feature and motivation
The role of the filter engine is to handle borders and apply a row and a column filter, or a 2D filter. The current logic allocates a (kernel_height + 3) * (image_width + border_width) buffer, copies (kernel_height + 3) lines into it, adds borders according to the border type, applies the row filter to those lines, and then applies the column filter. It then rolls the next 3 lines into the buffer and repeats. The benefit is memory savings; the disadvantage is that parallel logic cannot be added.
A special case, GaussianBlur, takes advantage of parallelism when the type is `uint8_t` or `uint16_t` and the source matrix isn't a submatrix or `BORDER_ISOLATED`. In `fixedSmoothInvoker`, every thread calculates the lines assigned to it plus (kernel_height - 1) extra lines, which causes duplicate computation.
So, why not add a buffer that stores the whole image with borders and apply the filter in parallel? It takes more memory but less time.
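To make the proposal concrete, here is a minimal Python sketch (illustrative only, not OpenCV code; the names and the 3x3 box kernel are assumptions) of the whole-image-buffer approach: pad once with a replicated border, then compute every output row independently from the shared padded buffer, so rows can be split across threads without the duplicated edge rows of `fixedSmoothInvoker`:

```python
def pad_replicate(img):
    """Pad a 2-D list-of-lists image with a 1-pixel replicated border."""
    padded_rows = [[row[0]] + row + [row[-1]] for row in img]
    return [padded_rows[0]] + padded_rows + [padded_rows[-1]]

def box3(img):
    """3x3 box mean computed against one whole padded buffer.

    Each output row reads only from the shared padded image, so the
    y-loop could be partitioned across threads with no redundant work.
    """
    p = pad_replicate(img)
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            s = sum(p[y + dy][x + dx] for dy in range(3) for dx in range(3))
            row.append(s / 9.0)
        out.append(row)
    return out
```

Parallelizing then amounts to partitioning the `y` loop across workers; the price is the one extra image-sized padded buffer, which is the memory/time trade-off described above.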
Related discussion: https://github.com/opencv/opencv/issues/11012
### Additional context
_No response_ | optimization,feature | low | Minor |
2,491,226,560 | flutter | [SelectionArea] The height of highlight is different when English/non-English characters are used together | ### Steps to reproduce
Run sample code.
### Related issues:
https://github.com/flutter/flutter/issues/54935
### Expected results
The height of highlight should be the same.
<img width="396" alt="image" src="https://github.com/user-attachments/assets/c7df0818-5b7a-42d9-a0af-4833457f61eb">
### Actual results
The height of highlight is different.
<img width="392" alt="image" src="https://github.com/user-attachments/assets/23e416d4-caf2-499c-bef5-6ba5679a9996">
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: const Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
SelectionArea(
child: Text('abc啊啊啊'),
)
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="578" alt="截屏2024-08-28 14 42 20" src="https://github.com/user-attachments/assets/8ca57271-3c8d-4f9b-a9b8-d702343adbf1">
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.22.0-39.0.pre.2, on macOS 14.4 23E214 darwin-arm64, locale zh-Hans-CN)
! Flutter version 3.22.0-39.0.pre.2 on channel [user-branch] at /Users/yangjiakang/flutter/framework
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at
https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision d02292dbc4 (3 个月前), 2024-05-20 21:25:37 -0700
• Engine revision c2ef01f6f1
• Dart version 3.5.0 (build 3.5.0-172.0.dev)
• DevTools version 2.36.0-dev.10
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly
to perform update checks and upgrades.
```
</details>
| framework,f: material design,a: internationalization,has reproducible steps,P2,f: selection,team-design,triaged-design,found in release: 3.24,found in release: 3.25 | low | Critical |
2,491,262,018 | PowerToys | Use "Remove display from desktop" for arbitrary Apps | ### Description of the new feature / enhancement
Windows 10 (and 11) have the following switch hidden in the extended display settings:

As far as I know, this makes a display that is connected to your computer available exclusively for certain apps. Unfortunately, I was not able to find any app making use of this, so I could not really test it. Apparently it needs some special API calls integrated into the application's code.
My idea would be to somehow make PowerToys able to "wrap" an application in order to make it able to use this feature.
### Scenario when this would be used?
I myself have three normal monitors as well as a large wall display connected to my computer. I would like to switch my large display into this mode and have e.g. Edge use it in fullscreen mode to display a dashboard.
### Supporting information
Maybe https://learn.microsoft.com/en-us/windows/win32/gdi/multiple-display-monitors, but I find it hard to find concrete information on how to make use of such displays. | Needs-Triage | low | Minor |
2,491,276,963 | vscode | CodeLens - Actual Command Not Found Error | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
We are developing an extension using `CodeLensProvider`, and we want to refresh our lenses every 5 seconds. To achieve this, we use `onDidChangeCodeLenses` and fire it with `setInterval`.
After `onDidChangeCodeLenses` fires, if `resolveCodeLens` doesn't respond fast enough, the following error pops up:
<img width="450" alt="image" src="https://github.com/user-attachments/assets/8ea8b82a-cbe7-46da-b212-ee3526cf56df">
Does this issue occur when all extensions are disabled?: **Yes**/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: Darwin arm64 23.4.0
Steps to Reproduce:
1. Use the sample [CodeLens](https://github.com/microsoft/vscode-extension-samples/blob/main/codelens-sample/src/CodelensProvider.ts) example.
2. In the provider constructor, add:
`setInterval(() => { this._onDidChangeCodeLenses.fire(); }, 5000);`
3. In `resolveCodeLens`, change the function to `async` and add:
`await new Promise((resolve) => setTimeout(resolve, 3000));`
4. Run the extension.
5. Open a file which is scrollable.
6. Wait for the CodeLens to show.
7. Scroll down and wait 5 seconds.
8. Scroll back up and click on a lens (faster than 3 seconds).
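The timing race in the steps above can be reproduced outside VS Code. Below is a generic Python (asyncio) sketch — not the VS Code API, and the names are illustrative — in which a slow resolution finishes only after the lens model has already been replaced by the 5-second refresh, so anything dispatched against the old handle (like a click) targets a command that no longer exists:

```python
import asyncio

async def resolve_lens(model_version, delay=0.03):
    """Stand-in for a slow resolveCodeLens: it finishes well after the
    provider has fired onDidChangeCodeLenses again."""
    await asyncio.sleep(delay)
    return model_version

async def scenario():
    # User sees lens v1 on screen and may click it at any moment.
    pending = asyncio.create_task(resolve_lens(1))
    await asyncio.sleep(0.01)
    current_version = 2  # refresh timer fired: lens model rebuilt as v2
    resolved_version = await pending
    # The resolved result (and any click) refers to the stale v1 model.
    return resolved_version, current_version

resolved_version, current_version = asyncio.run(scenario())
```

The stale-version mismatch at the end mirrors the "actual command NOT FOUND" error: the clicked lens belongs to a model the editor has already discarded.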
https://github.com/user-attachments/assets/5c4a9e6d-efce-4272-a225-07e686cc622f
Changing `scheduler.schedule()` in this [line](https://github.com/microsoft/vscode/blob/fb831a6a73f7c94630a5184a63daf26903ac8d97/src/vs/editor/contrib/codelens/browser/codelensController.ts#L163) to `this._onModelChange()` fixed it for me, although I'm not sure that's the correct solution because it makes the lens unclickable.
Would appreciate your support
| bug,code-lens | low | Critical |
2,491,294,403 | yt-dlp | Add Merit+ | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
USA
### Example URLs
https://www.meritplus.com/c/s/VQ2aB6Sp?episodeId=uNLp2Rgg&play=1
### Provide a description that is worded well enough to be understood
After pulling the yt-dlp nightly release, I attempted to download https://www.meritplus.com/c/s/VQ2aB6Sp?episodeId=uNLp2Rgg, only to receive an "Unsupported URL" error. I'm able to download with other tools, so I don't believe it is DRM.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.meritplus.com/c/s/VQ2aB6Sp?episodeId=Zly7sCAP']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds [41be32e78] (pip)
[debug] Python 3.10.5 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1n 15 Mar 2022)
[debug] exe versions: ffmpeg 6.1-full_build-www.gyan.dev (setts), ffprobe 6.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.32.3, sqlite3-3.37.2, urllib3-2.0.7, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1831 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.meritplus.com/c/s/VQ2aB6Sp?episodeId=Zly7sCAP
[generic] VQ2aB6Sp?episodeId=Zly7sCAP: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] VQ2aB6Sp?episodeId=Zly7sCAP: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.meritplus.com/c/s/VQ2aB6Sp?episodeId=Zly7sCAP
Traceback (most recent call last):
File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\extractor\common.py", line 740, in extract
ie_result = self._real_extract(url)
File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\extractor\generic.py", line 2526, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.meritplus.com/c/s/VQ2aB6Sp?episodeId=Zly7sCAP
```
| site-request,account-needed | low | Critical |
2,491,335,841 | next.js | Wrong segment rendered when quickly reload page and navigate through browser history in Next.js 13+. | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/snowy-platform-7k39wy
### To Reproduce
1. Run the preview of provided SandBox code.
2. Open it in a separate browser tab.
3. Navigate between `Page 1` and `Page 2` routes to create `window.history records` to be able to go back via `browser arrow`.
4. Try to `reload` the page via the `browser reload button` or `keyboard`, then `go back` to the `previous route` (make it as fast as possible)
5. If you did it fast enough you can see `updated URL` and the content from previous page (`segment`) .
6. If you did it not fast enough you can try it again from `step 3`.
### Current vs. Expected behavior
**Actual result:** the page content is still from the previous route even though the URL has been updated.
**Expected result:** the page content must match the route I navigated back to, and the URL must be correct.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.0-canary.132 // Latest available version is detected (15.0.0-canary.132).
eslint-config-next: N/A
react: 19.0.0-rc-eb3ad065-20240822
react-dom: 19.0.0-rc-eb3ad065-20240822
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation, Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
That issue exists in all `Next.js 13+` that I tried to reproduce bug.
- Our QA first found it on our self-deployed website; then I checked the official Next.js examples and the official Vercel website (I suppose it uses the latest Next.js version). I also tried a Sandbox with a minimal app configuration. The problem was reproduced in each case.
- Then I tried to reproduce it in our self-deployed project that uses `Next.js 12`, and the issue doesn't exist there.
[Here is the screen record from official Vercel website that also reproduces that issue.](https://www.loom.com/share/af8dfdedf0f04b018aa26941db76214d?sid=78e7f58d-5676-4162-81e7-74f9e42b1959)
**Current workaround.**
To work around the bug, I check which segment is actually rendered, compare it with what is in the URL, and use `router.push` to sync the URL to the rendered segment. From my testing, it is not possible to correctly change the segment according to the URL, but it is possible to change the URL according to the rendered segment. You have to configure that logic manually for a lot of segments, so it is of course not the best option, but at least we show consistent content and URL to the user.
| bug,Navigation,Pages Router | low | Critical |
2,491,395,685 | neovim | with 'autoindent', cursor moves to wrong position with 'virtualedit=all' and the ^ mark | ### Problem
https://github.com/user-attachments/assets/556a006f-4d29-42df-842f-8f31d15501e7
### Steps to reproduce
1. prepare two files:
`debug.vim`
```
set virtualedit=all
autocmd InsertLeave * :normal `^
```
`abc.txt`
```
vim and emacs
```
2. run command `nvim -u debug.vim abc.txt`
3. move cursor to "s" in "emacs", press `i<CR><Esc>`
### Expected behavior
The cursor should be positioned on the letter `s`, but it actually ends up to the right of `s`.
Vim's behaviour is correct.
### Neovim version (nvim -v)
NVIM v0.10.1
### Vim (not Nvim) behaves the same?
No, Vim 9.1 behaves correctly.
### Operating system/version
ArchLinux 6.10.6-arch1-1
### Terminal name/version
st 0.8.5
### $TERM environment variable
st-256color
### Installation
AUR | bug-vim,marks,insert-mode | low | Critical |
2,491,433,636 | ollama | actively retrieves the content returned from the web page | I would expect Ollama to automatically identify the model and, when a question exceeds the model's capabilities, actively retrieve content from web pages and pass it to the model; the model would then analyze the retrieved content and finally give the answer. | feature request | low | Minor |
2,491,470,968 | stable-diffusion-webui | [Bug]: load sdxl inpaint model wrong | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
In the txt2img interface, I added inpaint in ControlNet and selected sd_xl_base_1.0_inpainting_0.1.safetensors, but the error message "[ControlNet Error] Cannot recognize the ControlModel!" appeared. My checkpoint model is SDXL 1.0/1.5.
### Steps to reproduce the problem
1. txt2img
2. controlnet
3. inpaint
### What should have happened?
normal result
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-08-28-08-37.json](https://github.com/user-attachments/files/16780625/sysinfo-2024-08-28-08-37.json)
### Console logs
```Shell
Loading weights [024c141c50] from E:\5_AI\sd-webui-aki-v4.8\models\Stable-diffusion\sdxl10ArienmixxlAsian_v45Pruned.safetensors
Applying attention optimization: xformers... done.
Weights loaded in 39.8s (send model to cpu: 2.9s, calculate hash: 33.8s, apply weights to model: 1.3s, move model to device: 1.5s).
2024-08-28 15:57:15,811 - ControlNet - INFO - unit_separate = False, style_align = False
2024-08-28 15:57:16,056 - ControlNet - INFO - Loading model: sd_xl_base_1.0_inpainting_0.1 [5679a81a]
2024-08-28 15:57:16,489 - ControlNet - INFO - Loaded state_dict from [E:\5_AI\sd-webui-aki-v4.8\models\ControlNet\sd_xl_base_1.0_inpainting_0.1.safetensors]
*** Error running process: E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "E:\5_AI\sd-webui-aki-v4.8\modules\scripts.py", line 832, in process
script.process(p, *script_args)
File "E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1228, in process
self.controlnet_hack(p)
File "E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1213, in controlnet_hack
self.controlnet_main_entry(p)
File "E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet.py", line 919, in controlnet_main_entry
model_net, control_model_type = Script.load_control_model(p, unet, unit.model)
File "E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet.py", line 436, in load_control_model
control_model = Script.build_control_model(p, unet, model)
File "E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet.py", line 465, in build_control_model
control_model = build_model_by_guess(state_dict, unet, model_path)
File "E:\5_AI\sd-webui-aki-v4.8\extensions\sd-webui-controlnet\scripts\controlnet_model_guess.py", line 292, in build_model_by_guess
raise Exception('[ControlNet Error] Cannot recognize the ControlModel!')
Exception: [ControlNet Error] Cannot recognize the ControlModel!
```
### Additional information
_No response_ | bug-report | low | Critical |
2,491,472,091 | flutter | [webview_flutter][iOS]: Scrolling of ListTile inside webview gets stuck after closing modalBottomSheet. | ### Steps to reproduce
1. Run sample code on an ios device
2. Tap any ListTile on the first page, wait for the second page (a page that uses webview_flutter to display a Flutter web project) to finish rendering, then start a swipe-back gesture, drag partway, and cancel it; we are still on the second page
3. Scroll or tap the second page; it scrolls strangely and does not respond to any tap events
### Expected results
The Flutter web page should scroll and respond to tap events as normal
### Actual results
The Flutter web page scrolls strangely and does not respond to any tap events
### Code sample
<details open><summary>Code sample</summary>
https://github.com/Samaritan123/ios_webview
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/903b30c0-3f69-499d-83c7-434bd56f4c48
</details>
I built an iOS version and a web version of the code sample, and open the web version with webview_flutter from a ListTile.
The first page is a normal Flutter app page; the second page uses webview_flutter to display the web version of the code sample.
As the video shows, before we attempt to swipe back, the second page works well. Once we start a swipe-back gesture and cancel it, the second page behaves strangely.
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.22.3, on macOS 14.4.1 23E224 darwin-arm64
(Rosetta), locale zh-Hans-CN)
• Flutter version 3.22.3 on channel stable at
/Users/kailiangtang-M-YQ6NV/srv/flutter/flutter-3.22.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision b0850beeb2 (6 weeks ago), 2024-07-16 21:43:41 -0700
• Engine revision 235db911ba
• Dart version 3.4.4
• DevTools version 2.34.3
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
[!] Android toolchain - develop for Android devices (Android SDK version 31.0.0)
• Android SDK at /Users/kailiangtang-M-YQ6NV/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/docs/get-started/install/macos#android-setup for
more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.14.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 4.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 11.0.8+10-b944.6916264)
[✓] VS Code (version 1.89.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
```
</details>
| platform-ios,framework,f: scrolling,p: webview,package,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.25 | low | Major |
2,491,504,506 | material-ui | [docs-infra] Remove `https://mui.com/` from API descriptions links | > In the future, I think that we want to automatically remove `https://mui.com/` from those links to have relative URLs.
_Originally posted by @oliviertassinari in https://github.com/mui/material-ui/pull/43472#discussion_r1733661761_
**Search keywords**: | dx,enhancement,scope: docs-infra | low | Minor |
2,491,570,992 | pytorch | Error in torch.export for torch.ops.aten.chunk for dynamic shape | ### 🐛 Describe the bug
While exporting a model with `torch.ops.aten.chunk`, it decomposes into `torch.ops.aten.split.Tensor` in `torch.export`. With dynamic inputs, the following code
```
import torch
import torch_tensorrt
class TestChunk(torch.nn.Module):
    def forward(self, input):
        out = torch.ops.aten.chunk.default(input, 3, 0)
        return out

inputs = [torch.randn(3)]
dynamic_shapes = [[torch.export.Dim("shape", min=1, max=3)]]
exp_program = torch.export.export(TestChunk(), tuple(inputs), dynamic_shapes=dynamic_shapes)
trt_gm = torch_tensorrt.dynamo.compile(exp_program, inputs)
# Run inference
trt_gm(*inputs)
```
fails in `torch.export` with the following error:
```
Traceback (most recent call last):
File "/code/torch_trt/TensorRT/tests/py/dynamo/conversion/split_dynamic.py", line 14, in <module>
exp_program = torch.export.export(TestChunk(), tuple(inputs), dynamic_shapes=dynamic_shapes)
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/__init__.py", line 172, in export
return _export(
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/_trace.py", line 1013, in wrapper
raise e
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/_trace.py", line 986, in wrapper
ep = fn(*args, **kwargs)
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/exported_program.py", line 97, in wrapper
return fn(*args, **kwargs)
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/_trace.py", line 1921, in _export
export_artifact = export_func( # type: ignore[operator]
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/_trace.py", line 1220, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/_trace.py", line 1248, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/root/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/export/_trace.py", line 572, in _export_to_torch_ir
raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: B904
torch._dynamo.exc.UserError: Constraints violated (shape)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of shape = L['input'].size()[0] in the specified range shape <= 3 satisfy the generated guard Eq(((L['input'].size()[0] + ((L['input'].size()[0] + 2)//3) - 1)//(((L['input'].size()[0] + 2)//3))), 3).
Specializations unexpectedly required (shape)! For more information, run with TORCH_LOGS="+dynamic".
- solving the guards generated for shape = L['input'].size()[0] resulted in a specialized value of 3.
Suggested fixes:
shape = 3
```
It seems to fail on a guard generated by `torch.export`. Could someone please look into this?
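The generated guard becomes clearer once the `chunk` arithmetic is expanded: for a length-`n` dimension, `torch.chunk(x, 3)` first picks a split size of `ceil(n/3)` and then produces `ceil(n / split_size)` pieces, so the number of graph outputs itself depends on `n`. A small pure-Python sketch of that arithmetic (no torch required) illustrates why the exporter ends up specializing `n` to 3 to keep a fixed output count:

```python
def num_chunks(n: int, chunks: int = 3) -> int:
    # torch.chunk picks the per-chunk size first; the actual piece count
    # is how many pieces of that size fit into a length-n dimension.
    split_size = -(-n // chunks)      # ceil(n / chunks), i.e. (n + 2) // 3 here
    return -(-n // split_size)        # ceil(n / split_size) -- the guard's LHS

for n in range(1, 4):
    print(n, "->", num_chunks(n))     # 1 -> 1, 2 -> 2, 3 -> 3
```

Since the traced graph returns exactly three tensors, the only value in the declared range 1..3 satisfying `num_chunks(n) == 3` is `n = 3`, which matches the suggested fix `shape = 3`.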
### Versions
Torch- 2.5.0.dev20240827+cu124
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,491,588,877 | flutter | [Android] CameraX preview is rotated 90 degrees | ### What package does this bug report belong to?
camera
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.yaml</summary>
```yaml
name: camera_test
description: "A new Flutter project."
publish_to: 'none'
version: 0.1.0
environment:
sdk: ^3.5.1
dependencies:
flutter:
sdk: flutter
camera: ^0.11.0+2
dev_dependencies:
flutter_test:
sdk: flutter
flutter_lints: ^4.0.0
flutter:
uses-material-design: true
```
</details>
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
camera:
dependency: "direct main"
description:
name: camera
sha256: "26ff41045772153f222ffffecba711a206f670f5834d40ebf5eed3811692f167"
url: "https://pub.dev"
source: hosted
version: "0.11.0+2"
camera_android_camerax:
dependency: transitive
description:
name: camera_android_camerax
sha256: "7cd93578ad201dcc6bb5810451fb00d76a86bab9b68dceb68b8cbd7038ac5846"
url: "https://pub.dev"
source: hosted
version: "0.6.8+3"
camera_avfoundation:
dependency: transitive
description:
name: camera_avfoundation
sha256: "7c28969a975a7eb2349bc2cb2dfe3ad218a33dba9968ecfb181ce08c87486655"
url: "https://pub.dev"
source: hosted
version: "0.9.17+3"
camera_platform_interface:
dependency: transitive
description:
name: camera_platform_interface
sha256: b3ede1f171532e0d83111fe0980b46d17f1aa9788a07a2fbed07366bbdbb9061
url: "https://pub.dev"
source: hosted
version: "2.8.0"
camera_web:
dependency: transitive
description:
name: camera_web
sha256: "595f28c89d1fb62d77c73c633193755b781c6d2e0ebcd8dc25b763b514e6ba8f"
url: "https://pub.dev"
source: hosted
version: "0.3.5"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
cross_file:
dependency: transitive
description:
name: cross_file
sha256: "7caf6a750a0c04effbb52a676dce9a4a592e10ad35c34d6d2d0e4811160d5670"
url: "https://pub.dev"
source: hosted
version: "0.3.4+2"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_plugin_android_lifecycle:
dependency: transitive
description:
name: flutter_plugin_android_lifecycle
sha256: "9ee02950848f61c4129af3d6ec84a1cfc0e47931abc746b03e7a3bc3e8ff6eda"
url: "https://pub.dev"
source: hosted
version: "2.0.22"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
web:
dependency: transitive
description:
name: web
sha256: d43c1d6b787bf0afad444700ae7f4db8827f701bc61c255ac8d328c6f4d52062
url: "https://pub.dev"
source: hosted
version: "1.0.0"
sdks:
dart: ">=3.5.1 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
1. Open the application
2. Confirm camera permissions
3. See improperly rotated preview
### Expected results
The camera preview should be properly rotated.
### Actual results
The camera preview is rotated 90º.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
void main() async {
runApp(const MainApp());
}
class MainApp extends StatefulWidget {
const MainApp({super.key});
@override
State<MainApp> createState() => _MainAppState();
}
class _MainAppState extends State<MainApp> {
final _runner = _Runner();
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: ValueListenableBuilder(
valueListenable: _runner,
builder: _build,
),
),
);
}
Widget _build(BuildContext context, _State value, Widget? child) {
switch (value) {
case _Loaded():
return Center(
child: CameraPreview(value.controller),
);
case _Simple.initializing:
return const Center(
child: CircularProgressIndicator(),
);
case _Simple.error:
return const Center(
child: Icon(Icons.error),
);
}
}
}
class _Runner with ChangeNotifier implements ValueListenable<_State> {
_Runner() {
_init();
}
void _init() async {
final available = await availableCameras();
if (available.isEmpty) {
_emit(_Simple.error);
} else {
final controller =
CameraController(available.first, ResolutionPreset.veryHigh);
try {
await controller.initialize();
_emit(_Loaded(
controller: controller,
available: available,
));
} catch (e) {
_emit(_Simple.error);
}
}
}
_State _value = _Simple.initializing;
@override
_State get value => _value;
void _emit(_State state) {
_value = state;
notifyListeners();
}
}
sealed class _State {}
enum _Simple implements _State {
initializing,
error,
}
class _Loaded implements _State {
final CameraController controller;
final List<CameraDescription> available;
const _Loaded({
required this.controller,
required this.available,
});
}
```
</details>
### Screenshots or Videos
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-SI)
• Flutter version 3.24.1 on channel stable at /Users/.../Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (8 days ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/.../Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.93.0-insider)
• VS Code at /Applications/Visual Studio Code - Insiders.app/Contents
• Flutter extension version 3.94.0
[✓] Connected device (4 available)
• Pixel (mobile) • FA76R0301797 • android-arm64 • Android 10 (API 29)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.122
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
**Additional:**
This issue reproduces on a Pixel 1 device. I wasn't able to test it on a more capable device.
The issue first occurs with `camera_android_camerax: 0.6.8+2`; if I override the dependency to `camera_android_camerax: 0.6.7+2`, everything works as expected. | e: device-specific,platform-android,p: camera,package,P1,team-android | medium | Critical |
2,491,621,055 | pytorch | Questions about CVE-2022-3171, CVE-2022-3509 and CVE-2022-3510 | ### 🐛 Describe the bug
Description
Summary
The version of protobuf pinned in `.github/requirements/pip-requirements-macOS.txt` is 3.20.2. This version of protobuf contains the vulnerabilities
CVE-2022-3171, CVE-2022-3509 and CVE-2022-3510, which may pose security and performance risks to the PyTorch project.
Details
CVE-2022-3171
Severity: Medium
Url: https://www.cve.org/CVERecord?id=CVE-2022-3171
Description: A parsing issue with binary data in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.
Impact: If the PyTorch project uses the affected version of Protobuf and processes maliciously crafted messages during data serialization/deserialization, it could lead to prolonged pauses during garbage collection, affecting performance and potentially making the service unavailable.
CVE-2022-3509
Severity: High
Url: https://www.cve.org/CVERecord?id=CVE-2022-3509
Description: A parsing issue similar to https://github.com/advisories/GHSA-h4h5-3hr4-j3g2, but with textformat in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.
Impact: If PyTorch processes Protobuf data in text format containing maliciously crafted messages, it may cause abnormal garbage collection behavior, affecting system stability and performance, especially in scenarios where large volumes of Protobuf data are handled.
CVE-2022-3510
Severity: High
Url: https://www.cve.org/CVERecord?id=CVE-2022-3510
Description: A parsing issue similar to https://github.com/advisories/GHSA-h4h5-3hr4-j3g2, but with Message-Type Extensions in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.
Impact: If PyTorch uses the affected Protobuf version and processes maliciously crafted messages with extension fields, it could lead to garbage collection issues, affecting system stability.
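As a triage aid, the affected ranges above can be checked mechanically. The sketch below is a hedged helper (not part of PyTorch's tooling; the handling of minor lines without a backported fix is my assumption) that compares a protobuf version string against the patched releases listed in the advisories (3.16.3, 3.19.6, 3.20.3, 3.21.7):

```python
FIXED_RELEASES = [(3, 16, 3), (3, 19, 6), (3, 20, 3), (3, 21, 7)]

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str) -> bool:
    """True if `installed` predates the patched release of its minor line."""
    v = parse(installed)
    for fixed in FIXED_RELEASES:
        if v[:2] == fixed[:2]:        # same 3.x minor line as a patched release
            return v < fixed
    # Minor lines without a backported fix: treat anything older than the
    # newest patched release (3.21.7) as affected, anything newer as fixed.
    return v < FIXED_RELEASES[-1]

print(is_vulnerable("3.20.2"))        # → True: the pinned version predates 3.20.3
```

By this check, bumping the pin to 3.20.3 or later would clear all three advisories.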
| module: onnx,module: protobuf,triaged,module: third_party,security | low | Critical |
2,491,623,659 | PowerToys | Global sort modifier correctly modifies order but NOT selection | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Set a 'Global sort order modifier' for Windows Search, leaving all others at their default of zero, so that Windows Search becomes the primary function of PowerToys Run.
### ✔️ Expected Behavior
Results from Windows search should be listed at the top of the list **and closest match should be 'auto-selected'**.
### ❌ Actual Behavior
Results from Windows search are correctly listed at the top of the list but **web search is selected even though it has a lower priority and is lower in the list.**

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,491,632,161 | PowerToys | Mouse Utilities: Find My Mouse: Shake Mouse to display a much larger size of cursor and then resize to normal. | ### Description of the new feature / enhancement
Mouse Utilities: Find My Mouse: Shake Mouse
When I shake the mouse, I would prefer it to display a much larger cursor and then resize back to normal automatically.
I don't like the default behavior of "Find My Mouse": currently I have to make an extra click to dismiss the dark-screen effect. I would prefer to need no extra action to find my mouse after shaking it.
thanks
### Scenario when this would be used?
find my mouse.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,491,678,124 | godot | `EditorDebuggerPlugin` does not receive messages or respond to requests. | ### Tested versions
- Reproduceable in Godot 4.3
### System information
Windows 10
### Issue description
Following the official [example](https://docs.godotengine.org/en/stable/classes/class_editordebuggerplugin.html#class-editordebuggerplugin-private-method-capture) of how to use `EditorDebuggerPlugin`, I stumbled across some problems:
- The message was only received after I resaved the debugger script.
- Or the message was never received at all, yet no warning about an unknown message was pushed.
### Steps to reproduce
1. Open the MRP.
2. Run the example scene
3. It should print("Pong"), but it doesn't.
I wasn't able to reproduce the bug of having to resave the file. Can provided full project file, if needed.
### Minimal reproduction project (MRP)
[editordebuggerbugmrp.zip](https://github.com/user-attachments/files/16781954/editordebuggerbugmrp.zip)
| bug,topic:editor | low | Critical |
2,491,686,168 | stable-diffusion-webui | [Bug]: Inpaint Sketch not working | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Whenever I try to use inpaint sketch, it doesn't work and instead prints this to my console:

In this example I'm using a picture taken from my phone, but this happens no matter what picture I use, whether it's from my phone, my PC or the internet.
In the logs I attempted it multiple times.
### Steps to reproduce the problem
1. Open WebUI
2. Go to img2img
3. Go to inpaint sketch
4. Insert an image and paint on it
5. Press generate
6. It doesn't work and prints the error message
It should normally generate the picture
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-08-28-10-19.json](https://github.com/user-attachments/files/16781939/sysinfo-2024-08-28-10-19.json)
### Console logs
```Shell
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --no-half-vae
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 2.0.1+cu118.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.6.0, num models: 10
CivitAI Browser+: Aria2 RPC started
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [059934ff58] from C:\Users\Vag\stable-diffusion-webui\models\Stable-diffusion\ponyRealism_v21VAE.safetensors
Creating model from config: C:\Users\Vag\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 28.6s (prepare environment: 5.6s, import torch: 4.6s, import gradio: 2.2s, setup paths: 2.2s, initialize shared: 0.3s, other imports: 1.6s, load scripts: 2.2s, create ui: 5.3s, gradio launch: 2.9s, app_started_callback: 1.8s).
Applying attention optimization: Doggettx... done.
Model loaded in 14.4s (load weights from disk: 0.6s, create model: 0.7s, apply weights to model: 12.4s, apply half(): 0.1s, calculate empty prompt: 0.4s).
Traceback (most recent call last):
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1429, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\components\image.py", line 270, in preprocess
assert isinstance(x, dict)
AssertionError
Traceback (most recent call last):
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\Vag\stable-diffusion-webui\modules\ui.py", line 560, in update_orig
has_exact_match = np.any(np.all(np.array(image) == np.array(state), axis=-1))
ValueError: operands could not be broadcast together with shapes (151,84,3) (910,512,3)
Traceback (most recent call last):
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\Vag\stable-diffusion-webui\modules\ui.py", line 560, in update_orig
has_exact_match = np.any(np.all(np.array(image) == np.array(state), axis=-1))
ValueError: operands could not be broadcast together with shapes (24,12,3) (910,512,3)
Traceback (most recent call last):
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1429, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\components\image.py", line 270, in preprocess
assert isinstance(x, dict)
AssertionError
Traceback (most recent call last):
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1429, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\components\image.py", line 270, in preprocess
assert isinstance(x, dict)
AssertionError
Traceback (most recent call last):
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1429, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "C:\Users\Vag\stable-diffusion-webui\venv\lib\site-packages\gradio\components\image.py", line 270, in preprocess
assert isinstance(x, dict)
AssertionError
```
### Additional information
_No response_ | bug-report | low | Critical |
2,491,693,698 | vscode | notebookDocument/didChange event's `notebook.version` contains previous notebook version (which means 0 is sent on the first edit) | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: OS X
Steps to Reproduce:
1. Open a notebook document, like extensions/vscode-api-tests/testWorkspace/test.ipynb
2. Inspect the LSP notebookDocument/didOpen event, which contains `notebook.version: 0` - this is correct and expected.
3. Make a change like adding a space to a cell.
4. Inspect the LSP notebookDocument/didChange event, which still contains `notebook.version: 0` - this is a protocol violation. The [spec](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#notebookDocument_synchronization) says:
```
/**
* The version number of this document (it will increase after each
* change, including undo/redo).
*/
version: [integer](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#integer);
```
Because of the "didChange", I believe the LSP message should come after the change, and have the up to date version (which would be 1 after the first change).
I dug in a bit to understand why this happens, and the version increment seems to happen here:
https://github.com/microsoft/vscode/blob/2a0e70d6c6226ef5021285b22cfbe07c929d64bc/src/vs/workbench/services/workingCopy/common/storedFileWorkingCopy.ts#L734
This is called from the listener installed here:
https://github.com/microsoft/vscode/blob/2a0e70d6c6226ef5021285b22cfbe07c929d64bc/src/vs/workbench/contrib/notebook/common/model/notebookCellTextModel.ts#L161
The LSP event is sent from a listener installed earlier, here: https://github.com/microsoft/vscode/blob/2a0e70d6c6226ef5021285b22cfbe07c929d64bc/src/vs/workbench/api/browser/mainThreadDocuments.ts#L96-L98
See how in line 97, the event has a separate version, which is indeed an already incremented version (in my case it is 2, not 1, which is a bit surprising). In extHostDocuments, that version is still accessible from the event, but then not sent on to _onDidChangeDocument in line 159ff: https://github.com/microsoft/vscode/blob/2a0e70d6c6226ef5021285b22cfbe07c929d64bc/src/vs/workbench/api/common/extHostDocuments.ts#L143-L171
I checked in the same location for a non-notebook document, and the `event.version` was equal to the `document.version` (it was also 2 for the first edit).
Do you agree this is a protocol violation for notebooks? If so, do you have an idea for how to best fix it?
| bug,notebook | low | Critical |
2,491,694,385 | opencv | fitEllipse returns a very far ellipse | ### System Information
OpenCV python version: 4.10.0
Operating System / Platform: Windows 11
Python version: 3.11.9
### Detailed description
When running `cv2.fitEllipse` on the points below, I get an ellipse that is very far away from the points and unrelated to them.
Fitting a subset of the points does not reproduce the problem.
<img src="https://github.com/user-attachments/assets/032c95f4-0124-405f-8362-f17db8500844" width=300/>
### Steps to reproduce
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
points = [
[1434, 308], [1434, 309], [1433, 310], [1427, 310], [1427, 312], [1426, 313], [1422, 313], [1422, 314],
[1421, 315], [1415, 315], [1415, 316], [1414, 317], [1408, 317], [1408, 319], [1407, 320], [1403, 320],
[1403, 321], [1402, 322], [1396, 322], [1396, 323], [1395, 324], [1389, 324], [1389, 326], [1388, 327],
[1382, 327], [1382, 328], [1381, 329], [1376, 329], [1376, 330], [1375, 331], [1369, 331], [1369, 333],
[1368, 334], [1362, 334], [1362, 335], [1361, 336], [1359, 336], [1359, 1016], [1365, 1016], [1366, 1017],
[1366, 1019], [1430, 1019], [1430, 1017], [1431, 1016], [1440, 1016], [1440, 308]
]
ellipse = cv2.fitEllipse(np.array(points))
points = np.array(points)
center, axes, angle = ellipse
ellipse_points = cv2.ellipse2Poly((int(center[0]), int(center[1])), (int(axes[0] // 2), int(axes[1] // 2)), int(angle), 0, 360, 1)
plt.figure(figsize=(8, 8))
plt.scatter(points[:, 0], points[:, 1], c='blue', label='Points')
ellipse_polygon = plt.Polygon(ellipse_points, fill=None, edgecolor='red', label='Fitted Ellipse')
plt.gca().add_patch(ellipse_polygon)
plt.gca().set_aspect('equal', adjustable='box')
plt.legend()
plt.show()
```
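As an independent cross-check (not part of OpenCV), the direct least-squares ellipse fit of Fitzgibbon et al., in the Halir–Flusser formulation, can be sketched in plain NumPy and run on the same points to see whether the degenerate result comes from the data or from `fitEllipse`. The function name and the center-only return value below are my own choices:

```python
import numpy as np

def fit_ellipse_direct(pts):
    """Direct least-squares ellipse fit (Halir-Flusser formulation of
    Fitzgibbon's method). Returns the fitted ellipse center (cx, cy)."""
    x = pts[:, 0].astype(float)
    y = pts[:, 1].astype(float)
    mx, my = x.mean(), y.mean()
    x, y = x - mx, y - my  # center the data for numerical stability
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic part
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)
    M = S1 + S2 @ T
    M = np.array([M[2] / 2, -M[1], M[0] / 2])      # apply inverse constraint matrix
    eigvec = np.real(np.linalg.eig(M)[1])
    # pick the eigenvector on the ellipse branch (4ac - b^2 > 0)
    cond = 4 * eigvec[0] * eigvec[2] - eigvec[1] ** 2
    a1 = eigvec[:, np.argmax(cond)]
    A, B, C, D, E, F = np.concatenate([a1, T @ a1])
    den = B * B - 4 * A * C
    # conic center, shifted back to the original frame
    return (2 * C * D - B * E) / den + mx, (2 * A * E - B * D) / den + my
```

Running this on the point list above and comparing the returned center with the one from `cv2.fitEllipse` makes the discrepancy easy to quantify.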
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug | low | Major |
2,491,695,345 | next.js | Middleware Unable to Access Updated Environment Variables Set Asynchronously in next.config.js | ### [Link to the code that reproduces this issue](https://codesandbox.io/p/devbox/nervous-rosalind-995cry?layout=%257B%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522rootPanelGroup%2522%253A%257B%2522direction%2522%253A%2522horizontal%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522id%2522%253A%2522ROOT_LAYOUT%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522cm0dp5eo000063b6omadk8h8z%2522%252C%2522sizes%2522%253A%255B70%252C30%255D%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522EDITOR%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522id%2522%253A%2522cm0dp5eo000023b6o1s7lqv4w%2522%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522SHELLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522id%2522%253A%2522cm0dp5eo000043b6oaoyfwx75%2522%257D%255D%252C%2522sizes%2522%253A%255B100%255D%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522DEVTOOLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522id%2522%253A%2522cm0dp5eo000053b6ockcxnmfq%2522%257D%255D%252C%2522sizes%2522%253A%255B1
00%255D%257D%255D%252C%2522sizes%2522%253A%255B50%252C50%255D%257D%252C%2522tabbedPanels%2522%253A%257B%2522cm0dp5eo000023b6o1s7lqv4w%2522%253A%257B%2522tabs%2522%253A%255B%257B%2522id%2522%253A%2522cm0dp5eo000013b6oema7ehj6%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522FILE%2522%252C%2522filepath%2522%253A%2522%252FREADME.md%2522%252C%2522state%2522%253A%2522IDLE%2522%257D%255D%252C%2522id%2522%253A%2522cm0dp5eo000023b6o1s7lqv4w%2522%252C%2522activeTabId%2522%253A%2522cm0dp5eo000013b6oema7ehj6%2522%257D%252C%2522cm0dp5eo000053b6ockcxnmfq%2522%253A%257B%2522id%2522%253A%2522cm0dp5eo000053b6ockcxnmfq%2522%252C%2522activeTabId%2522%253A%2522cm0dpfhnf002f3b6ny8a9p48o%2522%252C%2522tabs%2522%253A%255B%257B%2522type%2522%253A%2522DOCS%2522%252C%2522id%2522%253A%2522cm0dpfhnf002f3b6ny8a9p48o%2522%252C%2522mode%2522%253A%2522permanent%2522%257D%255D%257D%252C%2522cm0dp5eo000043b6oaoyfwx75%2522%253A%257B%2522tabs%2522%253A%255B%257B%2522id%2522%253A%2522cm0dp5eo000033b6odvt7emld%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522dev%2522%257D%255D%252C%2522id%2522%253A%2522cm0dp5eo000043b6oaoyfwx75%2522%252C%2522activeTabId%2522%253A%2522cm0dp5eo000033b6odvt7emld%2522%257D%257D%252C%2522showDevtools%2522%253Atrue%252C%2522showShells%2522%253Atrue%252C%2522showSidebar%2522%253Atrue%252C%2522sidebarPanelSize%2522%253A15%257D)
### To Reproduce
1. **Open the Sandbox:**
- Go to the provided CodeSandbox link containing the reproduction code.
2. **Start the Application:**
- Start the Next.js development server by running:
```bash
npm run dev
```
- This will launch the application in development mode.
3. **Observe Console Logs:**
   - After the server starts, the `updateEnvVars` function in `next.config.js` will run asynchronously, and you'll see its console logs in the terminal.
   - Wait for the server to finish starting up and loading the configurations.
4. **Make a Request to Trigger Middleware:**
   - Open your browser and navigate to any page of the application (e.g., `http://localhost:3000`).
   - This action will trigger the middleware function.
5. **Check the Terminal Output:**
   - Look at the terminal where your development server is running. You should see the output of the console logs from the middleware:
   ```bash
   process.env.ENV_WITH_INSTRUMENTATION: true
   process.env.ENV_WITHOUT_CONFIGURE_NEXT: true
   process.env.ENV_WITH_CONFIGURE_NEXT: undefined
   ```
6. **Compare Results:**
   - Notice that `process.env.ENV_WITH_CONFIGURE_NEXT` is `undefined` despite being set in `next.config.js` after an asynchronous operation. This shows the environment variable is not accessible in the middleware context.
### Current vs. Expected behavior
**Current Behavior:**
- After starting the application, the `updateEnvVars` function in `next.config.js` is executed asynchronously with a delay.
- When making a request to trigger the middleware, the console logs show:
- `process.env.ENV_WITH_INSTRUMENTATION` is `'true'`.
- `process.env.ENV_WITHOUT_CONFIGURE_NEXT` is `'true'`.
- `process.env.ENV_WITH_CONFIGURE_NEXT` is `undefined`.
- This indicates that while `process.env.ENV_WITH_INSTRUMENTATION` and `process.env.ENV_WITHOUT_CONFIGURE_NEXT` are correctly logged, the environment variable `process.env.ENV_WITH_CONFIGURE_NEXT`, which is set asynchronously in `next.config.js`, is not accessible in the middleware.
**Expected Behavior:**
- When the application runs and the middleware is triggered, all environment variables set or modified in `next.config.js`, including those set asynchronously, should be available.
- The expected console log output in the middleware should be:
- `process.env.ENV_WITH_INSTRUMENTATION` as `'true'`.
- `process.env.ENV_WITHOUT_CONFIGURE_NEXT` as `'true'`.
- `process.env.ENV_WITH_CONFIGURE_NEXT` as `'true'`, indicating it has been set after the asynchronous operation completes.
- The environment variable `process.env.ENV_WITH_CONFIGURE_NEXT` should not be `undefined`; it should reflect its updated value after being set asynchronously in `next.config.js`.
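For reference, a minimal sketch of the two relevant files as I understand the sandbox (the function and variable names mirror the reproduction; the delay value and the compile-time explanation in the comment are my assumptions):

```javascript
// next.config.js -- sets a variable only after an async step completes
async function updateEnvVars() {
  await new Promise((resolve) => setTimeout(resolve, 1000)); // e.g. fetching remote config
  process.env.ENV_WITH_CONFIGURE_NEXT = 'true';
  console.log('updateEnvVars done');
}

module.exports = async () => {
  await updateEnvVars();
  return {};
};
```

```javascript
// middleware.ts -- ENV_WITH_CONFIGURE_NEXT logs as undefined here, presumably
// because env references in the middleware bundle are resolved at compile time
import { NextResponse } from 'next/server';

export function middleware() {
  console.log('process.env.ENV_WITH_CONFIGURE_NEXT:', process.env.ENV_WITH_CONFIGURE_NEXT);
  return NextResponse.next();
}
```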
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:28:45 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6020
Binaries:
Node: 18.17.0
npm: 9.6.7
Yarn: 1.22.22
pnpm: 8.14.3
Relevant Packages:
next: 13.5.6
eslint-config-next: 13.4.12
react: 18.2.0
react-dom: 18.2.0
typescript: 5.1.6
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
_No response_ | bug,Middleware,Runtime | low | Major |
2,491,720,958 | three.js | Parse mesh url to load in threejs editor via search query/urlparam | ### Description
It would be useful for the threejs editor to support importing mesh files on load by parsing a search query or URL parameter. The simplest solution would be to load a single mesh file, but this could be extended to loading an arbitrary threejs project, or a collection of mesh files, via their URLs.
### Solution
The mesh urls could be parsed via a `meshes` array search query like https://threejs.org/editor?meshes=[{mesh_url}]. The threejs project could be parsed via a `threejs_project` urlparam. This would probably happen inside the [dev/editor/index.html](https://github.com/mrdoob/three.js/blob/dev/editor/index.html)
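A minimal sketch of the proposed parsing (the `meshes` parameter name follows the suggestion above; nothing here is existing editor API):

```javascript
// Hypothetical helper for editor/index.html: read mesh URLs from the
// `meshes` search query, accepting a JSON array or a comma-separated list.
function parseMeshUrls(href) {
  const raw = new URL(href).searchParams.get('meshes');
  if (!raw) return [];
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? parsed : [String(parsed)];
  } catch {
    return raw.split(',').filter(Boolean); // fall back to a plain URL list
  }
}
```

Each returned URL could then be handed to the editor's existing import path, mirroring how drag-and-dropped files are loaded.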
### Alternatives
Equivalent feature requested on [f3d web viewer](https://github.com/f3d-app/f3d/issues/1595).
3dviewer.net does it this way https://3dviewer.net/#model=${mesh_url}, but this does not work with every mesh url.
### Additional context
This would make it easier to use the threejs editor to inspect mesh files accessible on the web via a server endpoint. My interest is integrating a mesh viewer within STAC-browser (SpatioTemporal Asset Catalogs), which is made to explore and standardize geospatial assets, among which meshes are an important part - see PR https://github.com/radiantearth/stac-browser/pull/465 for context | Editor | low | Minor |
2,491,776,042 | pytorch | Whether tensor parallelism supports the overlap of communication calculations for gradient computation, and how to implement it | ### 🚀 The feature, motivation and pitch
I want to know how to overlap communication with computation when computing gradients for a linear layer that has been partitioned row-wise or column-wise. Thanks.
The relevant documentation is:
https://pytorch.org/docs/2.3/distributed.tensor.parallel.html
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Minor |
2,491,785,369 | deno | Deno panic after pipe output to unknown command | Version: Deno 1.45.5
Platform: Mac OS X 14.6.1
I accidentally typo'd the command I wanted to pipe the coverage output to. This caused Deno to panic.
```bash
$ deno task coverage
Task coverage deno coverage | dun run mod.ts
dun: command not found
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: macos aarch64
Version: 1.45.5
Args: ["/opt/homebrew/bin/deno", "coverage"]
thread 'main' panicked at library/std/src/io/stdio.rs:1118:9:
failed printing to stdout: Broken pipe (os error 32)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
| bug | low | Critical |
2,491,831,169 | godot | Switching IME language mid-input keeps non-committed text visible on Wayland | ### Tested versions
v4.3.1.rc.custom_build [ff9bc0422]
### System information
Godot v4.3.1.rc (ff9bc0422) - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Wed Aug 7 16:19:28 UTC 2024 - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 XT (RADV NAVI31) - AMD Ryzen 9 7900X 12-Core Processor (24 Threads)
### Issue description
On the Wayland version of the editor, on KDE with fcitx5 as IME, I can switch between French and Japanese. When typing Japanese, I have to commit text when converting kanji for instance; if I go back to French with non-committed text, this text will follow the caret (see video below).
https://github.com/user-attachments/assets/75878b5c-4884-4618-bd7a-d46ee6438f0a
On X11, non-committed text is deleted.
### Steps to reproduce
Write text in multiple languages, with at least one requiring committing text (like Japanese, probably Chinese as well, I don't know about other languages), and switch between languages with some uncommitted text.
Reminder: the issue occurs on KDE, Wayland editor with fcitx5 IME.
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:input,topic:gui | low | Minor |
2,491,841,194 | flutter | Line breaks are lost when selecting text by dragging handles in SelectionArea/SelectableRegion | ### Steps to reproduce
1. Create a SelectionArea or SelectableRegion component with the following content: text + line break + image + text.
2. Use the selection handles to select text from before the image to after the image.
3. Observe the selected text and notice that the line breaks are missing.
**Notably, selecting all content by clicking the 'Select All' button works correctly, and line breaks are not lost. However, when selecting text by dragging the handles, the line breaks are lost.**
### Expected results
The selected text should retain the line breaks to ensure accurate text selection and copying.
Run the example below and drag the handles up and down to select all content. The expected result should be as follows:
```
Select this icon
xxx
pure text pure text pure text
```
### Actual results
The selected text loses the line breaks, resulting in inaccurate text selection and copying.
This also severely affects business logic built on top of SelectionArea, such as determining whether the content is fully selected.
Run the example below and drag the handles up and down to select all content. The actual result is as follows:
```
Select this iconxxx
pure text pure text pure text
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() => runApp(SelectableRegionExampleApp());
class SelectableRegionExampleApp extends StatelessWidget {
const SelectableRegionExampleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('SelectableRegion Sample')),
body: Padding(
padding: const EdgeInsets.all(18.0),
child: SelectionArea(
child: Text.rich(TextSpan(children: [
const TextSpan(text:"Select this icon",style: TextStyle(fontSize: 30)),
const TextSpan(text:"\n",style: TextStyle(fontSize: 30)),
WidgetSpan(child: MySelectableAdapter(
child: Container(
color: Colors.red,
width: 60,
height: 130,
child: const Icon(Icons.key, size: 30),
))),
const TextSpan(text:"\n",style: TextStyle(fontSize: 30)),
const TextSpan(text: "pure text pure text pure text",style: TextStyle(fontSize: 30)),
])),
),
),
),
);
}
}
class MySelectableAdapter extends StatelessWidget {
const MySelectableAdapter({super.key, required this.child});
final Widget child;
@override
Widget build(BuildContext context) {
final SelectionRegistrar? registrar = SelectionContainer.maybeOf(context);
if (registrar == null) {
return child;
}
return MouseRegion(
cursor: SystemMouseCursors.text,
child: _SelectableAdapter(
registrar: registrar,
child: child,
),
);
}
}
class _SelectableAdapter extends SingleChildRenderObjectWidget {
const _SelectableAdapter({
required this.registrar,
required Widget child,
}) : super(child: child);
final SelectionRegistrar registrar;
@override
_RenderSelectableAdapter createRenderObject(BuildContext context) {
return _RenderSelectableAdapter(
DefaultSelectionStyle.of(context).selectionColor!,
registrar,
);
}
@override
void updateRenderObject(BuildContext context, _RenderSelectableAdapter renderObject) {
renderObject
..selectionColor = DefaultSelectionStyle.of(context).selectionColor!
..registrar = registrar;
}
}
class _RenderSelectableAdapter extends RenderProxyBox with Selectable, SelectionRegistrant {
_RenderSelectableAdapter(
Color selectionColor,
SelectionRegistrar registrar,
) : _selectionColor = selectionColor,
_geometry = ValueNotifier<SelectionGeometry>(_noSelection) {
this.registrar = registrar;
_geometry.addListener(markNeedsPaint);
}
static const SelectionGeometry _noSelection = SelectionGeometry(status: SelectionStatus.none, hasContent: true);
final ValueNotifier<SelectionGeometry> _geometry;
Color get selectionColor => _selectionColor;
late Color _selectionColor;
set selectionColor(Color value) {
if (_selectionColor == value) {
return;
}
_selectionColor = value;
markNeedsPaint();
}
// ValueListenable APIs
@override
void addListener(VoidCallback listener) => _geometry.addListener(listener);
@override
void removeListener(VoidCallback listener) => _geometry.removeListener(listener);
@override
SelectionGeometry get value => _geometry.value;
// Selectable APIs.
@override
List<Rect> get boundingBoxes => <Rect>[paintBounds];
// Adjust this value to enlarge or shrink the selection highlight.
static const double _padding = 10.0;
Rect _getSelectionHighlightRect() {
return Rect.fromLTWH(0 - _padding, 0 - _padding, size.width + _padding * 2, size.height + _padding * 2);
}
Offset? _start;
Offset? _end;
void _updateGeometry() {
if (_start == null || _end == null) {
_geometry.value = _noSelection;
return;
}
final Rect renderObjectRect = Rect.fromLTWH(0, 0, size.width, size.height);
final Rect selectionRect = Rect.fromPoints(_start!, _end!);
if (renderObjectRect.intersect(selectionRect).isEmpty) {
_geometry.value = _noSelection;
} else {
final Rect selectionRect = _getSelectionHighlightRect();
final SelectionPoint firstSelectionPoint = SelectionPoint(
localPosition: selectionRect.bottomLeft,
lineHeight: selectionRect.size.height,
handleType: TextSelectionHandleType.left,
);
final SelectionPoint secondSelectionPoint = SelectionPoint(
localPosition: selectionRect.bottomRight,
lineHeight: selectionRect.size.height,
handleType: TextSelectionHandleType.right,
);
final bool isReversed;
if (_start!.dy > _end!.dy) {
isReversed = true;
} else if (_start!.dy < _end!.dy) {
isReversed = false;
} else {
isReversed = _start!.dx > _end!.dx;
}
_geometry.value = SelectionGeometry(
status: SelectionStatus.uncollapsed,
hasContent: true,
startSelectionPoint: isReversed ? secondSelectionPoint : firstSelectionPoint,
endSelectionPoint: isReversed ? firstSelectionPoint : secondSelectionPoint,
selectionRects: <Rect>[selectionRect],
);
}
}
@override
SelectionResult dispatchSelectionEvent(SelectionEvent event) {
SelectionResult result = SelectionResult.none;
switch (event.type) {
case SelectionEventType.startEdgeUpdate:
case SelectionEventType.endEdgeUpdate:
final Rect renderObjectRect = Rect.fromLTWH(0, 0, size.width, size.height);
// Normalize the offset in case it is outside of the rect.
final Offset point = globalToLocal((event as SelectionEdgeUpdateEvent).globalPosition);
final Offset adjustedPoint = SelectionUtils.adjustDragOffset(renderObjectRect, point);
if (event.type == SelectionEventType.startEdgeUpdate) {
_start = adjustedPoint;
} else {
_end = adjustedPoint;
}
result = SelectionUtils.getResultBasedOnRect(renderObjectRect, point);
case SelectionEventType.clear:
_start = _end = null;
case SelectionEventType.selectAll:
case SelectionEventType.selectWord:
case SelectionEventType.selectParagraph:
_start = Offset.zero;
_end = Offset.infinite;
case SelectionEventType.granularlyExtendSelection:
result = SelectionResult.end;
final GranularlyExtendSelectionEvent extendSelectionEvent = event as GranularlyExtendSelectionEvent;
// Initialize the offset if there is no ongoing selection.
if (_start == null || _end == null) {
if (extendSelectionEvent.forward) {
_start = _end = Offset.zero;
} else {
_start = _end = Offset.infinite;
}
}
// Move the corresponding selection edge.
final Offset newOffset = extendSelectionEvent.forward ? Offset.infinite : Offset.zero;
if (extendSelectionEvent.isEnd) {
if (newOffset == _end) {
result = extendSelectionEvent.forward ? SelectionResult.next : SelectionResult.previous;
}
_end = newOffset;
} else {
if (newOffset == _start) {
result = extendSelectionEvent.forward ? SelectionResult.next : SelectionResult.previous;
}
_start = newOffset;
}
case SelectionEventType.directionallyExtendSelection:
result = SelectionResult.end;
final DirectionallyExtendSelectionEvent extendSelectionEvent = event as DirectionallyExtendSelectionEvent;
// Convert to local coordinates.
final double horizontalBaseLine = globalToLocal(Offset(event.dx, 0)).dx;
final Offset newOffset;
final bool forward;
switch (extendSelectionEvent.direction) {
case SelectionExtendDirection.backward:
case SelectionExtendDirection.previousLine:
forward = false;
// Initialize the offset if there is no ongoing selection.
if (_start == null || _end == null) {
_start = _end = Offset.infinite;
}
// Move the corresponding selection edge.
if (extendSelectionEvent.direction == SelectionExtendDirection.previousLine || horizontalBaseLine < 0) {
newOffset = Offset.zero;
} else {
newOffset = Offset.infinite;
}
case SelectionExtendDirection.nextLine:
case SelectionExtendDirection.forward:
forward = true;
// Initialize the offset if there is no ongoing selection.
if (_start == null || _end == null) {
_start = _end = Offset.zero;
}
// Move the corresponding selection edge.
if (extendSelectionEvent.direction == SelectionExtendDirection.nextLine ||
horizontalBaseLine > size.width) {
newOffset = Offset.infinite;
} else {
newOffset = Offset.zero;
}
}
if (extendSelectionEvent.isEnd) {
if (newOffset == _end) {
result = forward ? SelectionResult.next : SelectionResult.previous;
}
_end = newOffset;
} else {
if (newOffset == _start) {
result = forward ? SelectionResult.next : SelectionResult.previous;
}
_start = newOffset;
}
}
_updateGeometry();
return result;
}
// This method is called when users want to copy selected content in this
// widget into clipboard.
@override
SelectedContent? getSelectedContent() {
return value.hasSelection ? const SelectedContent(plainText: 'xxx') : null;
}
LayerLink? _startHandle;
LayerLink? _endHandle;
@override
void pushHandleLayers(LayerLink? startHandle, LayerLink? endHandle) {
if (_startHandle == startHandle && _endHandle == endHandle) {
return;
}
_startHandle = startHandle;
_endHandle = endHandle;
markNeedsPaint();
}
@override
void paint(PaintingContext context, Offset offset) {
super.paint(context, offset);
if (!_geometry.value.hasSelection) {
return;
}
// Draw the selection highlight.
final Paint selectionPaint = Paint()
..style = PaintingStyle.fill
..color = _selectionColor;
context.canvas.drawRect(_getSelectionHighlightRect().shift(offset), selectionPaint);
// Push the layer links if any.
if (_startHandle != null) {
context.pushLayer(
LeaderLayer(
link: _startHandle!,
offset: offset + value.startSelectionPoint!.localPosition,
),
(PaintingContext context, Offset offset) {},
Offset.zero,
);
}
if (_endHandle != null) {
context.pushLayer(
LeaderLayer(
link: _endHandle!,
offset: offset + value.endSelectionPoint!.localPosition,
),
(PaintingContext context, Offset offset) {},
Offset.zero,
);
}
}
@override
void dispose() {
_geometry.dispose();
super.dispose();
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="439" alt="image" src="https://github.com/user-attachments/assets/2e0ca851-7330-4917-937e-66f99a77db95">
https://github.com/user-attachments/assets/5ecc3f42-fa69-43b1-9865-1ea3eee529ed
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel stable, 3.24.1, on macOS 14.5 23F79 darwin-arm64, locale zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc3)
[!] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.1.2)
[✓] VS Code (version 1.92.0)
[✓] Connected device (5 available)
[✓] Network resources
```
There are no issues with the Flutter environment. Some warnings are due to a custom content repository address, which has been removed for privacy.
</details>
| framework,has reproducible steps,P2,f: selection,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.25 | low | Major |
2,491,882,286 | godot | Project corrupts when a project setting has a custom resource as its value | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (Intel Corporation; 27.20.100.9415) - Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz (8 Threads)
### Issue description
After executing the following code:
```gdscript
# plugin.gd
@tool
extends EditorPlugin
func _enter_tree() -> void:
ProjectSettings.set_setting("corrupt/test/resource", CustomResource.new())
```
```gdscript
# CustomResource.gd
extends Resource
class_name CustomResource
```
And then reloading the project, the following happens:
1. The project doesn't reopen after closing.
2. After opening the project manager manually, it says the project is missing from the file system. However, opening the project's location in the file explorer shows it is still there. Hovering over the warning symbol shows "This project uses features unsupported by the current build: Unknown version".

Opening the project's `project.godot` file and changing
`test/resource=Object(Resource,"resource_local_to_scene":false,"resource_name":"","script":Resource("res://CustomResource.gd"))` to `test/resource=Object(Resource,"resource_local_to_scene":false,"resource_name":"")` (removing the resource's script) fixes the problem and after reloading the project manager the project shows up normally.
### Steps to reproduce
1. Create a new project.
2. Create a new script that extends `Resource` and give it a `class_name`.
3. Create a plugin.
4. In the plugin's `_enter_tree()` function, set a custom project setting to an instance of the resource defined above.
5. Disable and re-enable the plugin to run the code.
6. Attempt to reload the project (and see that it does not reopen).
7. Open the project manager and see that the project is corrupted.
8. Open the project's `project.godot` file and remove the `script=Resource(...)` from the value of the new setting.
9. Close and reopen the project manager and see that the project is not corrupted.
### Minimal reproduction project (MRP)
[Corrupt.zip](https://github.com/user-attachments/files/16782905/Corrupt.zip)
| bug,topic:core | low | Minor |
2,491,914,867 | godot | script_editor_cache.cfg stores old version of script filename resulting in "Case mismatch opening requested file" console spam. | ### Tested versions
4.3
### System information
Windows 10, Vulkan forward +, Nvidia 3070
### Issue description
I renamed a script file outside of the editor that had an upper case letter to match the standard snake_case filenames and, after restarting the project, got a bunch of spam in the console, even though nothing referenced the original filename:
`drivers/windows/file_access_windows.cpp:181 - Case mismatch opening requested file 'res://addons/console/Console.gd', stored as 'res://addons/console/console.gd' in the filesystem. This file will not open when exported to other case-sensitive platforms.`

### Steps to reproduce
Not sure if this is exclusive to addons, but I have a simple developer console, and it's had a non-standard filename, so I decided to fix this. In my active project, it wasn't a problem, as I renamed it within Godot, but when I replaced the addon with the updated version in a test project, I started getting errors, even though nothing referenced the original "Console.gd" filename. I tried enabling/disabling the addon and restarting the project several times and ensured nothing was referencing it in the autoload/plugin/scripts/etc.
Before:
```gdscript
func _enter_tree():
print("Console plugin activated.")
add_autoload_singleton("Console", "res://addons/console/Console.gd")
```
After:
```gdscript
func _enter_tree():
print("Console plugin activated.")
add_autoload_singleton("Console", "res://addons/console/console.gd")
```
Deleting the ".godot" directory fixes the issue, and upon further inspection, it seems the project_metadata.cfg was the culprit:
`scripts=["res://addons/console/console.gd", "res://addons/console/plugin.cfg", "res://addons/console/console_plugin.gd", "res://addons/console/Console.gd", "res://main_scene.gd", "res://spinning_cube.gd"]`
The script_editor_cache.cfg and editor_layout.cfg might also cause the problem as they also contain references to the file.
Edit: It seems project_metadata.cfg does not cause it. The script_editor_cache.cfg definitely does, though, and editor_layout.cfg likely does as well. As a side note, it's interesting how many times the error message pops up; it suggests that Godot might be reading these files more often than it needs to, and reducing these reads could speed up project reload times.
### Minimal reproduction project (MRP)
[test-console_case_error.zip](https://github.com/user-attachments/files/16783136/test-console_case_error.zip)
Note: .godot directory is necessary to reproduce the issue. | bug,topic:editor | low | Critical |
2,491,927,770 | go | net/http: js-wasm in nodejs HTTP requests fail | ### Go version
go version go1.22.0 darwin/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
GOCACHE='/Users/sekulicd/Library/Caches/go-build'
GOENV='/Users/sekulicd/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/sekulicd/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/sekulicd/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/Cellar/go/1.22.0/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/Cellar/go/1.22.0/libexec/pkg/tool/darwin_amd64'
GOVCS=''
GOVERSION='go1.22.0'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/sekulicd/go/src/tst/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/q2/t3b78q_d1d9br0p8q1zqd8nc0000gn/T/go-build1746037209=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I want to make an HTTP request from Go WASM inside Node.js.
Here is a sample main.go:
```
//go:build js && wasm
// +build js,wasm
package main
import (
"io/ioutil"
"log"
"net/http"
)
func main() {
c := http.Client{
Transport: http.DefaultTransport,
}
resp, err := c.Get("https://httpbin.org/anything")
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatal(err)
}
log.Println(string(body))
}
```
When I try to execute the WebAssembly with Node.js using this command:
```
GOOS=js GOARCH=wasm go run -exec="$(go env GOROOT)/misc/wasm/go_js_wasm_exec" .
```
I get the error below:
```
2024/08/28 14:11:49 Get "https://httpbin.org/anything": dial tcp: lookup httpbin.org on 192.168.17.142:53: write udp 127.0.0.1:4->192.168.17.142:53: write: Connection reset by peer
```
My understanding from [this PR](https://github.com/golang/go/pull/25550) is that, with the js/wasm build tags, `http.DefaultTransport` uses a `RoundTripper` based on the browser Fetch API.
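One detail that may explain the failure (this is my reading of the Go source, not something the error message states): the js/wasm transport in `net/http` deliberately disables the Fetch path when it detects Node.js (the `jsFetchDisabled` check in `roundtrip_js.go` tests whether `process.argv0` starts with `node`), and then falls back to the dial-based transport, producing `dial tcp` errors like the one above. What the runtime reports can be inspected with:

```shell
# Inspect what the wasm binary would see: argv0 and whether a global fetch exists.
node -e 'console.log(process.argv0, typeof fetch)'
```

If `argv0` is `node`, the fallback path is taken even when a global `fetch` is available.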
### What did you see happen?
```
2024/08/28 14:11:49 Get "https://httpbin.org/anything": dial tcp: lookup httpbin.org on 192.168.17.142:53: write udp 127.0.0.1:4->192.168.17.142:53: write: Connection reset by peer
```
### What did you expect to see?
A response similar to the one below:
```
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"Accept-Encoding": "gzip, deflate, br, zstd",
"Accept-Language": "en-US,en;q=0.9,sr;q=0.8,hr;q=0.7,bs;q=0.6,sh;q=0.5",
"Host": "httpbin.org",
"Priority": "u=0, i",
"Sec-Ch-Ua": "\"Not)A;Brand\";v=\"99\", \"Google Chrome\";v=\"127\", \"Chromium\";v=\"127\"",
"Sec-Ch-Ua-Mobile": "?0",
"Sec-Ch-Ua-Platform": "\"macOS\"",
"Sec-Fetch-Dest": "document",
"Sec-Fetch-Mode": "navigate",
"Sec-Fetch-Site": "none",
"Sec-Fetch-User": "?1",
"Upgrade-Insecure-Requests": "1",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36",
"X-Amzn-Trace-Id": "Root=1-66cf14ab-0caaf43c6309fce12a8d7bc2"
},
"json": null,
"method": "GET",
"origin": "77.222.25.88",
"url": "https://httpbin.org/anything"
}
``` | NeedsInvestigation,arch-wasm,OS-JS | low | Critical |
2,491,951,323 | stable-diffusion-webui | [Feature Request]: Want to disable the Prompts from file or textbox script automatically concatenating from the top prompt box. | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
The script would not concatenate the top prompt box at either the start or the end; it would ignore it and rely only on the list of prompts input.
### Proposed workflow
A simple checkbox in settings to disable the behavior, or a 'none' option under 'Insert prompts at the'.
### Additional information
_No response_ | enhancement | low | Minor |
2,491,954,703 | godot | Softbody3D with custom mesh invisible depending on LOD bias | ### Tested versions
- 4.2.2 stable
- 4.3 stable
- 4.4 dev 1
- 4.4 dev 7
### System information
Godot v4.3.stable - Linux Mint 22 (Wilma) - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (amdgpu; 6.7.0) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description
I have a Softbody3D with a custom mesh (imported from Blender). It is only visible when changing the LOD bias to a high value. Lower values make it invisible or only temporarily visible. This only happens with a detailed custom mesh like in the video, not a default mesh.
With LOD bias = 1 - invisible
https://github.com/user-attachments/assets/fa44b59a-78fe-4b69-b915-b1566557be24
With LOD bias = 17 - temporarily visible
https://github.com/user-attachments/assets/0332945f-88d2-4abf-be84-e42da76cf3e8
With LOD bias = 128 (max) - visible without a problem
https://github.com/user-attachments/assets/72e1cbe2-8e39-4ecb-88ce-bdfe564aa71d
### Steps to reproduce
1. new SoftBody3D scene (setup as in the MRP, default settings + camera + light etc.)
2. set "Complex Custom Mesh.res" as mesh (it might work with other complex meshes as well)
3. play around with the LOD bias setting of the Softbody3D (1 was always invisible)
I have three Softbodys in the MRP:
Complex Custom Mesh Softbody - the one where the problem occurs
Simple Custom Mesh Softbody - also a custom mesh but only a simple plane where no problem occurs
Default Mesh Softbody - Engine default plane mesh where no problem occurs
[Complex Custom Mesh.res.zip](https://github.com/user-attachments/files/16783353/Complex.Custom.Mesh.res.zip)
### Minimal reproduction project (MRP)
[softbodyLodMrp.zip](https://github.com/user-attachments/files/16783312/softbodyLodMrp.zip)
| bug,topic:rendering,confirmed,topic:3d | low | Minor |
2,491,966,239 | transformers | Tensor size mismatch when trying to run RT-DETR on multiple gpus | ### System Info
- `transformers` version: 4.44.2
- Platform: Linux-5.4.0-174-generic-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Tesla V100-DGXS-16GB
### Who can help?
@amyeroberts @muellerz @SunMarc
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the example on [the official pytorch example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/object-detection/run_object_detection.py) and [here](https://huggingface.co/docs/transformers/en/tasks/object_detection) it seems that I get the stack trace below after following these steps:
1. Set the initial model class to RT-DETR assuming other parts of the example have been followed
```
IMAGE_SIZE = 1280
CHECKPOINT = "PekingU/rtdetr_r50vd_coco_o365"
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForObjectDetection.from_pretrained(
CHECKPOINT,
id2label=id2label,
label2id=label2id,
anchor_image_size=None,
ignore_mismatched_sizes=True
)
```
2. Set the batch size to 4 and set to 4 visible GPUs assuming other parts of the example have been followed
```
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
training_args = TrainingArguments(
output_dir=output_path,
num_train_epochs=20,
max_grad_norm=0.1,
learning_rate=5e-5,
warmup_steps=300,
per_device_train_batch_size=4,
dataloader_num_workers=2,
metric_for_best_model="eval_map",
greater_is_better=True,
load_best_model_at_end=True,
eval_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
remove_unused_columns=False,
eval_do_concat_batches=False,
)
```
3. Run training
```
trainer = Trainer(
model=model,
args=training_args,
train_dataset=pytorch_dataset_train,
eval_dataset=pytorch_dataset_valid,
tokenizer=processor,
data_collator=collate_fn,
compute_metrics=eval_compute_metrics_fn,
)
trainer.train()
```
Stack trace:
```
RuntimeError Traceback (most recent call last)
Cell In[13], line 11
      1 trainer = Trainer(
      2     model=model,
      3     args=training_args,
   (...)
      8     compute_metrics=eval_compute_metrics_fn,
      9 )
---> 11 trainer.train()

File ~/.cache/pypoetry/virtualenvs/ml-Mf12zaqr-py3.11/lib/python3.11/site-packages/transformers/trainer.py:1938, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1936     hf_hub_utils.enable_progress_bars()
   1937 else:
-> 1938     return inner_training_loop(
   1939         args=args,
   1940         resume_from_checkpoint=resume_from_checkpoint,
   1941         trial=trial,
   1942         ignore_keys_for_eval=ignore_keys_for_eval,
   1943     )

File ~/.cache/pypoetry/virtualenvs/ml-Mf12zaqr-py3.11/lib/python3.11/site-packages/transformers/trainer.py:2279, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   2276     self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
   2278 with self.accelerator.accumulate(model):
-> 2279     tr_loss_step = self.training_step(model, inputs)
File "/home/jb/.cache/pypoetry/virtualenvs/ml-Mf12zaqr-py3.11/lib/python3.11/site-packages/transformers/models/rt_detr/modeling_rt_detr.py", line 1850, in forward
reference_points_unact = torch.concat([denoising_bbox_unact, reference_points_unact], 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 4 for tensor number 1 in the list.
```
It seems that the first tensor is not split amongst the GPUs. Is there something else I'm missing here? I have not set up `accelerate`, for instance, but the linked example does not state that it is required; it only says that at least one GPU is needed and makes no mention of setting up `accelerate`.
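For what it's worth, a workaround that has helped with similar multi-GPU-only failures (this is a guess, not a confirmed fix for RT-DETR) is to launch the script under DistributedDataParallel instead of the implicit DataParallel that `Trainer` falls back to when several GPUs are visible; each process then sees a single GPU and the per-device batch is never re-split. `train_rtdetr.py` below stands in for the actual training script:

```shell
# Either torchrun:
torchrun --nproc_per_node=4 train_rtdetr.py
# or accelerate:
accelerate launch --multi_gpu --num_processes=4 train_rtdetr.py
```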
### Expected behavior
I would expect to see training taking place as it does when I use just one GPU:

| bug | low | Critical |
2,492,024,612 | deno | Deno.connect() with timeout and/or AbortSignal | I encountered a problem with `Deno.connect()` and graceful program exit (e.g. with a SIGINT signal handler). If the remote peer of a TCP connection does not answer the `SYN`, for example because it is filtered by a firewall, Deno awaits a connection timeout from the kernel which can be more than 2 minutes on default linux installations (based on `/proc/sys/net/ipv4/tcp_syn_retries`). Exiting with `Deno.exit()` is still possible, of course, but it skips any shutdown and abort handlers defined in the application.
Can `Deno.connect()` be adjusted to accept either an `AbortSignal` or a custom timeout value? | feat,public API,ext/net | low | Minor |
2,492,041,815 | go | x/tools/gopls: support renaming a test package to make it external | #### What did you do?
Using gopls to develop Go tests. Working on a test package `foo`, I need to make it an external package `foo_test`, as otherwise I start running into import cycles when adding some more imports.
#### What did you expect to see?
Gopls should support renaming the package name in a test file from `foo` to `foo_test` via its rename action. It would effectively do the same that I now do manually, which is:
1) rename the package name from `foo` to `foo_test`
2) add an import of the original `foo` package
3) rewrite all unqualified references to the original `foo` package, such as `Bar` and `Bar.Baz`, to `foo.Bar` and `foo.Bar.Baz`.
Note that this process could fail if one of the unqualified references was unexported like `bar`, since that can't be reached via an import. I think it's fine for gopls to leave the broken `foo.bar` in this case, and let the user deal with the problem with the best solution - which might be to export the name, or to use an `export_test.go` file, or use existing exported API instead. But gopls should still aim to do as much as possible here, even if the result still has a few build errors.
#### What did you see instead?
> protocol error: ServerError(0): cannot rename to _test package
#### Build info
```
golang.org/x/tools/gopls v0.0.0-20240823192219-0734f6249fc1
golang.org/x/tools/gopls@v0.0.0-20240823192219-0734f6249fc1 h1:adMOkkhp86ogfEmvcqIVEfqTJiF3ZRVQc2kJRHxuv/o=
github.com/BurntSushi/toml@v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/google/go-cmp@v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/typeparams@v0.0.0-20221212164502-fae10dda9338 h1:2O2DON6y3XMJiQRAS1UWU+54aec2uopH3x7MAiqGW6Y=
golang.org/x/mod@v0.20.0 h1:utOm6MM3R3dnawAiJgn0y+xvuYRsm1RKM/4giyfDgV0=
golang.org/x/sync@v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/telemetry@v0.0.0-20240712210958-268b4a8ec2d7 h1:nU8/tAV/21mkPrCjACUeSibjhynTovgRMXc32+Y1Aec=
golang.org/x/text@v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/tools@v0.24.1-0.20240823192219-0734f6249fc1 h1:KqzLXpTyjOPebElwYOBIjaAiN+nsMpzGF5V+sXq6xwQ=
golang.org/x/vuln@v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/tools@v0.4.7 h1:9MDAWxMoSnB6QoSqiVr7P5mtkT9pOc1kSxchzPCnqJs=
mvdan.cc/gofumpt@v0.6.0 h1:G3QvahNDmpD+Aek/bNOLrFR2XC6ZAdo62dZu65gmwGo=
mvdan.cc/xurls/v2@v2.5.0 h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: devel go1.24-36b45bca66 2024-08-26 22:29:43 +0000
``` | FeatureRequest,gopls,Tools | low | Critical |