| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,598,500,209 | flutter | [video_player_android] Use `handlesCropAndRotation` to detect the `SurfaceTexture` Impeller backend | When the `handlesCropAndRotation` support implemented in https://github.com/flutter/engine/pull/55434 makes it to stable, we should use it to replace the `Build.VERSION.SDK_INT < 29` logic implemented in https://github.com/flutter/packages/pull/7846 for detecting the `SurfaceTexture` Impeller backend. | platform-android,p: video_player,P2,a: plugins,team-android,triaged-android | low | Minor |
2,598,500,657 | ui | [feat]: Clarify need for `tailwindcss-animate` on the `new-york` style. | ### Feature description
The [changelog](https://ui.shadcn.com/docs/components-json#style) seems to suggest that `tailwindcss-animate` is not used in the `new-york` style:
>The `default` style is the one you are used to. It's the one we've been using since the beginning of this project. It uses `lucide-react` for icons and `tailwindcss-animate` for animations.
>
>The `new-york` style is a new style. It ships with smaller buttons, cards with shadows and a new set of icons from [Radix Icons](https://icons.radix-ui.com/).
However, running `npx shadcn@latest init` and selecting the `new-york` style results in the installation of `tailwindcss-animate`:
```bash
$ npx shadcn@latest init
✔ The path ~/shadcnui does not contain a package.json file. Would you like to start a new Next.js project? … yes
✔ What is your project named? … my-app
✔ Creating a new Next.js project.
✔ Which style would you like to use? › New York
✔ Which color would you like to use as the base color? › Neutral
✔ Would you like to use CSS variables for theming? yes
✔ Writing components.json.
✔ Checking registry.
✔ Updating tailwind.config.ts
✔ Updating app/globals.css
✔ Installing dependencies.
✔ Created 1 file:
- lib/utils.ts
Success! Project initialization completed.
You may now add components.
$ grep tailwindcss-animate my-app/package.json
"tailwindcss-animate": "^1.0.7"
```
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,598,504,872 | flutter | [Flutter GPU] Add RenderPass.setScissor. | The scissor can be used to efficiently clip draws to an integer rectangle on the framebuffer.
Allow the user to set the scissor on the `RenderPass` by passing in a `Scissor` class that just stores an integer rectangle.
There is no need to add functionality to "disable" the scissor. By default, the scissor is set to the render target texture size. So if the user wants to "disable" the scissor, they should simply call `setScissor` again and set it back to the render target texture size. | engine,P2,team-engine,triaged-engine,flutter-gpu | low | Minor |
2,598,512,807 | PowerToys | Ability to deal with '%' in `calculator` in `power toys run` | ### Description of the new feature / enhancement
In `power toys run` -> `calculator`, add the ability to deal with '%'.
Like 1000+5% = 1050, 1000*5% = 50, etc. Google calculator on android has this behaviour.
Additionally, entering `5% of 1000` should also give 50.
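The requested `%` semantics could be sketched as follows (an illustrative TypeScript sketch, not PowerToys Run code; the function names are made up):

```typescript
// '%' binds to the left operand: "a + b%" adds b percent of a,
// while "a * b%" (and "b% of a") takes b percent of a.
function addPercent(a: number, b: number): number {
  return a + a * (b / 100); // 1000 + 5% -> 1050
}

function percentOf(a: number, b: number): number {
  return a * (b / 100); // 1000 * 5% -> 50, also "5% of 1000"
}

console.log(addPercent(1000, 5)); // 1050
console.log(percentOf(1000, 5)); // 50
```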
### Scenario when this would be used?
It will be very useful
### Supporting information
_No response_ | Help Wanted,Product-PowerToys Run,Run-Plugin | low | Minor |
2,598,514,271 | flutter | [Flutter GPU] Add RenderPass.setViewport. | The viewport determines the region and depth range of the framebuffer that the RenderPass draws to.
Allow the user to set the viewport on the RenderPass by passing in a Viewport class containing a floating point region rectangle and a floating point DepthRange (defaulting to zNear=0 and zFar=1).
Similar to the [scissor](https://github.com/flutter/flutter/issues/157199), there is no need to add functionality to "disable" the viewport. By default, the viewport's region is set to the render target texture size and the viewport's depth range is set to zNear=0 and zFar=1. So if the user wants to "disable" the viewport, they should simply call `setViewport` again and set it back to the default values. | engine,P2,team-engine,triaged-engine,flutter-gpu | low | Minor |
2,598,517,762 | deno | storybook: deno fails to init storybook in react+vite project | Version: Deno 2.0.2
1. `deno run -A npm:create-vite` (react, ts without swc)
2. `deno i`
3. `deno run -A npm:storybook@latest init`
```
╭──────────────────────────────────────────────────────╮
│                                                      │
│   Adding Storybook version 8.3.6 to your project..   │
│                                                      │
╰──────────────────────────────────────────────────────╯
• Detecting project type. ✔
Installing dependencies...
npm error Cannot read properties of null (reading 'matches')
npm error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-10-18T22_52_29_828Z-debug-0.log
An error occurred while installing dependencies.
attention => Storybook now collects completely anonymous telemetry regarding usage.
This information is used to shape Storybook's roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://storybook.js.org/telemetry
``` | bug,node compat | low | Critical |
2,598,527,599 | TypeScript | Design Meeting Notes, 9/6/2024 |
# Consistency for Computed Properties in Classes
#59860
* Working on making computed properties work consistently between source files and declaration files.
* In object literals, anything goes.
* In classes, they do not behave the same.
* We error if a computed property doesn't resolve to a specific well-known symbol, unique symbol, or literal type.
* For computed methods.... we say nothing?
* Want to make classes consistent with object literals.
* This PR just fixes computed
* Downsides?
* It breaks code (obviously).
* Still need to run on top codebases.
* Does this just change classes, or also interfaces?
* Just classes.
* Do we really need to adopt the behavior where it produces index signatures?
* It's the only thing that has consistent behavior for isolated declarations?
* What if you have multiple computed properties with different types?
* Just use the union of the types in a single index signature, but not necessarily the best behavior we could do - would be a follow up.
* Can have multiple index signatures these days, so maybe we just produce multiple index signatures.
* What happens if you don't know the type?
* Just create a string index signature right now so that it's consistent with object types.
* Do you really want these index signatures?
* In the object literal case, there is no way to add an index signature to the object literal.
* But a class is sort of used as a hybrid between a JavaScript value declaration and a TypeScript type.
* We like consistency, but feel a little shaky about just introducing index signatures (the follow-up).
* Can do that behavior, but can still error possibly?
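A minimal sketch of the case under discussion (names are illustrative; the emitted declaration shown is the proposed behavior, not something TypeScript does today):

```typescript
// `key` has type `string`, not a literal or unique symbol type.
const key: string = "dyn" + "amic";

// In an object literal, any computed key is accepted today:
const obj: Record<string, number> = { [key]: 42 };
console.log(obj[key]); // 42

// In a class, the same computed property is currently an error; under the
// PR it would instead contribute an index signature to the declaration:
//
//   class C { [key] = 42; }   // proposed .d.ts: { [x: string]: number }
```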
# Node, Type, and Signature monomorphization
* #59190
* #59191
* #59192
* Moves non-monomorphic properties to a `data` grab-bag
* Causes v8 to optimize this correctly
* getters/setters in the prototype chain expose the `data.` properties for compatibility
* You can put arbitrarily many getters in the same chain for convenience since they're 1:1
* We still go through these in this PR and see wins; we could go to .data instead and likely get faster?
* Big big big speedup, slightly more memory usage
* Concern: *all* these properties show up in a debugger
* API consumers will see all these properties too; implications for `in`
* Can we provide API users with a more-specific facade?
* Work in progress, numbers on this approach soon
* Conclusion: Check on Ron's version and merge whatever looks best overall
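A sketch of the pattern being described (an assumed shape for illustration, not the actual compiler change):

```typescript
// Rarely-set fields move into a `data` grab-bag so every node keeps one
// fixed layout (helping V8 keep a single hidden class), while getters and
// setters on the prototype expose the `data.` properties under their old
// names for compatibility.
class AstNode {
  kind: number;
  data: { jsDoc?: string[]; flowNode?: object } = {};

  constructor(kind: number) {
    this.kind = kind;
  }

  get jsDoc(): string[] | undefined {
    return this.data.jsDoc;
  }
  set jsDoc(value: string[] | undefined) {
    this.data.jsDoc = value;
  }
}

const n = new AstNode(1);
n.jsDoc = ["/** doc */"];
console.log(n.jsDoc?.length); // 1
```

Because the accessors live on the prototype, `"jsDoc" in n` still holds, which is both the API-compatibility property and the debugger-visibility concern noted above.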
| Design Notes | low | Critical |
2,598,527,779 | TypeScript | Design Meeting Notes, 9/17/2024 |
# Extension Rewriting
https://github.com/microsoft/TypeScript/pull/59767
* We've discussed what we need to do with async imports.
* One possibility: you do the transform yourself.
* Another: we provide a runtime shim - possibly overridable.
* Also: we have the same situation for `require` calls in JS input files.
* It is currently not as common, but possible, for us to transform JavaScript input files.
* Those files can still refer to `.ts` files - do we want rewriting for those?
* Does the experimental mode in Node support `require`?
* *Pretty sure* it does?
* There's `require` with a plain string and an arbitrary expression.
* Why do we care about this difference?
* `require` can be passed into arbitrary places.
* Makes us inclined to say no shimming.
* We could error on `require` of an arbitrary expression in this mode.
* People can always alias `require` or `// @ts-ignore`.
* Extension rewriting was always weird - but if we're gonna do it, maybe we should do it in a maximalist way?
* Feels like if semantics diverges between runtime and emit, that's a problem.
* Could just say only `require` on static imports.
* Can you just create arbitrary `require`s?
* Yes, there's `createRequire` - could maybe monkey-patch this?
* Also `require.call(...)`
* Could add an import attribute to turn off rewriting for `import()`.
* Moved from "don't shim anything" to "shim everything".
* Between import attributes and `(require)(...)` these are reasonable opt-outs.
* Also open to just a global flag...
* Or a third state for the flag.
* `"none"`, `"staticImportsOnly"`, `"bestEffort"`?
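As a toy illustration of the static-specifier rewrite being discussed (not TypeScript's implementation): only plain string specifiers would be rewritten, while arbitrary expressions handed to `require`/`import()` are left alone.

```typescript
// Rewrite a .ts/.mts/.cts extension on a specifier to its emitted
// .js/.mjs/.cjs counterpart; anything else passes through untouched.
function rewriteSpecifier(spec: string): string {
  return spec.replace(/\.([mc]?)ts$/, ".$1js");
}

console.log(rewriteSpecifier("./util.ts")); // ./util.js
console.log(rewriteSpecifier("./util.mts")); // ./util.mjs
console.log(rewriteSpecifier("lodash")); // lodash (untouched)
```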
| Design Notes | low | Critical |
2,598,528,127 | TypeScript | Design Meeting Notes, 9/24/2024 | # Always Report `useDefineForClassFields`-Related Errors
https://github.com/microsoft/TypeScript/pull/59623
* When a dev writes a parameter property and a class field depends on the parameter property, we error under `useDefineForClassFields` (or if implicitly in a newer target).
* We want to issue the error regardless of the target.
* We like the idea but it could cause a lot of issues. No code fix today which makes it harder.
* We will just deprecate `useDefineForClassFields` in 6.0.
* Should clarify what the plan is for this (@RyanCavanaugh) - is the idea that we just flip the emit and errors over in 6.0 and let people set `useDefineForClassFields`+`ignoreDeprecations` until 6.5?
# Perform subtype reduction on call expression types if type argument arity fails
https://github.com/microsoft/TypeScript/pull/60036
* Let's imagine `Array<T> | ReadonlyArray<T>`.
* They both have the same `reduce` method overloads. Two of them are considered identical, but one generic signature which *is* the same isn't surfaced because we don't consider common methods as identical.
* So if you write `reduce<...>(...)`, that'll fail.
* The idea here is that if you write a type argument and it doesn't match any signature's arity, the checker will subtype-reduce the expression type and try again.
* Seems very special-casey.
* Why are you writing `Array<T> | ReadonlyArray<T>`? Why not just write `ReadonlyArray<T>`?
* One weirdness here is the fact that you need to provide type arguments for this to kick in. This is a little bit ad-hoc and makes it hard to reason about. As a user, you wouldn't necessarily expect type arguments to trigger any special behavior.
* `Array.prototype.reduce` is a pretty special case and
* Maybe there should be a special rule about `Array<T> | ReadonlyArray<T>` being reduced appropriately?
* We have other special-cases around arrays, why is this one bad?
* Those were not ideal, but they were motivated by common pain-points around operating on arrays. The issue this addresses doesn't have a lot of feedback other than the current report.
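The motivating case looks roughly like this (illustrative sketch):

```typescript
const arr: number[] | readonly number[] = [1, 2, 3];

// Without explicit type arguments, overload resolution on the union succeeds:
const sum = arr.reduce((acc: number, x: number) => acc + x, 0);
console.log(sum); // 6

// With an explicit type argument the call can fail today, because the
// union's matching generic `reduce` signatures are not treated as one:
//
//   arr.reduce<number>((acc, x) => acc + x, 0); // errors before the PR
```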
* For now we've decided not to pursue this change. | Design Notes | low | Critical |
2,598,528,479 | TypeScript | Design Meeting Notes, 10/1/2024 | # The `internal` Modifier
https://github.com/microsoft/TypeScript/issues/5228
* We've discussed an `internal` modifier on-and-off since 2015.
* High ๐ count, but haven't pursued it.
* Today you can use `/** @internal */` JSDoc comments to mark things as internal, and `--stripInternal`.
* Previously, `--stripInternal` was not even publicly documented.
* Also, have to do hacks to make `.d.ts` bundling work with this.
* Would be nice to have something more formal.
* Also, `internal` allows you to know whether something is getting overridden in a subtype.
* Idea:
* `internal` stays in `.d.ts` output.
* Project package or project boundary.
* Only operates in the same compilation.
* Not allowed to `import` an `export internal`.
* At the `tsconfig.json` level, internal access might be granted and asserted in a few ways:
* Maybe an option for specific packages
```json5
{
"compilerOptions": {
// Grant access to 'consuming-package':
"internalsVisibleTo": ["consuming-package"],
// Assert access to 'dependency-package':
"internalsVisibleFrom": ["dependency-package"],
}
}
```
* Possibly just something on each reference
```json5
{
"references": [
{ "path": "../package", "allowInternals": true }
]
}
```
* What are the rules?
* Can't mix (e.g. no `public` and `internal` in the same declaration).
* Why not JSDoc?
* Relatively expensive for this.
* What should tools do with `internal`?
* Should bundlers limit access to an `internal` member that's `export *`'d?
* We'd sort of hope "no".
* Does the declaration emitter have to worry about this now?
* What happens when you run `keyof` on a type with `internal`?
* Probably should do the same thing that `private` does.
* But does that mean that `keyof` means something different depending on where it's written?
* How does this work for overload resolution?
* Do overloads get skipped?
* Do they get related appropriately?
* The "someone subclassed this" issue is not something we've heard a ton of.
* Do most people use `.d.ts` bundlers that can do this post-processing?
* No, mostly not.
* Some of our build tooling would be simpler if we had this.
* But doesn't it mean that we're just making other stuff more complex?
* For example, `public` and `private` overloads can't be mixed today.
* Well maybe we do need to explore that restriction.
* Also, some of the point of `--stripInternal` is to not make things publicly known and avoid documenting functions to prevent usage.
* Back to this question again - what do you do when you have a mapped type?
* `internal` is inaccessible *depending* on where it's used.
* `private` and `protected` have this, but it's not witnessable in the majority of code (which is outside of the class).
* Coming back to `keyof` - is this now a location-sensitive operation?
* The idea that you will get different types based on instantiation location is not something we've done before.
* Really don't like that.
* What do you want?
* Scope visibility to peers.
* Keep existence undocumented.
* Do we really want an ergonomic way for people to import something marked `internal`?
* We don't have to do everything listed above.
* Can do this just on the module level.
* Maybe even just that and the property level.
* "Good enough" as `readonly`.
* Cautious but curious around a solution.
* Not convinced that we would have a solution that we'd entirely be happy with.
* Must preserve existing invariants around how type information appears.
* One opinion: sympathetic to a `package`-like modifier, but doesn't entirely solve the problem for our own usage for hiding entities in the resulting declaration file.
| Design Notes | low | Minor |
2,598,528,786 | TypeScript | Design Meeting Notes, 10/11/2024 |
# Conditional Type Narrowing
#56941
* Today, we check an expression like
```ts
arg === 1 ? "someString" : 37
```
by getting the type of both branches and unioning them - and we can't make a determination about how either branch corresponds to the condition.
* In the experimental PR, each branch is checked against the expected type.
* This is a breaking change, but it catches some desirable breaks.
* For example:
```
// Currently the expression's type is `any` and we check that against `number`,
// but checked individually, the `string` is correctly caught.
let x: number = arg === 1 ? "someString" : getAnAny() as any;
```
* Breaks?
* Most are true bugs
* A good chunk are moves in error positions (breaks `ts-expect-error`)
* Some unlikely to be real bugs.
* The motivation was conditional type narrowing - if you think of first principles, you could consider that the *conditional expression* creates a conditional type.
* Not too hard to do, but
* Need to be able to "crack into" each branch of the conditional type for the `return` statement case as well.
* You also might not get the "right" conditional type. For example
```
function f(x: T): T extends string ? string : number {
return x === undefined ? someString : someNumber;
}
```
* Do you end up synthesizing `T extends string ? ... : ...` or do you create `T extends undefined ? ... : ...`?
* Also, error messages won't be quite as good.
* Out of time
# Slim AST Experiments with Shared Structs in the Compiler
#59992
* Partially inspired by https://github.com/microsoft/TypeScript/pull/59190
* Uses flagged functionality for `SharedStruct`s via API (no syntax for shared structs yet).
* Idea: every Node has a single fixed layout.
* Also experimenting with a version that uses shared structs.
* Separately: a different experiment uses a "slim AST" which creates a facade over the real AST for API compat.
* Experimental parser that uses this.
* You get a speed-up similar to #59190, though it's at the expense of more memory.
* Much more (why?)
* If you use shared structs as the backing store for the slim AST, you lose some speed (we anticipate more optimizations with collaboration from V8), but possibly win back some memory and are able to run across multiple threads and you get a net perf win.
| Node Type | Allocation Source | Time (seconds) |
|-------------|---------------------------------|----------------|
| Current AST | Plain objects | 1.76 |
| slim-ast | plain objects | 1.562 |
| slim-ast | shared structs | 2.013 |
| slim-ast | shared structs across 8 workers | 1.082 |
| Design Notes | low | Critical |
2,598,529,136 | TypeScript | Design Meeting Notes, 10/18/2024 | # Conditional Type Narrowing
#56941
* Shouldn't be any perf impact for existing code because most code doesn't actually use a conditional type as the return type of a function, or it just casts.
* It is a semantic check on the return type (not syntax - you can use type aliases).
* If we want to support this kind of narrowing, we need to dive into the conditional expression and check each branch separately.
* Original PR did this for every function, caught bugs - but maybe we generalize there later on. For now, the current check is only when checking against a conditional return type.
* How does this work with contextual types?
```ts
function makeEquals<T extends string | number>(x: T):
T extends string
? (x: string) => boolean
: (x: number) => boolean {
// are these well-typed?
return typeof x === "string" ? (y => x === y) : (y => x === y);
}
```
* Currently it does work. See PR for details.
* Seems like no objections to current design - plan for 5.8. | Design Notes | low | Critical |
2,598,533,480 | node | V8 13.0 Deprecations | Another PR updates V8 to 13.0, so I compiled a list of deprecations from that version onward. While they don't need to be fixed immediately, it's important to know they exist.
- [ ] **`template<class T> struct v8::FastApiTypedArray`**
- **Reason**: When an API function expects a TypedArray as a parameter, the type in the signature should be `v8::Local<v8::Value>` instead of `FastApiTypedArray<>`.
- **Action**: Type-check the parameter and convert it to `v8::Local<v8::TypedArray>` to access the data. Handle the parameter the same way as for a regular API call.
- **Details**:
```plaintext
‘template<class T> struct v8::FastApiTypedArray’ is deprecated:
When an API function expects a TypedArray as a parameter,
the type in the signature should be `v8::Local<v8::Value>` instead of FastApiTypedArray<>.
The API function then has to type-check the parameter and convert it to a `v8::Local<v8::TypedArray` to access the data.
In essence, the parameter should be handled the same as for a regular API call.
```
- [x] **`v8::Local<v8::Value> v8::Object::GetPrototype()`** (https://github.com/nodejs/node/pull/55453)
- **Reason**: V8 will stop providing access to the hidden prototype (i.e., JSGlobalObject).
- **Action**: Use `GetPrototypeV2()` instead.
- **Reference**: [crbug.com/333672197](http://crbug.com/333672197)
- **Details**:
```plaintext
‘v8::Local<v8::Value> v8::Object::GetPrototype()’ is deprecated:
V8 will stop providing access to hidden prototype (i.e. JSGlobalObject).
Use GetPrototypeV2() instead. See http://crbug.com/333672197.
```
- [x] **`v8::Maybe<bool> v8::Object::SetPrototype(v8::Local<v8::Context>, v8::Local<v8::Value>)`** (https://github.com/nodejs/node/pull/55453)
- **Reason**: V8 will stop providing access to the hidden prototype (i.e., JSGlobalObject).
- **Action**: Use `SetPrototypeV2()` instead.
- **Reference**: [crbug.com/333672197](http://crbug.com/333672197)
- **Details**:
```plaintext
‘v8::Maybe<bool> v8::Object::SetPrototype(v8::Local<v8::Context>, v8::Local<v8::Value>)’ is deprecated:
V8 will stop providing access to hidden prototype (i.e. JSGlobalObject).
Use SetPrototypeV2() instead. See http://crbug.com/333672197.
```
- [ ] **`v8::SnapshotCreator::SnapshotCreator(v8::Isolate*, const intptr_t*, const v8::StartupData*, bool)`** (#55337)
- **Reason**: Deprecated in favor of a version that passes `CreateParams`.
- **Action**: Use the version that passes `CreateParams`.
- **Details**:
```plaintext
‘v8::SnapshotCreator::SnapshotCreator(v8::Isolate*, const intptr_t*, const v8::StartupData*, bool)’ is deprecated:
Use the version that passes CreateParams instead.
```
- [ ] **`v8::String::Value::Value(v8::Isolate*, v8::Local<v8::Value>)`** (#55458)
- **Reason**: Prefer alternatives for better performance.
- **Action**: Use `String::ValueView` if possible, or use `string->Write` to a buffer if not.
- **Details**:
```plaintext
‘v8::String::Value::Value(v8::Isolate*, v8::Local<v8::Value>)’ is deprecated:
Prefer using String::ValueView if you can, or string->Write to a buffer if you cannot.
```
- [ ] **`void v8::Isolate::AttachCppHeap(v8::CppHeap*)`**
- **Reason**: Set the heap on Isolate creation.
- **Action**: Use `CreateParams` instead.
- **Details**:
```plaintext
‘void v8::Isolate::AttachCppHeap(v8::CppHeap*)’ is deprecated:
Set the heap on Isolate creation using CreateParams instead.
```
- [ ] **`void v8::Isolate::DetachCppHeap()`**
- **Reason**: Set the heap on Isolate creation.
- **Action**: Use `CreateParams` instead.
- **Details**:
```plaintext
‘void v8::Isolate::DetachCppHeap()’ is deprecated:
Set the heap on Isolate creation using CreateParams instead.
```
| c++,v8 engine,deprecations | low | Critical |
2,598,555,364 | flutter | Dialog widget requires roles, ARIA properties, and keyboard support (Accessibility) | ### Use case
The Dialog widget requires roles, ARIA properties, and expected keyboard support to meet accessibility compliance requirements outlined in the Web Content Accessibility Guidelines (WCAG).
### Proposal
### WAI-ARIA Roles, States, and Properties
- The element that serves as the dialog container has a `role="dialog"`
- All elements required to operate the dialog are descendants of the element that has `role="dialog"`
- The dialog container element has `aria-modal="true"`
- Provide a label for the dialog by using `aria-label` OR `aria-labelledby="[IDREF]"` on the container element, with [IDREF] referencing the unique ID of the element that describes the purpose of the dialog
### Focus management
- Set focus to either the dialog's main heading or the first interactive element within the dialog. There must be a visible focus indicator.
- Maintain focus within the modal dialog until the dialog is closed.
- Upon closing the dialog, send focus back to the control that prompted it, or to the next logical place on the page. There must be a visible focus indicator.
### Keyboard navigation
- **Tab**:
- Moves focus to the next tabbable element inside the dialog.
- If focus is on the last tabbable element inside the dialog, moves focus to the first tabbable element inside the dialog.
- **Shift + Tab**:
- Moves focus to the previous tabbable element inside the dialog.
- If focus is on the first tabbable element inside the dialog, moves focus to the last tabbable element inside the dialog.
- **Esc**: Closes the dialog. | c: new feature,framework,f: material design,a: accessibility,platform-web,c: proposal,P1,customer: castaway,team-accessibility,triaged-accessibility | medium | Minor |
2,598,563,081 | flutter | AlertDialog widget requires roles, ARIA properties, and keyboard support (Accessibility) | ### Use case
The AlertDialog widget requires roles, ARIA properties, and expected keyboard support to meet accessibility compliance requirements outlined in the Web Content Accessibility Guidelines (WCAG).
### Proposal
### WAI-ARIA Roles, States, and Properties
- The element that serves as the alert dialog container has a `role="alertdialog"`
- All elements required to operate the alert dialog are descendants of the element that has `role="alertdialog"`
- The alert dialog container element has `aria-modal="true"`
- Provide a label for the alert dialog by using `aria-label` OR `aria-labelledby="[IDREF]"` on the container element, with [IDREF] referencing the unique ID of the element that describes the purpose of the alert dialog
- The element with `role="alertdialog"` has a value set for `aria-describedby` that refers to the element containing the alert message.
### Focus management
- Set focus to either the alert dialog's main heading or the first interactive element within the dialog. There must be a visible focus indicator.
- Maintain focus within the alert dialog until the dialog is closed.
- Upon closing the alert dialog, send focus back to the control that prompted it, or to the next logical place on the page. There must be a visible focus indicator.
### Keyboard navigation
- **Tab**:
- Moves focus to the next tabbable element inside the alert dialog.
- If focus is on the last tabbable element inside the alert dialog, moves focus to the first tabbable element inside the alert dialog.
- **Shift + Tab**:
- Moves focus to the previous tabbable element inside the alert dialog.
- If focus is on the first tabbable element inside the alert dialog, moves focus to the last tabbable element inside the alert dialog.
- **Esc**: Closes the dialog. | c: new feature,framework,f: material design,a: accessibility,platform-web,c: proposal,P1,customer: castaway,team-accessibility,triaged-accessibility | medium | Minor |
2,598,588,900 | flutter | [Impeller] libImpeller: Crash in FreeType during release of a typography context. | Reported when creating and releasing a single typographer context:
```
#1 0x00007ffff7ce42b1 in SkTypeface_FreeTypeStream::~SkTypeface_FreeTypeStream() () from libimpeller.so
#2 0x00007ffff7ce42e9 in SkTypeface_FreeTypeStream::~SkTypeface_FreeTypeStream() () from libimpeller.so
#3 0x00007ffff7d4b58d in skia::textlayout::TypefaceFontStyleSet::~TypefaceFontStyleSet() () from libimpeller.so
#4 0x00007ffff7d4b436 in skia::textlayout::TypefaceFontProvider::~TypefaceFontProvider() () from libimpeller.so
#5 0x00007ffff7e8b086 in txt::FontCollection::~FontCollection() ()
from libimpeller.so
#6 0x00007ffff7a8dbd9 in impeller::interop::TypographyContext::~TypographyContext() () from libimpeller.so
```
cc @lyceel
[This test should be creating a collecting a context](https://github.com/flutter/engine/blob/08170c44b0f8932cadf8c7e01f139781c571530c/impeller/toolkit/interop/impeller_unittests.cc#L248) so it is unclear what is going on here. Debugging. | P3,e: impeller,team-engine,triaged-engine,e: libimpeller | low | Critical |
2,598,608,037 | PowerToys | Mouse Without Borders - can't find other PC after update | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Auto-Update PT with Mouse Without Borders.
I only have 3 modules activated/enabled:
Image Resizer
PowerRename
Mouse Without borders
### โ๏ธ Expected Behavior
I was expecting that, when using the Auto-Update feature in PT, my settings would remain the same and I would not have to reconfigure anything once the update was complete - or that there would be a notification as to which settings are not saved and restored upon updating, so that I would know what changes need to be made for the feature to work.
### โ Actual Behavior
After updating both my Windows 11 PCs with the latest PowerToys update, the two computers can no longer see each other.
I tried rebooting both PCs, uninstalling and reinstalling PT from Github, rebooting, and still, neither computer can connect and share the mouse. I have tried the mouse/keyboard directly connected to both machines (using a USB only KVM) -- still cannot access the other computer's mouse or keyboard functions.
Initially, before the update, I was running PT on both computers as services (PowerToys.MWB.Service) and after the update via the auto-update function neither computer retained the Service status, so I fully closed PT and restarted it as an Admin. I then set MWB to run as a service, restarted both computers again (and one of the two computers is running as a very busy Plex Server, so it is really inconvenient to restart that computer) and they still can't find each other.
I do have the Original MWB UI also enabled as well as all the Keyboard shortcuts disabled (as I have always done).
I also have the IP Mappings designated for both machines on both computers' IP Mappings pages. On occasion I get a message about this setting being used, but it cannot locate the other device(s) -- I can't remember the exact popup message.
I will also say that I ran the "Add Firewall rule...." batch on both machines and this did not resolve the issue either.
Any help in solving this issue would be appreciated, and I am willing to try any and all suggestions and report back the results to help keep this issue from occurring again in the future.
** I have experienced this or a similar issue with the last (2) auto-updates, but I can't recall how they ended up being resolved, since I tried so many different things (much the same as reported above).
I have a very basic/boring computer setup, and since this has happened more than once, I feel confident that others are having the same or a similar issue.
Looking at Task Manager (for the first time since this issue) I do see two instances of PowerToys.MouseWithoutBorders running under Details and have created and attached memory dumps from each of them.
** Well, I tried to upload each individual DMP file as well as a 2nd attempt to upload them as one Zip, but they all failed with no explanation as to why (size, file format, etc).
### Other Software
Each of the two computers I use Mouse Without Borders on has MINIMAL installed apps, services, and drivers.
Both are up to date with all available updates, Windows and otherwise.
- One PC is running Windows 11 Pro Insider Preview Build 27729.rs
- The other is running Windows 11 Pro 24H2 with current feature Experience Pack.
The specific PT MWB version is the same as the main PT version: 0.85.1.0
[MWB Mini Log - Last8wide (Windows 11 (24H2).txt](https://github.com/user-attachments/files/17441821/MWB.Mini.Log.-.Last8wide.Windows.11.24H2.txt)
[MWB Mini Log - wDesktop (Windows 11 Preview).txt](https://github.com/user-attachments/files/17441820/MWB.Mini.Log.-.wDesktop.Windows.11.Preview.txt)
| Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Mouse Without Borders | low | Critical |
2,598,660,986 | vscode | Cannot style Chinese character into bold text. | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: No
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
- OS Version:
Steps to Reproduce:
1. The "**" bold syntax works well with English, but not with Chinese.


3. code to reproduce:
```text
*italic text*
**bold text**
**ไธญๆๅ ็ฒ**
```
| bug,editor-input-IME | low | Critical |
2,598,677,967 | yt-dlp | Regex for dropbox doesn't support https://dropbox.com/scl/fo URL format | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
World
### Provide a description that is worded well enough to be understood
The current Dropbox extractor supports Dropbox URLs in the format "https://www.dropbox.com/scl/fi/"; however, Dropbox also uses the same video player on URLs in the format "https://www.dropbox.com/scl/fo/".
The current regex for Dropbox does not match this URL type, so it falls back to the generic extractor, which fails.
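For illustration, a pattern along these lines would accept both path variants (this is a hypothetical sketch, not the extractor's actual `_VALID_URL`):

```python
import re

# Hypothetical sketch: accept both the /scl/fi/ and /scl/fo/ path variants.
# The real extractor's _VALID_URL is more involved; this only shows the idea.
VALID_URL = re.compile(r'https?://(?:www\.)?dropbox\.com/scl/f[io]/(?P<id>\w+)')

for url in (
    'https://www.dropbox.com/scl/fi/abc123/somefile.mp4',
    'https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?dl=0',
):
    m = VALID_URL.match(url)
    print(bool(m), m.group('id') if m else None)
```

The only change relative to an `fi`-only pattern is the `f[io]` character class, which is why a small regex tweak should be enough to route these URLs away from the generic extractor.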
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?e=2&preview=WJBK_12-09-2022_07.44.15.mp4&rlkey=6xx9whg0h81afi1z7cwlunvx8&dl=0']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds (linux_exe)
[debug] Python 3.11.10 (CPython x86_64 64bit) - Linux-6.11.2-arch1-1-x86_64-with (OpenSSL 3.1.7 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?e=2&preview=WJBK_12-09-2022_07.44.15.mp4&rlkey=6xx9whg0h81afi1z7cwlunvx8&dl=0
[generic] h?e=2&preview=WJBK_12-09-2022_07.44.15: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] h?e=2&preview=WJBK_12-09-2022_07.44.15: Extracting information
[debug] Looking for embeds
[redirect] Following redirect to https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?dl=0&e=2&noscript=1&preview=WJBK_12-09-2022_07.44.15.mp4&rlkey=6xx9whg0h81afi1z7cwlunvx8
[generic] Extracting URL: https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?dl=0&e=2&noscript=1&preview=WJBK_12-09-2022_07.44.15.mp4&rlkey=6xx9whg0h81afi1z7cwlunvx8
[generic] h?dl=0&e=2&noscript=1&preview=WJBK_12-09-2022_07.44.15: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] h?dl=0&e=2&noscript=1&preview=WJBK_12-09-2022_07.44.15: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?dl=0&e=2&noscript=1&preview=WJBK_12-09-2022_07.44.15.mp4&rlkey=6xx9whg0h81afi1z7cwlunvx8
Traceback (most recent call last):
File "yt_dlp/YoutubeDL.py", line 1626, in wrapper
File "yt_dlp/YoutubeDL.py", line 1761, in __extract_info
File "yt_dlp/extractor/common.py", line 741, in extract
File "yt_dlp/extractor/generic.py", line 2533, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.dropbox.com/scl/fo/z1o95jkbbnkmoc9gvirzw/h?dl=0&e=2&noscript=1&preview=WJBK_12-09-2022_07.44.15.mp4&rlkey=6xx9whg0h81afi1z7cwlunvx8
```
| site-bug,patch-available | low | Critical |
2,598,687,306 | flutter | Icon tree shake for 3rd party library not working in windows build | ### Steps to reproduce
Create and build template project
1. `flutter create flutter_project`
2. `cd flutter_project`
3. `flutter pub add material_symbols_icons`
4. Apply patch below
5. `flutter build windows --tree-shake-icons`
6. View build assets `build\windows\x64\runner\Release\data\flutter_assets\packages\material_symbols_icons\lib\fonts`
```patch
--- a/main.orig.dart
+++ b/main.dart
@@ -1,4 +1,5 @@
import 'package:flutter/material.dart';
+import 'package:material_symbols_icons/material_symbols_icons.dart';
void main() {
runApp(const MyApp());
@@ -112,6 +113,7 @@ class _MyHomePageState extends State<MyHomePage> {
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
+ const Icon(Symbols.abc),
],
),
),
```
### Expected results
As a comparison, font sizes in the Android build.
Build command `flutter build apk --tree-shake-icons`
Extract from `build\app\outputs\apk\release\app-release.apk!/assets/flutter_assets/packages/material_symbols_icons/lib/fonts`
```
-rw-rw-r-- 1 kr328 kr328 3.8K Jan 01 1981 MaterialSymbolsOutlined.ttf
-rw-rw-r-- 1 kr328 kr328 2.6K Jan 01 1981 MaterialSymbolsRounded.ttf
-rw-rw-r-- 1 kr328 kr328 2.3K Jan 01 1981 MaterialSymbolsSharp.ttf
```
### Actual results
Font sizes in the Windows build.
Files in `build\windows\x64\runner\Release\data\flutter_assets\packages\material_symbols_icons\lib\fonts`.
```
-rw-rw-r-- 1 kr328 kr328 8.6M Oct 19 01:57 MaterialSymbolsOutlined.ttf
-rw-rw-r-- 1 kr328 kr328 12.3M Oct 19 01:57 MaterialSymbolsRounded.ttf
-rw-rw-r-- 1 kr328 kr328 7.1M Oct 19 01:57 MaterialSymbolsSharp.ttf
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:material_symbols_icons/material_symbols_icons.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
const Icon(Symbols.abc),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
</details>
### Screenshots or Video
Just build issue, not related to UI.
### Logs
<details open><summary>Log Files</summary>
[build.apk.log](https://github.com/user-attachments/files/17442358/build.apk.log)
[build.windows.log](https://github.com/user-attachments/files/17442359/build.windows.log)
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
PS D:\Workspace\flutter_project> flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[โ] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4317], locale zh-CN)
[โ] Windows Version (Installed version of Windows is version 10 or higher)
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[โ] Chrome - develop for the web
[โ] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.5)
[โ] Android Studio (version 2024.2)
[โ] IntelliJ IDEA Ultimate Edition (version 2024.2)
[โ] VS Code (version 1.94.2)
[โ] Connected device (4 available)
[โ] Network resources
โข No issues found!
```
</details>
| tool,platform-windows,a: build,has reproducible steps,P2,team-tool,triaged-tool,found in release: 3.24,found in release: 3.27 | low | Critical |
2,598,689,476 | material-ui | [material-ui][ButtonBase] Allow disabling the ripple when right-clicked (or other buttons) | ### Summary
In most cases, a Button doesn't really do anything when it's clicked with the middle or right mouse button. However, the ripple currently still shows when a button is clicked with any mouse button. I don't think this is a good idea, since it can mislead the user.
My idea is to add a property, similar to 'disableTouchRipple', to all components that have a ripple, which lets the developer control which mouse buttons will display the ripple. For example,
```tsx
// More about mouse button is here: https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/button#value
<Button showTouchRippleOnButton={[0, 3, 4]} {...other}/>
```
The default value of this property should be [0], which means only the main button (usually the left button) triggers ripples. This property gives the developer control, because sometimes one may listen to, for example, right-click events. In such scenarios it provides full control to adapt to different use cases.
### Examples
You can also define an enum for MouseButtons, or a string union. In our projects we use an enum like this to represent mouse buttons, and you're welcome to use it:
```ts
/**
* The mouse buttons. This is a subset of the `MouseEvent.button` values.
*
* @remarks buttons may be configured differently to the standard "left to right" layout.
* A mouse configured for left-handed use may have the button actions reversed.
* Some pointing devices only have one button and use keyboard or other input mechanisms to indicate main,
* secondary, auxiliary, etc. Others may have many buttons mapped to different functions and button values.
*
* @link https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/button#value
*/
export enum MouseButton
{
/** Main button, usually the left button or the un-initialized state */
Main = 0,
/** Auxiliary button, usually the wheel button or the middle button (if present) */
Auxiliary = 1,
/** Secondary button, usually the right button */
Secondary = 2,
/** Fourth button, typically the Browser Back button */
Fourth = 3,
/** Fifth button, typically the Browser Forward button */
Fifth = 4
}
```
### Motivation
This idea is really important for our projects. If it can be added, we'll be very happy and appreciative.
And when we have some income, we'll consider buying you a cup of coffee by donating.
Thanks for your work!
**Search keywords**: ripple, right, button, buttonbase, mouse, click | new feature,waiting for ๐,component: button,component: ButtonBase,package: material-ui | low | Major |
2,598,721,607 | ollama | Windows ARM64 fails when loading model, error code 0xc000001d | ### What is the issue?
I installed the latest Ollama for Windows (ARM64 build) on my 2023 Windows Dev Kit, which has an 8-core ARM processor, a Snapdragon 8cx Gen 3. It's running Windows 11 Pro.
I can pull models, but when I go to run them, I get an error. It doesn't matter what model I run, I've tried several. Here's an example.
```
C:\Users\Mike Chambers>ollama pull gemma2:2b
pulling manifest
pulling 7462734796d6... 100% โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.6 GB
pulling e0a42594d802... 100% โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 358 B
pulling 097a36493f71... 100% โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 8.4 KB
pulling 2490e7468436... 100% โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 65 B
pulling e18ad7af7efb... 100% โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 487 B
verifying sha256 digest
writing manifest
success
C:\Users\Mike Chambers>ollama run gemma2:2b
Error: llama runner process has terminated: exit status 0xc000001d
C:\Users\Mike Chambers>
```
### OS
Windows
### GPU
_No response_
### CPU
Other
### Ollama version
0.3.13
[server.log](https://github.com/user-attachments/files/17448637/server.log) | bug,windows | low | Critical |
2,598,745,746 | terminal | overwritting text | ### Windows Terminal version
1.22.2912.0
### Windows build number
10.0.22631.0
### Other Software
OpenSSH_for_Windows_9.5p1
### Steps to reproduce
You can reproduce this bug by opening the preview version of Windows Terminal. I usually have a default profile for pwsh; once inside the terminal, open an ssh session to a remote Windows computer with sshd configured. With the preview version of Windows Terminal, anything you type inside the ssh session overwrites the text of the current folder prompt, i.e. `PS C:\Users\Somebody`. When switching back to the non-preview version of Windows Terminal and repeating the same steps, everything works normally.
### Expected Behavior
I expect the terminal to behave in a way that doesn't overwrite the current folder part of the prompt, i.e. `PS C:\Users\Somebody`. Presently, it makes it really hard to execute any commands, especially because pwsh also doesn't support clearing the console buffer (some error about an invalid cursor handle; I think this implies it's not using VT escape sequences, or the terminal is not translating these commands).
### Actual Behavior
When you ssh into a Windows machine and, for example, type the clear command, it displays on the screen as `clear\Users\Somebody>`. It doesn't matter what environment you ssh into; cmd, powershell, and pwsh all had the same overwriting problem. | Product-Conpty,Area-VT,Issue-Bug | low | Critical |
2,598,755,679 | tauri | [bug] IME window position appears out of input/textarea (cannot inline-input) on Tauri v2 apps in Linux | ### Describe the bug
When typing in an input or textarea with a Japanese IME enabled, the IME window appears outside the input/textarea.
This is different from #8264 or #5986
- This issue appears only in Tauri v2 apps (v1 apps do not have this issue)
- The IME window in this issue contains both the typing area and the candidate selection (in the previous issues, only the candidate selection appears outside the element)
Screenshot below. The IME window (with "ใใใ" typed) is placed at the far bottom-right of the app window, overlapping the month display on the screen.

Additional information:
- IMEs : Fcitx/Mozc and iBus/Mozc (Mozc is open-source version of Google Japanese IME)
- OS : tested on Linux Mint 21 (based on Ubuntu 22.04) and 22 (based on Ubuntu 24.04)
### Reproduction
npm create tauri-app@latest
(choosing npm/Vanilla/TypeScript)
cd (projectname)
npm i
npm run tauri dev
type in input element with Japanese IME enabled
### Expected behavior
typed letters are shown in input element
### Full `tauri info` output
```text
[โ] Environment
- OS: Linux Mint 21.3.0 x86_64 (X64)
โ webkit2gtk-4.1: 2.44.3
โ rsvg2: 2.52.5
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 20.18.0
- npm: 10.8.2
[-] Packages
- tauri ๐ฆ: 2.0.4
- tauri-build ๐ฆ: 2.0.1
- wry ๐ฆ: 0.46.2
- tao ๐ฆ: 0.30.3
- @tauri-apps/api ๎: 2.0.2
- @tauri-apps/cli ๎: 2.0.3
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.0.1
- @tauri-apps/plugin-shell ๎: 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: Linux,status: needs triage | low | Critical |
2,598,756,987 | godot | Android editor crash upon loading big project | ### Tested versions
Reproducible in 4.4 dev. #42224 (perhaps in 4.3 too, since restarting the game after playing it also crashes)
### System information
Samsung S23+, Vulkan Mobile
### Issue description
When I open a big project in the Godot master branch (after the last dev version, dev3), it crashes at the last stage of loading the project, when it loads the editor layout. This happens with big projects like the TPS demo, where it didn't happen at all in 4.3. (Might post a video later; running out of battery.)
Edit: video
https://github.com/user-attachments/assets/d9e3bcb0-b1da-4ab0-9323-17a58a8d9d8b
Weirdly enough, it also kind of happens in 4.4 dev 3, but not in the same way: there it can be reproduced by first playing the game, then minimizing the window, and then clicking the restart game button, which crashes too.
Edit: another one that showcases this crash on dev 3
https://github.com/user-attachments/assets/61f3dfd2-8c9e-4439-852b-e486c5d0a6e9
### Steps to reproduce
โข Open the project linked below in the MRP section
โข Load it (which crashes at the end on Android)
โข If you can't reproduce it, then maybe try playing the game and then restarting, which might trigger another crash
### Minimal reproduction project (MRP)
Not small, but it reproduces the issue.
https://drive.google.com/file/d/1wg1PLTlc6pZs-HzFzQEGAkN3Oo7sqAfB/view?usp=sharing | bug,platform:android,topic:editor,needs testing,crash,regression | low | Critical |
2,598,773,739 | ui | Old blocks are not displaying on the website. | ### Feature description
Old blocks are not displaying on the website. Please add pagination for the blocks
### Affected component/components
https://ui.shadcn.com/blocks
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | medium | Major |
2,598,831,970 | yt-dlp | Feature: OAuth client with cache process | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Hey everyone!
## The problem
Recently I noticed a PR about YouTube OAuth2 (#11001). This reminds me that I have an unfinished extractor for Sheeta with OAuth support (#9978). Writing OAuth code in a procedural programming way [^1] gives me a headache, as the whole OAuth flow is "stateful", but implementing an extractor-exclusive OAuth client class in each single .py file to hold state looks like overkill. So, it's a bit awkward.
Besides the authorization status, we also need to manually handle yt-dlp-specific things. The first thing that comes to my mind is the cache. In web browsers the "refresh token" is stored in localStorage, while in yt-dlp we can only use the cache to carry data across multiple launches. Don't forget that the token is only valid for a short time (e.g., 300 seconds for Sheeta), so we have to drop the cache in time.
## How to improve
Providing universal ways:
- The `InfoExtractor` class has an integrated OAuth client.
- The "_utils.py" file has a helper class for OAuth.
- Add a dependency library for OAuth and some helper functions to handle the caching.
- ...
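As a rough illustration of the cache concern above, a helper along these lines could keep the long-lived refresh token but drop the short-lived access token once its lifetime has passed (all names here are hypothetical, not existing yt-dlp API):

```python
import time

class OAuthTokenCache:
    """Hypothetical sketch of expiry-aware token caching.

    The dict below stands in for yt-dlp's on-disk cache; a real
    implementation would persist entries across launches.
    """

    def __init__(self):
        self._store = {}

    def save(self, key, value, expires_in=None):
        # expires_in=None means "no expiry" (e.g. a refresh token)
        deadline = time.time() + expires_in if expires_in is not None else None
        self._store[key] = (value, deadline)

    def load(self, key):
        value, deadline = self._store.get(key, (None, None))
        if deadline is not None and time.time() >= deadline:
            return None  # expired: caller must run the refresh-token grant
        return value
```

A caller would store the refresh token with no expiry, store the access token with e.g. `expires_in=300`, and re-run the refresh grant whenever `load('access_token')` returns None.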
## Other things
You can search `grant_type` [^2] in the repository to see how many extractors are using OAuth.
For now I don't know the difference between OAuth, OAuth1 and OAuth2, so I use the "OAuth" word in the title.
Please correct me if you find anything wrong.
[^1]: Procedural programming - Wikipedia | https://en.wikipedia.org/wiki/Procedural_programming
[^2]: Application Grant Types | https://auth0.com/docs/get-started/applications/application-grant-types
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement,triage,core:extractor | low | Critical |
2,598,879,242 | ollama | Running out of memory when allocating to second GPU | ### What is the issue?
No issues with any model that fits into a single 3090, but it seems to run out of memory when trying to distribute to the second 3090.
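For what it's worth, the failing 1312.00 MiB allocation reported in the log below is about what an f16 KV cache would need for the layers placed on device 0 at these settings (n_ctx = 8192, n_embd_k_gqa = n_embd_v_gqa = 1024). A back-of-the-envelope check, assuming 41 of the 80 layers land on device 0 (the split is an assumption):

```python
# Rough KV-cache size estimate from the values in the log below.
# Assumes f16 (2 bytes/element) K and V caches and that 41 of the
# 80 layers are placed on device 0 -- the split is an assumption.
n_ctx = 8192
n_embd_kv = 1024       # n_embd_k_gqa == n_embd_v_gqa in the log
bytes_per_elem = 2     # f16
layers_on_dev0 = 41

kv_bytes = layers_on_dev0 * 2 * n_ctx * n_embd_kv * bytes_per_elem  # K and V
print(kv_bytes / 2**20)  # MiB
```

This only confirms where the ~1312 MiB request comes from; whether it fits depends on what else is holding VRAM on device 0 alongside the ~17.5 GiB of weights.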
```
INFO [wmain] starting c++ runner | tid="33768" timestamp=1729324300
INFO [wmain] build info | build=3670 commit="aad7f071" tid="33768" timestamp=1729324300
INFO [wmain] system info | n_threads=20 n_threads_batch=20 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="33768" timestamp=1729324300 total_threads=28
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="27" port="56651" tid="33768" timestamp=1729324300
llama_model_loader: loaded meta data with 41 key-value pairs and 724 tensors from C:\Users\Joshua\.ollama\models\blobs\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.1 70B Instruct
llama_model_loader: - kv 3: general.organization str = Meta Llama
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Llama-3.1
llama_model_loader: - kv 6: general.size_label str = 70B
llama_model_loader: - kv 7: general.license str = llama3.1
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Llama 3.1 70B Instruct
llama_model_loader: - kv 10: general.base_model.0.organization str = Meta Llama
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv 12: general.tags arr[str,3] = ["nvidia", "llama3.1", "text-generati...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: general.datasets arr[str,1] = ["nvidia/HelpSteer2"]
llama_model_loader: - kv 15: llama.block_count u32 = 80
llama_model_loader: - kv 16: llama.context_length u32 = 131072
llama_model_loader: - kv 17: llama.embedding_length u32 = 8192
llama_model_loader: - kv 18: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 19: llama.attention.head_count u32 = 64
llama_model_loader: - kv 20: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 21: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 22: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 23: llama.attention.key_length u32 = 128
llama_model_loader: - kv 24: llama.attention.value_length u32 = 128
llama_model_loader: - kv 25: general.file_type u32 = 13
llama_model_loader: - kv 26: llama.vocab_size u32 = 128256
llama_model_loader: - kv 27: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 28: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 29: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 30: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 31: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 32: tokenizer.ggml.merges arr[str,280147] = ["ฤ ฤ ", "ฤ ฤ ฤ ฤ ", "ฤ ฤ ฤ ฤ ", "...
llama_model_loader: - kv 33: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 34: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 35: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 36: general.quantization_version u32 = 2
llama_model_loader: - kv 37: quantize.imatrix.file str = /models_out/Llama-3.1-Nemotron-70B-In...
llama_model_loader: - kv 38: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 39: quantize.imatrix.entries_count i32 = 560
llama_model_loader: - kv 40: quantize.imatrix.chunks_count i32 = 125
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q3_K: 321 tensors
llama_model_loader: - type q5_K: 240 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-10-19T15:51:40.427+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q3_K - Large
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 34.58 GiB (4.21 BPW)
llm_load_print_meta: general.name = Llama 3.1 70B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 1.02 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 430.55 MiB
llm_load_tensors: CUDA0 buffer size = 17507.01 MiB
llm_load_tensors: CUDA1 buffer size = 17474.99 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1312.00 MiB on device 0: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
llama_init_from_gpt_params: error: failed to create context with model 'C:\Users\Joshua\.ollama\models\blobs\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4'
ERROR [load_model] unable to load model | model="C:\\Users\\Joshua\\.ollama\\models\\blobs\\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4" tid="33768" timestamp=1729324312
time=2024-10-19T15:51:53.175+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2024-10-19T15:51:55.231+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2024-10-19T15:51:55.734+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error:failed to create context with model 'C:\\Users\\Joshua\\.ollama\\models\\blobs\\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4'"
[GIN] 2024/10/19 - 15:51:55 | 500 | 15.6142405s | 127.0.0.1 | POST "/api/generate"
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.13 | bug,memory | low | Critical |
2,598,958,998 | tensorflow | [Incorrect Result] `tf.math.reciprocal` returns `NaN` on `inf` input on Linux. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf 2.17.0
### Custom code
No
### OS platform and distribution
AlmaLinux 9.4
### Mobile device
_No response_
### Python version
3.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
`tf.math.reciprocal` returns `NaN` on Linux when the input is `inf` or `-inf`, has dtype=complex128, and contains two or more elements.
The output is expected to be 0, since:
1. This behavior is inconsistent with dtype=float64, where the output is 0.
2. When the input tensor contains only one value, the output is 0.
3. The same code snippet returns a different result on macOS, where the output is also 0.
### Standalone code to reproduce the issue
```shell
import numpy as np
import tensorflow as tf
input = tf.constant(np.inf, dtype=tf.float64)
out = tf.math.reciprocal(input)
# tf.Tensor(0.0, shape=(), dtype=float64)
print(out)
input = tf.constant(np.inf, dtype=tf.complex128)
out = tf.math.reciprocal(input)
# tf.Tensor(0j, shape=(), dtype=complex128)
print(out)
input = tf.constant([np.inf, np.inf], dtype=tf.complex128)
out = tf.math.reciprocal(input)
# On Linux: tf.Tensor([nan+nanj nan+nanj], shape=(2,), dtype=complex128)
# On macOS: tf.Tensor([0.+0.j 0.+0.j], shape=(2,), dtype=complex128)
print(out)
```
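For reference, plain Python's complex division already yields 0 for `1 / inf`; this is a hedged sanity check of the expected semantics, not TF's implementation:

```python
# Hedged sanity check using plain Python's complex division
# (Smith's algorithm): 1 / inf should be 0, not NaN.
z = complex(float("inf"), 0.0)
print(1 / z)  # -> 0j
```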
### Relevant log output
```shell
AttributeError: module 'ml_dtypes' has no attribute 'float8_e3m4'
tf.Tensor(0.0, shape=(), dtype=float64)
tf.Tensor(0j, shape=(), dtype=complex128)
tf.Tensor([nan+nanj nan+nanj], shape=(2,), dtype=complex128)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,598,963,730 | excalidraw | flowChart new node is not the same as the starting node | I believe the video explains the issue very well:
https://github.com/user-attachments/assets/bce54ceb-be16-474f-bf7b-4eef54d7c1e2
| bug,good first issue | low | Minor |
2,598,997,619 | pytorch | torch 2.5 slower than 2.4.1 ? | ### ๐ Describe the bug
I noticed that the latest stable release, 2.5.0, is slower than 2.4.1 when using torch.compile (reduce-overhead). I tried on different machines with an RTX 4090, and the behavior is pretty much the same:
Llama2-7B decoding speed (int4 tinygemm + static cache + compile):
torch 2.4.1: 171-175 tokens/sec
torch 2.5.0: 153-161 tokens/sec
Other kernels than tinygemm are slower too with 2.5.0, so the issue might be mainly coming from `torch.compile`.
Here's the script I used: https://gist.github.com/mobicham/aa1e77689d9cf866cbea2cb75a53a9e4
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7B13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3539.7939
CPU min MHz: 1500.0000
BogoMIPS: 4500.04
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] optree==0.11.0
[pip3] torch==2.4.1
[pip3] torchaudio==2.3.1
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.5.39 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.6.39 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchaudio 2.3.1 py310_cu121 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu | module: performance,triaged,oncall: pt2 | low | Critical |
2,598,998,064 | storybook | [Bug]: Storybook telemetry is not disabled | ### Describe the bug
I have:
```javascript
core: {
disableTelemetry: true,
},
```
and
```shell
STORYBOOK_DISABLE_TELEMETRY=1 storybook dev -p 6006
```
So why Storybook is trying to connect to:
apex-loadbalancer.netlify.app TCP 443 52.58.254.253 ec2-52-58-254-253.eu-central-1.compute.amazonaws.com
### Reproduction link
.
### Reproduction steps
_No response_
### System
storybook v8.3.5
### Additional context
_No response_ | bug,cli,telemetry | low | Critical |
2,599,000,120 | ant-design | Virtual table can't be scrolled horizontally after attempting to use browser swipe to go back/forward | ### Reproduction link
[https://ant.design/components/table#table-demo-virtual-list](https://ant.design/components/table#table-demo-virtual-list)
### Steps to reproduce
1. Hover over the virtual table and scroll right and left
2. Hover above the table and try to scroll to left, browser go back arrow should be visible.
3. Dismiss going back to the previous page
4. Try to scroll the virtual table
Attaching screen recording: https://vimeo.com/1021226504?share=copy#t=0
### What is expected?
Horizontal scroll should work as expected and scroll to both sides
### What is actually happening?
Horizontal scroll is stuck: it scrolls only 1px at a time, triggering the browser's back/forward page navigation
| Environment | Info |
| --- | --- |
| antd | 5.21.4 |
| React | react 18.2.0 |
| System | MacOS 14.2.1 (23C71) |
| Browser | Chrome 128.0.6613.84 (Official Build) (arm64) |
---
Seems like this issue is intermittent and doesn't always happen, but when it's stuck nothing helps.
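A possible CSS workaround (the selector below is a guess at the virtual table's scroll container, so treat it as an assumption) is to contain horizontal overscroll so the browser's swipe-to-navigate gesture cannot capture the scroll:

```css
/* hypothetical selector for the virtual table's scroll body */
.ant-table-tbody-virtual-holder {
  overscroll-behavior-x: contain;
}
```

`overscroll-behavior-x: contain` prevents scroll chaining from escaping the element, which is the standard way to stop horizontal scrolls from triggering history navigation.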
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Major |
2,599,107,112 | pytorch | torch::distributions module for C++ frontend API | ### ๐ The feature, motivation and pitch
It seems that `torch.distributions` is only supported in PyTorch, not in LibTorch, the C++ frontend. Some C++ applications need to sample data from distributions, so a C++ implementation of `torch.distributions` would be helpful.
For example,
https://discuss.pytorch.org/t/how-to-create-multivariate-normal-samples-in-c-given-mean-vector-and-covariance-matrix/123960
https://discuss.pytorch.org/t/how-to-use-torch-distributions-gumbel-in-c-api/172368
https://discuss.pytorch.org/t/torch-distributions-categorical/45747
https://discuss.pytorch.org/t/is-there-an-equivalent-of-torch-distributions-multivariatenormal-in-libtorch-the-c-api-for-pytorch/211784/2
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser | module: cpp,triaged | low | Minor |
2,599,120,931 | deno | `pack` command to build a clean tar archive without publishing | As per the title. I note that Deno does not have an equivalent for the [`npm pack`](https://docs.npmjs.com/cli/v10/commands/npm-pack) command where one can build a distribution-ready archive without actually publishing to any particular registry. | cli,suggestion,publish | low | Minor |
2,599,142,907 | PowerToys | Mouse Pointer Crosshairs: max GPU usage in games | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
1. Start a fullscreen/borderless fullscreen game (tested in Ravenswatch and New World).
2. Activate Mouse Pointer Crosshairs using activation shortcut.
3. Move cursor around and watch GPU usage.
(AMD Radeon RX 6900XT)
### โ๏ธ Expected Behavior
Negligible effect on the GPU in games, akin to something like YoloMouse.
### โ Actual Behavior
- Just activating the crosshair in a game puts strain on the GPU.
- If moving the cursor at all, the GPU jumps to 100% and stays there as long as the cursor moves.
### Other Software
Games tested:
Ravenswatch
New World | Issue-Bug,Needs-Triage | low | Minor |
2,599,148,452 | tensorflow | Multi-threaded execution throws an exception (using GPU). | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.19.0-dev20241018
### Custom code
Yes
### OS platform and distribution
Ubuntu 24.04
### Mobile device
_No response_
### Python version
3.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Multi-threaded execution throws an exception (using GPU).
### Standalone code to reproduce the issue
```shell
import concurrent
import numpy as np
import tensorflow as tf
print(tf.__version__)
executor = concurrent.futures.ThreadPoolExecutor()
def sum(x, axis):
return tf.reduce_sum(x, axis=axis)
futures = []
for i in range(1000):
futures.clear()
for _ in range(4):
x = tf.convert_to_tensor(np.random.rand(100, 100))
futures.append(executor.submit(sum, x, 1))
x = tf.convert_to_tensor(np.random.rand(100))
futures.append(executor.submit(sum, x, 0))
concurrent.futures.wait(
futures, return_when=concurrent.futures.ALL_COMPLETED
)
[future.result() for future in futures]
```
```

### Relevant log output
```shell
W tensorflow/core/framework/op_kernel.cc:1840] OP_REQUIRES failed at reduction_ops_common.h:147 : INVALID_ARGUMENT: Invalid reduction dimension (1 for input with 1 dimension(s)
I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: INVALID_ARGUMENT: Invalid reduction dimension (1 for input with 1 dimension(s)
```
| type:bug,comp:gpu | low | Critical |
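Until this is fixed, a possible workaround is to serialize the offending op behind a shared lock, trading away parallelism. A minimal sketch (the wrapper name and usage are illustrative, not part of TF's API):

```python
import threading
from functools import wraps

def serialized(fn, _lock=threading.Lock()):
    """Wrap fn so concurrent callers run one at a time.

    Note: the default lock is created once and shared by every wrapped
    function, which is what we want for a global serialization workaround.
    """
    @wraps(fn)
    def wrapper(*args, **kwargs):
        with _lock:
            return fn(*args, **kwargs)
    return wrapper

# usage sketch against the repro above (names illustrative):
# safe_sum = serialized(lambda x, axis: tf.reduce_sum(x, axis=axis))
```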
2,599,156,082 | next.js | Custom Server Not Bundled Correctly with Standalone Build, ESBuild, or Bun | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/determined-shaw-nz95jm
### To Reproduce
1. Create a new Next.js application with a custom server, or use yours if you already have one.
```bash
npx create-next-app --example custom-server custom-server-app
```
2. Try to bundle the server using the following approaches:
**Standalone Build:**
```js
// next.config.js
{
output: "standalone"
}
```
While the build succeeds, it does not support custom servers properly, as mentioned in [this existing discussion](https://github.com/vercel/next.js/discussions/34599). As a result, the standalone build ignores the custom server, preventing full bundling.
**ESBuild:**
```bash
npx esbuild server.js --bundle --platform=node --log-limit=0 --log-level=error
```
This approach fails with dependency handling issues. Please find the logs here:
**[esbuild.log](https://github.com/user-attachments/files/17445140/esbuild.log)**
**Bun:**
```bash
npx bun build server.ts --outdir out/ --target node
```
Similar to ESBuild, this fails with dependency handling issues. Here is the output log:
**[bun.log](https://github.com/user-attachments/files/17445142/bun.log)**
Here are some of the common errors you can find in the logs linked above:
- `Could not resolve: "react-dom/server.edge"`
- `Could not resolve: "critters"`
- `Could not resolve: "react-server-dom-turbopack/client.edge"`
- `Could not resolve: "react-server-dom-webpack/client.edge"`
- `Could not resolve: "react-server-dom-webpack/server.node"`
### Current vs. Expected behavior
### Current Behavior:
**Bundlers or Build Tools (e.g. ESBuild and Bun):** Fail due to dependency resolution issues. Likely related to how Next.js handles or manifests its internal dependencies or experimental forks of libraries.
**Standalone build:** Does not support bundling of custom servers and ignores them during the build process.
### Expected Behavior:
The build system should either:
- Enable external bundling tools like ESBuild and Bun to properly handle the custom server and all dependencies without failure.
- Support custom servers natively in the standalone build (i.e., correctly bundle custom servers along with the app).
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:17:33 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6031
Available memory (MB): 65536
Available CPU cores: 16
Binaries:
Node: 20.18.0
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.15 // Latest available version is detected (14.2.15).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 4.9.5
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure, Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
Bundling the entire application, including custom servers, is essential for production environments, especially when optimizing deployment artifacts such as Docker images. Proper bundling would allow removing the `node_modules` folder, greatly reducing the image size. | bug,Output (export/standalone) | low | Critical |
2,599,183,625 | PowerToys | Keyboard Manager settings are not applied in browser | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub, Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Change A to B in Keyboard Manager
### โ๏ธ Expected Behavior
When I type A in the browser, B appears
### โ Actual Behavior
The remapping works without any problem except when typing in a browser: in Chrome and Edge the key is not replaced.
Very rarely it does get replaced, so the behavior is unstable.
What I really want to set up is remapping ' to IME KANJI.
This behavior was confirmed on a Windows 11 computer.
I don't think this kind of issue occurs on Windows 10; is this a browser problem?
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,599,224,496 | vscode | Inlay hint flicker when adding/removing whitespace around it | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: VS Code 1.94.2 (Also reproduced on VSCodium 1.94.2)
- OS Version: Fedora Linux 40 (Workstation Edition), Gnome (46), Wayland
Steps to Reproduce:
1. Create a `test.ts` file
2. Enable typescript.inlayHints.functionLikeReturnTypes
3. Write:
```
function foo(){
return Date.now();
}
```
4. Change to: (adding a whitespace between `foo()` and `{`)
```
function foo() {
return Date.now();
}
```
5. During the change you will see a small flicker of the inlay hint: going from ` : number ` to ` : numbe...` to ` : number` again.
This flickering behavior isn't very elegant, and also happens with other languages, for example with Python (and the basedpyright extension, see https://github.com/DetachHead/basedpyright/issues/794) | bug,inlay-hints | low | Critical |
2,599,231,326 | Python | topological sort returns reversed list | ### Repository commit
03a42510b01c574292ca9c6525cbf0572ff5a2a5
### Python version (python --version)
Python 3.10.12
### Dependencies version (pip freeze)
affine==2.4.0
anyio==4.0.0
appdirs==1.4.4
apturl==0.5.2
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.0
async-lru==2.0.4
attrs==23.1.0
Babel==2.13.0
backcall==0.2.0
bcrypt==3.2.0
beautifulsoup4==4.12.2
beniget==0.4.1
bleach==6.1.0
blinker==1.4
Brlapi==0.8.3
Brotli==1.0.9
certifi==2020.6.20
cffi==1.16.0
chardet==4.0.0
charset-normalizer==3.3.0
click==8.0.3
click-plugins==1.1.1
cligj==0.7.2
colorama==0.4.4
comm==0.1.4
command-not-found==0.3
cryptography==3.4.8
cupshelpers==1.0
cycler==0.11.0
dbus-python==1.2.18
debugpy==1.8.0
decorator==5.1.1
defer==1.0.6
defusedxml==0.7.1
distro==1.7.0
distro-info==1.1+ubuntu0.2
docopt==0.6.2
duplicity==0.8.21
earthpy==0.9.4
exceptiongroup==1.1.3
executing==2.0.0
fasteners==0.14.1
fastjsonschema==2.18.1
fiona==1.9.5
fonttools==4.29.1
fqdn==1.5.1
fs==2.4.12
future==0.18.2
gast==0.5.2
GDAL==3.4.1
geopandas==0.14.1
html5lib==1.1
httplib2==0.20.2
idna==3.3
imageio==2.32.0
importlib-metadata==4.6.4
ipykernel==6.25.2
ipython==8.16.1
ipython-genutils==0.2.0
ipywidgets==8.1.1
isoduration==20.11.0
jedi==0.19.1
jeepney==0.7.1
Jinja2==3.1.2
joblib==1.3.2
json5==0.9.14
jsonpointer==2.4
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.7.0
jupyter-lsp==2.2.0
jupyter_client==8.3.1
jupyter_core==5.3.2
jupyter_server==2.7.3
jupyter_server_terminals==0.4.4
jupyterlab==4.0.6
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.9
jupyterlab_server==2.25.0
keyring==23.5.0
kiwisolver==1.3.2
language-selector==0.1
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy_loader==0.3
Levenshtein==0.23.0
lockfile==0.12.2
louis==3.20.0
lxml==4.8.0
lz4==3.1.3+dfsg
macaroonbakery==1.3.1
Mako==1.1.3
MarkupSafe==2.0.1
matplotlib==3.5.1
matplotlib-inline==0.1.6
mistune==3.0.2
monotonic==1.6
more-itertools==8.10.0
mpmath==0.0.0
mypy==0.942
mypy-extensions==0.4.3
nbclient==0.8.0
nbconvert==7.9.2
nbformat==5.9.2
nest-asyncio==1.5.8
netifaces==0.11.0
networkx==3.2.1
notebook==7.0.4
notebook_shim==0.2.3
num2words==0.5.13
numpy==1.26.1
oauthlib==3.2.0
olefile==0.46
overrides==7.4.0
OWSLib==0.25.0
packaging==23.2
pandas==2.0.3
pandocfilters==1.5.0
paramiko==2.9.3
parso==0.8.3
pbr==5.8.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.0.1
pip==22.0.2
platformdirs==3.11.0
plotly==5.4.0
ply==3.11
prometheus-client==0.17.1
prompt-toolkit==3.0.39
protobuf==3.12.4
psutil==5.9.5
psycopg2==2.9.2
ptyprocess==0.7.0
pure-eval==0.2.2
pycairo==1.20.1
pycparser==2.21
pycups==2.0.1
Pygments==2.16.1
PyGObject==3.42.1
PyJWT==2.3.0
pymacaroons==0.13.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyproj==3.3.0
PyQt5==5.15.6
PyQt5-sip==12.9.1
pyRFC3339==1.1
pyrsistent==0.18.1
python-apt==2.4.0+ubuntu4
python-dateutil==2.8.2
python-debian==0.1.43+ubuntu1.1
python-json-logger==2.0.7
python-Levenshtein==0.23.0
pythran==0.10.0
pytz==2022.1
pyxdg==0.27
PyYAML==5.4.1
pyzmq==25.1.1
QScintilla==2.11.6
qtconsole==5.4.4
QtPy==2.4.0
rapidfuzz==3.5.2
rasterio==1.3.9
referencing==0.30.2
reportlab==3.6.8
requests==2.31.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rioxarray==0.15.0
rpds-py==0.10.4
ruff==0.1.1
scikit-image==0.22.0
scikit-learn==1.3.2
scipy==1.11.3
seaborn==0.13.0
SecretStorage==3.3.1
Send2Trash==1.8.2
setuptools==59.6.0
shapely==2.0.2
six==1.16.0
sniffio==1.3.0
snuggs==1.4.7
soupsieve==2.5
stack-data==0.6.3
sympy==1.9
systemd-python==234
tenacity==6.3.1
terminado==0.17.1
thefuzz==0.20.0
threadpoolctl==3.2.0
tifffile==2023.9.26
tinycss2==1.2.1
tomli==2.0.1
tornado==6.3.3
traitlets==5.11.2
typed-ast==1.4.3
types-aiofiles==0.1
types-annoy==1.17
types-appdirs==1.4
types-atomicwrites==1.4
types-aws-xray-sdk==2.8
types-babel==2.9
types-backports-abc==0.5
types-backports.ssl-match-hostname==3.7
types-beautifulsoup4==4.10
types-bleach==4.1
types-boto==2.49
types-braintree==4.11
types-cachetools==4.2
types-caldav==0.8
types-certifi==2020.4
types-characteristic==14.3
types-chardet==4.0
types-click==7.1
types-click-spinner==0.1
types-colorama==0.4
types-commonmark==0.9
types-contextvars==0.1
types-croniter==1.0
types-cryptography==3.3
types-dataclasses==0.1
types-dateparser==1.0
types-DateTimeRange==0.1
types-decorator==0.1
types-Deprecated==1.2
types-docopt==0.6
types-docutils==0.17
types-editdistance==0.5
types-emoji==1.2
types-entrypoints==0.3
types-enum34==1.1
types-filelock==3.2
types-first==2.0
types-Flask==1.1
types-freezegun==1.1
types-frozendict==0.1
types-futures==3.3
types-html5lib==1.1
types-httplib2==0.19
types-humanfriendly==9.2
types-ipaddress==1.0
types-itsdangerous==1.1
types-JACK-Client==0.1
types-Jinja2==2.11
types-jmespath==0.10
types-jsonschema==3.2
types-Markdown==3.3
types-MarkupSafe==1.1
types-mock==4.0
types-mypy-extensions==0.4
types-mysqlclient==2.0
types-oauthlib==3.1
types-orjson==3.6
types-paramiko==2.7
types-Pillow==8.3
types-polib==1.1
types-prettytable==2.1
types-protobuf==3.17
types-psutil==5.8
types-psycopg2==2.9
types-pyaudio==0.2
types-pycurl==0.1
types-pyfarmhash==0.2
types-Pygments==2.9
types-PyMySQL==1.0
types-pyOpenSSL==20.0
types-pyRFC3339==0.1
types-pysftp==0.2
types-pytest-lazy-fixture==0.6
types-python-dateutil==2.8.19.14
types-python-gflags==3.1
types-python-nmap==0.6
types-python-slugify==5.0
types-pytz==2021.1
types-pyvmomi==7.0
types-PyYAML==5.4
types-redis==3.5
types-requests==2.25
types-retry==0.9
types-selenium==3.141
types-Send2Trash==1.8
types-setuptools==57.4
types-simplejson==3.17
types-singledispatch==3.7
types-six==1.16
types-slumber==0.7
types-stripe==2.59
types-tabulate==0.8
types-termcolor==1.1
types-toml==0.10
types-toposort==1.6
types-ttkthemes==3.2
types-typed-ast==1.4
types-tzlocal==0.1
types-ujson==0.1
types-vobject==0.9
types-waitress==0.1
types-Werkzeug==1.0
types-xxhash==2.0
typing_extensions==4.8.0
tzdata==2023.3
ubuntu-drivers-common==0.0.0
ubuntu-pro-client==8001
ufoLib2==0.13.1
ufw==0.36.1
unattended-upgrades==0.1
unicodedata2==14.0.0
uri-template==1.3.0
urllib3==1.26.5
usb-creator==0.3.7
wadllib==1.3.6
wcwidth==0.2.8
webcolors==1.13
webencodings==0.5.1
websocket-client==1.6.3
wheel==0.37.1
widgetsnbextension==4.0.9
word2num==0.1.2
xarray==2023.10.1
xdg==5
xkit==0.0.0
zipp==1.0.0
### Expected behavior
Sample graph

I'd expect the top node to come first and subsequent nodes to follow. A possible (but not unique) output could be ['a', 'b', 'c', 'd', 'e']. It is not 100% clear-cut, because there aren't any arrows on the diagram, and topological sort is usually defined on a directed acyclic graph.
### Actual behavior
The output is instead from the bottom up: ['d', 'c', 'e', 'b', 'a'] | bug | low | Critical |
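For reference, a minimal Kahn's-algorithm sketch that yields the expected top-down order; the edge list below is assumed from the diagram, since the arrows aren't visible:

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: returns nodes in top-down (dependency-first) order."""
    indegree = {node: 0 for node in graph}
    for node in graph:
        for neighbor in graph[node]:
            indegree[neighbor] += 1
    queue = deque(n for n in graph if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            indegree[neighbor] -= 1
            if indegree[neighbor] == 0:
                queue.append(neighbor)
    return order

# hypothetical edges read off the diagram: a -> b, a -> c, b -> d, c -> d, d -> e
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
print(topological_sort(graph))  # ['a', 'b', 'c', 'd', 'e']
```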
2,599,231,355 | godot | Editor popup position offset when popped up in area covered by Taskbar on Windows in fullscreen | ### Tested versions
Reproduced in:
- v4.2.2.stable.official [15073afe3]
- v4.3.stable.official [77dcf97d8]
- v4.4.dev3.official [f4af8201b]
### System information
Godot v4.4.dev3 - Windows 10.0.19045 - Multi-window, 3 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6094) - AMD Ryzen 9 5950X 16-Core Processor (32 threads)
### Issue description
In fullscreen editor mode, any popup that shows up avoids the taskbar area as if the taskbar were still there. This happens whether the taskbar is vertical or horizontal, though it's mainly an issue with a vertical taskbar, because the frequently used popup menus end up offset.
Edit: The problem goes away in single windowed mode.

### Steps to reproduce
1. Create new project.
2. Switch editor to fullscreen mode with shortcut Shift + F11.
3. Hover over tabs in the bottom dock with the taskbar at the bottom, or click the "Scene" menu in the menu bar with the taskbar at the left.
4. See how tooltips and popup menus are misplaced. A large taskbar will make it obvious.
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,599,289,953 | godot | 4.4.dev3 touchscreenbtn can not trigger pressed callback on android compatibility | ### Tested versions
4.4.dev3 compatibility
### System information
4.4.dev3 compatibility
### Issue description
In 4.4.dev3 (Compatibility renderer), a TouchScreenButton cannot trigger its pressed callback on Android.

### Steps to reproduce
In 4.4.dev3 (Compatibility renderer), a TouchScreenButton's pressed callback is not triggered on Android.
With 4.4.dev2 there is no problem.

### Minimal reproduction project (MRP)
4.4.dev3 touchscreenbtn can not trigger pressed callback on android | bug,platform:android,topic:input | low | Minor |
2,599,364,324 | stable-diffusion-webui | [Bug]: When using ControlNet Union with "Low VRAM" checked, I get the error: "Expected all tensors to be on the same device..." (Details in thread.) | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I want to do inpainting assisted by ControlNet Union (Depth and LineArt) and IPAdapter (FaceID), with the IPAdapter LoRA and another LoRA for style, alongside an SDXL model (the exact model doesn't really matter, because I get the error with both RealVis and FaeTastic).
I can't fit both the Union instances, IPAdapter, the LoRAs, and SDXL in VRAM at once, because I run out. So I usually check the "Low VRAM" option in ControlNet to lower memory usage and prevent overflowing into shared graphics memory (that doesn't crash, but slows things down tremendously; not relevant to the issue).
When I have "Low VRAM" checked, the program gives me an error message and breaks. It doesn't crash, it just breaks execution. The error message is: "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)"
(Details in the error logs.)
With "Low VRAM" unchecked, the program works.
If there's anything that I've missed, please, let me know. I'll try to provide as much info as I can.
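For context: the last frame of the traceback below is `F.linear(input, self.weight, self.bias)`, which raises exactly this RuntimeError whenever the input tensor and the layer's weights live on different devices (here cpu and cuda:0). A minimal CPU-only sketch of the invariant involved (not the webui code itself):

```python
import torch

# F.linear requires the input and the weight to sit on the same device; the
# reported error means one stayed on cpu while the other was moved to cuda:0.
lin = torch.nn.Linear(4, 4)   # parameters live on CPU by default
x = torch.randn(1, 4)         # input on CPU

assert x.device == lin.weight.device  # a mismatch here is what triggers the RuntimeError
y = lin(x)                            # succeeds because the devices match
print(tuple(y.shape))                 # (1, 4)
```

Presumably the "Low VRAM" offloading moves only one side (the `control_add_embedding` weights or their input) between devices, which is how the mismatch appears.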
### Steps to reproduce the problem
1. Do anything related to inpainting.
2. Load ControlNet Union with both Depth and Lineart.
3. Preprocess the ControlNet input image, and set it as the input image. Preprocessor doesn't seem relevant - I just preprocess, drag to input, and set preprocessor to "None."
4. Either add an IPAdapter instance and check the "Low VRAM" option, or just check the "Low VRAM" option on either / both ControlNet Union tabs. It breaks either way.
5. Click "Generate" with your favorite prompt.
### What should have happened?
It should just do the inpainting. Very preferably without breaking execution before it's done!
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
[sysinfo-2024-10-19-16-14.json](https://github.com/user-attachments/files/17446326/sysinfo-2024-10-19-16-14.json)
### Console logs
```Shell
2024-10-19 18:52:19,143 - ControlNet - INFO - unit_separate = False, style_align = False 30/30 [00:44<00:00, 1.43s/it]
2024-10-19 18:52:19,481 - ControlNet - INFO - Loading model: ip-adapter-faceid-plusv2_sdxl [187cb962]
2024-10-19 18:52:20,487 - ControlNet - INFO - Loaded state_dict from [C:\Diffusion\webui\models\ControlNet\ip-adapter-faceid-plusv2_sdxl.bin]
2024-10-19 18:52:28,154 - ControlNet - INFO - ControlNet model ip-adapter-faceid-plusv2_sdxl [187cb962](ControlModelType.IPAdapter) loaded.
2024-10-19 18:52:28,189 - ControlNet - INFO - Using preprocessor: ip-adapter_face_id_plus
2024-10-19 18:52:28,189 - ControlNet - INFO - preprocessor resolution = 1024
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
2024-10-19 18:52:33,648 - ControlNet - WARNING - Insightface: More than one face is detected in the image. Only the biggest one will be used.
2024-10-19 18:53:10,457 - ControlNet - WARNING - Unable to determine version for ControlNet model 'Controlnet--Union [15e6ad5d]'.
2024-10-19 18:53:10,814 - ControlNet - INFO - Loading model: Controlnet--Union [15e6ad5d]
2024-10-19 18:53:10,987 - ControlNet - INFO - Loaded state_dict from [C:\Diffusion\webui\models\ControlNet\Controlnet--Union.safetensors]
2024-10-19 18:53:11,005 - ControlNet - INFO - controlnet_sdxl_config
2024-10-19 18:53:44,786 - ControlNet - INFO - ControlNet model Controlnet--Union [15e6ad5d](ControlModelType.ControlNetUnion) loaded.
2024-10-19 18:53:45,202 - ControlNet - INFO - Using preprocessor: none
2024-10-19 18:53:45,202 - ControlNet - INFO - preprocessor resolution = 1024
2024-10-19 18:53:45,409 - ControlNet - INFO - ControlNetUnion control type: ControlNetUnionControlType.DEPTH
2024-10-19 18:53:45,410 - ControlNet - WARNING - Unable to determine version for ControlNet model 'Controlnet--Union [15e6ad5d]'.
2024-10-19 18:53:45,412 - ControlNet - INFO - Loading model from cache: Controlnet--Union [15e6ad5d]
2024-10-19 18:53:45,620 - ControlNet - INFO - Using preprocessor: none
2024-10-19 18:53:45,621 - ControlNet - INFO - preprocessor resolution = 1024
2024-10-19 18:53:45,647 - ControlNet - INFO - ControlNetUnion control type: ControlNetUnionControlType.HARD_EDGE
2024-10-19 18:53:48,044 - ControlNet - INFO - ControlNet Hooked - Time = 88.91933393478394
0%| | 0/46 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(7gaia0ozp7g9rh2)', <gradio.routes.Request object at 0x0000028736B25DE0>, 2, '3d render, chibi fairy with bun updo, almond-shaped slanted eyes, makeup, looking curiously. <lora:ip-adapter-faceid-plusv2_sdxl_lora:0.45>', '', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x28736C6B880>, 'mask': <PIL.Image.Image image mode=RGB size=1024x1024 at 0x28736C68910>}, None, None, None, None, 4, 0, 0, 4, 1, 7, 1.5, 1, 0.0, 1024, 1024, 1, 0, 1, 64, 0, '', '', '', [], False, [], '', 0, False, 1, 0.5, 4, 0, 0.5, 2, 46, 'Restart', 'Automatic', False, '', 0.8, 11628035, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000028736C6BDC0>, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='ip-adapter_face_id_plus', model='ip-adapter-faceid-plusv2_sdxl [187cb962]', weight=1.0, image={'image': array([[[87, 64, 56],
*** [87, 64, 56],
*** [87, 64, 56],
*** ...,
*** [81, 53, 41],
*** [81, 53, 41],
*** [81, 53, 41]],
***
*** [[87, 64, 56],
*** [87, 64, 56],
*** [87, 64, 56],
*** ...,
*** [81, 53, 41],
*** [81, 53, 41],
*** [81, 53, 41]],
***
*** [[88, 65, 57],
*** [88, 65, 57],
*** [88, 65, 57],
*** ...,
*** [81, 53, 41],
*** [81, 53, 41],
*** [81, 53, 41]],
***
*** ...,
***
*** [[44, 65, 92],
*** [44, 65, 92],
*** [43, 64, 91],
*** ...,
*** [34, 36, 48],
*** [33, 35, 47],
*** [33, 35, 47]],
***
*** [[42, 66, 92],
*** [42, 66, 92],
*** [41, 65, 91],
*** ...,
*** [32, 34, 46],
*** [31, 33, 45],
*** [31, 33, 45]],
***
*** [[42, 66, 92],
*** [42, 66, 92],
*** [41, 65, 91],
*** ...,
*** [30, 32, 44],
*** [29, 31, 43],
*** [29, 31, 43]]], dtype=uint8), 'mask': array([[[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** ...,
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=True, processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='none', model='Controlnet--Union [15e6ad5d]', weight=0.5, image={'image': array([[[ 14, 14, 14],
*** [ 14, 14, 14],
*** [ 14, 14, 14],
*** ...,
*** [ 17, 17, 17],
*** [ 16, 16, 16],
*** [ 15, 15, 15]],
***
*** [[ 14, 14, 14],
*** [ 14, 14, 14],
*** [ 14, 14, 14],
*** ...,
*** [ 17, 17, 17],
*** [ 16, 16, 16],
*** [ 16, 16, 16]],
***
*** [[ 14, 14, 14],
*** [ 14, 14, 14],
*** [ 14, 14, 14],
*** ...,
*** [ 17, 17, 17],
*** [ 17, 17, 17],
*** [ 17, 17, 17]],
***
*** ...,
***
*** [[254, 254, 254],
*** [254, 254, 254],
*** [254, 254, 254],
*** ...,
*** [176, 176, 176],
*** [176, 176, 176],
*** [176, 176, 176]],
***
*** [[254, 254, 254],
*** [254, 254, 254],
*** [254, 254, 254],
*** ...,
*** [176, 176, 176],
*** [176, 176, 176],
*** [176, 176, 176]],
***
*** [[254, 254, 254],
*** [254, 254, 254],
*** [253, 253, 253],
*** ...,
*** [176, 176, 176],
*** [176, 176, 176],
*** [176, 176, 176]]], dtype=uint8), 'mask': array([[[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** ...,
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.DEPTH: 'Depth'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='none', model='Controlnet--Union [15e6ad5d]', weight=0.5, image={'image': array([[[ 5, 5, 5],
*** [10, 10, 10],
*** [ 6, 6, 6],
*** ...,
*** [ 2, 2, 2],
*** [ 1, 1, 1],
*** [ 1, 1, 1]],
***
*** [[ 8, 8, 8],
*** [10, 10, 10],
*** [ 5, 5, 5],
*** ...,
*** [ 1, 1, 1],
*** [ 1, 1, 1],
*** [ 1, 1, 1]],
***
*** [[ 4, 4, 4],
*** [ 2, 2, 2],
*** [ 3, 3, 3],
*** ...,
*** [ 1, 1, 1],
*** [ 1, 1, 1],
*** [ 1, 1, 1]],
***
*** ...,
***
*** [[ 6, 6, 6],
*** [ 7, 7, 7],
*** [ 4, 4, 4],
*** ...,
*** [ 0, 0, 0],
*** [ 0, 0, 0],
*** [ 0, 0, 0]],
***
*** [[ 1, 1, 1],
*** [ 1, 1, 1],
*** [ 1, 1, 1],
*** ...,
*** [ 0, 0, 0],
*** [ 0, 0, 0],
*** [ 1, 1, 1]],
***
*** [[ 1, 1, 1],
*** [ 1, 1, 1],
*** [ 1, 1, 1],
*** ...,
*** [ 1, 1, 1],
*** [ 1, 1, 1],
*** [ 1, 1, 1]]], dtype=uint8), 'mask': array([[[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** ...,
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]],
***
*** [[0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0],
*** ...,
*** [0, 0, 0],
*** [0, 0, 0],
*** [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.HARD_EDGE: 'Hard Edge'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), False, '', 0.5, True, False, '', 'Lerp', False, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, False, False, 0, 'Range', 1, 'GPU', True, False, False, False, False, 0, 448, False, 448, False, False, 3, False, 3, True, 3, False, 'Horizontal', False, False, 'u2net', False, True, True, False, 0, 2.5, 'polylines_sharp', ['left-right', 'red-cyan-anaglyph'], 2, 0, False, 
'โฏboostโฏclipdepthโฏclipdepth_farโฏclipdepth_modeโฏclipdepth_nearโฏcompute_deviceโฏdo_output_depthโฏgen_normalmapโฏgen_rembgโฏgen_simple_meshโฏgen_stereoโฏmodel_typeโฏnet_heightโฏnet_size_matchโฏnet_widthโฏnormalmap_invertโฏnormalmap_post_blurโฏnormalmap_post_blur_kernelโฏnormalmap_pre_blurโฏnormalmap_pre_blur_kernelโฏnormalmap_sobelโฏnormalmap_sobel_kernelโฏoutput_depth_combineโฏoutput_depth_combine_axisโฏoutput_depth_invertโฏpre_depth_background_removalโฏrembg_modelโฏsave_background_removal_masksโฏsave_outputsโฏsimple_mesh_occludeโฏsimple_mesh_sphericalโฏstereo_balanceโฏstereo_divergenceโฏstereo_fill_algoโฏstereo_modesโฏstereo_offset_exponentโฏstereo_separationโฏtiling_mode') {}
Traceback (most recent call last):
File "C:\Diffusion\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Diffusion\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\Diffusion\webui\modules\img2img.py", line 232, in img2img
processed = process_images(p)
File "C:\Diffusion\webui\modules\processing.py", line 845, in process_images
res = process_images_inner(p)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\Diffusion\webui\modules\processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 470, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "C:\Diffusion\webui\modules\processing.py", line 1741, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "C:\Diffusion\webui\modules\sd_samplers_kdiffusion.py", line 172, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Diffusion\webui\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "C:\Diffusion\webui\modules\sd_samplers_kdiffusion.py", line 172, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Diffusion\webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Diffusion\webui\modules\sd_samplers_extra.py", line 71, in restart_sampler
x = heun_step(x, old_sigma, new_sigma)
File "C:\Diffusion\webui\modules\sd_samplers_extra.py", line 19, in heun_step
denoised = model(x, old_sigma * s_in, **extra_args)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\Diffusion\webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\Diffusion\webui\modules\sd_models_xl.py", line 44, in apply_model
return self.model(x, t, cond)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\modules\sd_hijack_utils.py", line 18, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Diffusion\webui\modules\sd_hijack_utils.py", line 32, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\Diffusion\webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
return self.diffusion_model(
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 905, in forward_webui
raise e
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 902, in forward_webui
return forward(*args, **kwargs)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 613, in forward
control = param.control_model(
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 32, in forward
return self.control_model(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 370, in forward
emb += self.control_add_embedding(control_type, emb.dtype, emb.device)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\controlnet_core\controlnet_union.py", line 64, in forward
return self.linear_2(torch.nn.functional.silu(self.linear_1(c_type)))
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Diffusion\webui\extensions-builtin\Lora\networks.py", line 503, in network_Linear_forward
return originals.Linear_forward(self, input)
File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
---
```
### Additional information
_No response_ | bug-report | low | Critical |
2,599,365,086 | deno | Unable to deno install --global from local directory when using package.json | Version: Deno 2.0.2
I use Deno to develop a CLI tool. My workflow has been to install it from my local checkout so I can test changes without releasing, publishing & updating via denoland/JSR.
I'm currently in the process of switching to a package.json to make it cross-platform.
However, I am now no longer able to install from a local directory using Deno.
## Problem
Running
```
deno install --global --name my-cli-tool --config deno.json main.ts
```
with `main.ts` importing a package specified in `package.json`
Results in:
```
error: Relative import path "@std/log" not prefixed with / or ./ or ../ and not in import map from "file:///src/main.ts"
at file:///src/main.ts:6:22
```
## Reproduction
`package.json`
```json
{
"dependencies": {
    "@std/log": "npm:@jsr/std__log@^0.224.9"
}
}
```
`main.ts`
```ts
import * as log from '@std/log';
log.info('Hello World');
```
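A possible workaround (untested sketch, duplicating the mapping from `package.json`) is to add an import map entry to `deno.json` so the bare specifier resolves during the global install:

```json
{
  "imports": {
    "@std/log": "npm:@jsr/std__log@^0.224.9"
  }
}
```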
| bug,needs info,install | low | Critical |
2,599,385,682 | node | closeAllConnections on http2.Http2Server and http2.Http2SecureServer | ### What is the problem this feature will solve?
I would like to cleanly shut down a running http2 server that has existing client connections. In http.Server, the closeAllConnections method was added for this purpose, but it doesn't seem to have been included in http2 yet.
### What is the feature you are proposing to solve the problem?
Add the methods `closeAllConnections()` and `closeIdleConnections()` to `http2.Http2Server` and `http2.Http2SecureServer`.
Alternatively, if an AbortSignal is passed into listen(), all connections could be shut down when the signal fires abort.
### What alternatives have you considered?
This code is a workaround:
```js
const socks = new Set();
const server = http2.createSecureServer(...);
server.on('connection', (s) => { // s is a stream.Duplex
  socks.add(s);
  s.once('close', () => socks.delete(s));
});
server.on('close', () => {
for (const s of socks) {
s.destroy(); // Fires close event above, which cleans up.
}
});
```
This is likely suboptimal due to my lack of intimate understanding of the http2 implementation. | feature request,http2 | low | Minor |
2,599,419,517 | deno | Specify deno version in `deno.json` | It would be great to have the option to specify the Deno version that a project is compatible with. This information may not necessarily need to be considered by the runtime, but it could be valuable to receive a warning when running `deno install`.
Similar to the [`engines` field of package.json](https://docs.npmjs.com/cli/v10/configuring-npm/package-json#engines)
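For reference, the npm field this proposal mirrors looks like:

```json
{
  "engines": {
    "node": ">=18.0.0"
  }
}
```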
For instance, issues like these https://github.com/denoland/deno/issues/26413 would be easier to address.
```jsonc
// deno.json
{
// ...
"engine": [
"deno@2.0.0" // to raise a warning on v2.0.2 and 1.46.3
]
// ...
}
``` | suggestion,config | low | Major |
2,599,433,007 | godot | Crash while importing SVGs with image | ### Tested versions
- Not reproducible in v4.2.1.stable.official [b09f793f5]: The project correctly imports the SVG file.
- Reproducible in v4.3.stable.official [77dcf97d8]: Godot crashes while trying to import the SVG file.
- Reproducible in v4.4.dev3.official [f4af8201b] (the latest version): Godot crashes while trying to import the SVG file.
### System information
Windows 11, 32GB RAM 3200 MHz operating on 2666 MT/s, RX 580 8GB, Ryzen 5 1600
### Issue description
Godot crashes while trying to import a SVG file that contains an image.
I think this is possibly because SVGs are expected to be small files, but the embedded image greatly increases the amount of memory needed to import it.
In the MRP I included two versions of the SVG:
- `hand_with_reference_image.svg`
- `hand_without_reference_image.svg`
They are basically the same but one does not have the reference image.
Observe that the one with the embedded image is 110 kB larger than the one without.
### Steps to reproduce
- Open MRP with Godot v4.3.stable.official [77dcf97d8] or v4.4.dev3.official [f4af8201b]
- Observe that Godot crashes
- Open MRP with v4.2.1.stable.official [b09f793f5]
- Observe that Godot successfully imports the SVG, ignoring the image, and that the `.godot` folder is updated with the SVG import data
- Open MRP again with Godot v4.3.stable.official [77dcf97d8] or v4.4.dev3.official [f4af8201b]
- Observe that Godot successfully opens the project and the SVGs because the `.godot` folder already exists
- If you delete the `.godot` folder the problem comes back
### Minimal reproduction project (MRP)
[svg-crash-example.zip](https://github.com/user-attachments/files/17446730/svg-crash-example.zip)
| bug,topic:import,crash,regression | low | Critical |
2,599,449,054 | deno | fmt - Produces invalid Svelte code (or panics) when there's a switch case in Svelte's HTML part | `deno fmt --unstable-component` produces invalid Svelte code (or panics) when there's a switch case in Svelte's HTML part
<details>
<summary>Reproduction # 1 (Produce invalid Svelte code)</summary>
Input:
```svelte
<div
class="placeholder {arr.some((ele) => {
switch (ele) {
case 1:
return true
case 2:
return true
case 3:
return true
}
})
? ''
: 'hidden'}"
>
</div>
```
Output:
```svelte
<div
class="placeholder {arr.some((ele) => { switch (ele) { case 1: return true case 2: return true case 3: return true } }) ? '' : 'hidden'}"
>
</div>
```
Deno fmt removes all new lines but doesn't add any semicolons.
(There's no deno.json so I'm pretty sure this is deno fmt's default settings)
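The missing semicolons are what makes the collapsed output invalid: `return true case 2:` on one line is a syntax error, since automatic semicolon insertion only applies at line breaks. A quick check (the `collapsed`/`fixed` strings are illustrative fragments, not deno fmt's literal output):

```javascript
// Fragment of the collapsed switch, similar to what deno fmt emits (no separators):
const collapsed = "switch (x) { case 1: return true case 2: return true }";
// Same fragment with semicolons added, as a correct formatter would emit:
const fixed = "switch (x) { case 1: return true; case 2: return true; }";

const parses = (src) => {
  try { new Function("x", src); return true; } catch { return false; }
};

console.log(parses(collapsed), parses(fixed)); // false true
```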
</details>
<details>
<summary>Reproduction # 2 (Deno fmt panics)</summary>
Input:
```svelte
<div
class=" {arr.some((ele) => {
switch (ele) {
case 1:
return true
case 2:
return true
case 3:
return true
}
})
? ''
: 'hidden'}"
>
</div>
```
In this case, there is a leading whitespace at the beginning of the class attribute, and deno fmt panics.
Logs:
```
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux x86_64
Version: 2.0.2
Args: ["deno", "fmt", "--unstable-component", "--watch"]
thread 'tokio-runtime-worker' panicked at cli/tools/fmt.rs:778:13:
Formatting succeeded initially, but failed when ensuring a stable format. This indicates a bug in the formatter where the text it produces is not syntactically correct. As a temporary workaround you can ignore this file (/home/lts20050703/git/test/test.svelte).
Expected ',', got '.' at file:///home/lts20050703/git/test/test.svelte.tsx:2:13
<>{arr.some((ele) => { switch (ele) { case 1: return true case 2: return t...
~
stack backtrace:
0: 0x6449e9552a75 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h1b9dad2a88e955ff
1: 0x6449e95843db - core::fmt::write::h4b5a1270214bc4a7
2: 0x6449e954cd2f - std::io::Write::write_fmt::hd04af345a50c312d
3: 0x6449e9554291 - std::panicking::default_hook::{{closure}}::h96ab15e9936be7ed
4: 0x6449e9553f6c - std::panicking::default_hook::h3cacb9c27561ad33
5: 0x6449e9ba5271 - deno::setup_panic_hook::{{closure}}::hd1640e60cae751ab
6: 0x6449e9554b9f - std::panicking::rust_panic_with_hook::hfe205f6954b2c97b
7: 0x6449e95547c7 - std::panicking::begin_panic_handler::{{closure}}::h6cb44b3a50f28c44
8: 0x6449e9552f39 - std::sys::backtrace::__rust_end_short_backtrace::hf1c1f2a92799bb0e
9: 0x6449e9554454 - rust_begin_unwind
10: 0x6449e9581313 - core::panicking::panic_fmt::h3d8fc78294164da7
11: 0x6449e97e62b0 - tokio::runtime::task::raw::poll::h532ec3edeb01ad5a
12: 0x6449eb7e5bb0 - std::sys::backtrace::__rust_begin_short_backtrace::h9b1417eeeba07a0f
13: 0x6449eb7e69e2 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h17ee1c4af95c0f80
14: 0x6449e955ba9b - std::sys::pal::unix::thread::Thread::new::thread_start::ha8af9c992ef0b208
15: 0x789cfbe9ca94 - start_thread
at ./nptl/pthread_create.c:447:8
16: 0x789cfbf29c3c - __GI___clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
17: 0x0 - <unknown>
```
</details>
<details>
<summary>Reproduction # 3 (Valid code, no error)</summary>
Input:
```svelte
<div
class="{arr.some((ele) => {
switch (ele) {
case 1:
return true
case 2:
return true
case 3:
return true
}
})
? ''
: 'hidden'}"
>
</div>
```
In this case, there's no leading whitespace, and deno fmt successfully produces valid code.
Output:
```svelte
<div
class={arr.some((ele) => {
switch (ele) {
case 1:
return true;
case 2:
return true;
case 3:
return true;
}
})
? ""
: "hidden"}
>
</div>
```
</details> | bug,deno fmt,triage required ๐ | low | Critical |
2,599,466,642 | godot | _init doesn't override a value in inherited scene with @export flag | ### Tested versions
Reproducible in v4.3.stable (77dcf97d8): `_init` of an inherited scene doesn't override an `@export` variable from the base class.
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 2700 Eight-Core Processor (16 Threads)
### Issue description
A class which inherits from another and tries to update an `@export` variable of the parent class from `_init` can't do it.
I've created two classes, one inheriting from the other (e.g. Animal and Dog). Animal has an `@export` variable (e.g. `@export var animal_name: String`). Dog includes an `_init` function which tries to update Animal's `animal_name`. When I preload and then instantiate the Dog scene somewhere in my project, the code from `_init` inside the Dog class executes, but `animal_name` doesn't update like it should.
Once we delete the `@export` annotation everything works fine: in that case `_init` of the inheriting class updates the base class variable.
Example in project:
Animal Scene script:
```gdscript
class_name Animal extends Node2D
@export var animal_name: String # Once we delete @export everything seems to work fine
@onready var animal_name_label = $AnimalNameLabel # Label in the node tree
func _init() -> void:
print("Animal _init Happens")
animal_name = "Animal"
# Called when the node enters the scene tree for the first time.
func _ready() -> void:
animal_name_label.text = animal_name
print(animal_name)
```
Dog Scene (it inherits from Animal) script:
```gdscript
class_name Dog extends Animal
func _init() -> void:
print("Dog _init Happens")
animal_name = "Dog" # If animal_name is @export variable then it doesn't work, otherwise animal_name gets updated
# Called when the node enters the scene tree for the first time.
func _ready() -> void:
super._ready()
pass # Replace with function body.
```
Main scene to which I try to add my Dog scene:
```gdscript
extends Node2D
# Called when the node enters the scene tree for the first time.
func _ready() -> void:
var scene = preload("res://Dog.tscn")
var instantiated_scene = scene.instantiate()
add_child(instantiated_scene)
```
Effect:

### Steps to reproduce
1. Create new scene
2. Attach script to created scene, give it a custom class_name and add any variable with `@export` flag
3. Create new inherited scene using Godot Editor (right click on the previously created scene -> New Inherited Scene)
4. Save Inherited Scene with custom name
5. Attach new script to newly created scene
6. Make the second scene extend the first one (e.g. class_name Dog extends Animal)
7. Add _init inside second scene and try to update previously created variable from the base scene
8. Once we try to programmatically add our base scene to the node tree - preload(scene) - and then instantiate it, the value from our base class won't be updated, yet the code from `_init` in the second class (the class that inherits) will be executed
### Minimal reproduction project (MRP)
[example_project.zip](https://github.com/user-attachments/files/17446819/example_project.zip)
| discussion,topic:core,topic:gdscript,documentation | low | Minor |
2,599,479,420 | rust | Lint overcapturing in impl Trait on 2024 edition | ### Code
```rust
pub fn check(x: &[u8]) -> impl std::fmt::Display {
x[0]
}
```
### Current output
(empty)
### Desired output
A warning, like one produced by `impl_trait_overcaptures`.
### Rationale and extra context
Currently, `impl_trait_overcaptures` lints code that would be overcapturing in edition 2024 on editions < 2024. I think it would be valuable to add a (maybe allow-by-default) lint to detect overcapturing in new, edition 2024 code. Code like this is probably not _meant_ to capture the lifetime of `x`, so linting against this may help to avoid too-strict signatures. If the function was meant to be stricter than necessary (e.g. for future-compat reasons), you can always just `#[allow]` the lint.
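For reference, the fix such a lint would point toward is precise capturing — a sketch (assuming Rust ≥ 1.82, where the `use<..>` bound is stable):

```rust
// Precise capturing: `use<>` makes the opaque type capture nothing, so the
// borrow of `x` ends at the call site instead of lasting as long as the
// returned value.
pub fn check(x: &[u8]) -> impl std::fmt::Display + use<> {
    x[0]
}

fn main() {
    let mut v = vec![1u8, 2, 3];
    let d = check(&v);
    v.push(4); // OK: `d` no longer borrows from `v`
    println!("{d}"); // prints "1"
}
```

Without the `use<>` bound, the `v.push(4)` line is rejected on edition 2024 because `d` would still hold a borrow of `v`.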
### Rust Version
```
$ rustc --version --verbose
rustc 1.83.0-nightly (eb4e23467 2024-10-09)
binary: rustc
commit-hash: eb4e2346748e1760f74fcaa27b42431e0b95f8f3
commit-date: 2024-10-09
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
```
### Anything else?
Alternatively, this could be a Clippy lint. Having it in rustc could probably allow code reuse with the edition lint though.
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"compiler-errors"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | C-enhancement,A-lints,A-diagnostics,T-compiler | low | Critical |
2,599,482,807 | pytorch | `out=` meta device support. | List of operations, whose `out=` variants are not consistent with eager (i.e. run on CPU/CUDA, but fail when using meta devices). I have grouped them according to the error each of them raise.
**No Meta Kernel Registered**
- [ ] `_native_batch_norm_legit`
- [ ] `geqrf`
<details>
<summary>Error Example</summary>
```python
Traceback (most recent call last):
File "examples/ops.py", line 88, in run
f(input_, *args_, **kwargs_, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
NotImplementedError: aten::_native_batch_norm_legit.out: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 96, in test_meta_out
raise RuntimeError(f"eager didn't fail, but meta did.") from meta_err
RuntimeError: eager didn't fail, but meta did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/lib/python3.9/unittest/case.py", line 59, in testPartExecutor
yield
File "/lib/python3.9/unittest/case.py", line 592, in run
self._callTestMethod(testMethod)
File "/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 9: SampleInput(input=Tensor[size=(1, 2, 3), device="cuda:0", dtype=torch.float32], args=(None,None,True,0.5,1e-05), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=9 python ops.py TestCommonCUDA.test_meta_out__native_batch_norm_legit_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
**Other Operations**
- [ ] `nanmean`
<details>
<summary>Error Traceback</summary>
```python
Traceback (most recent call last):
File "examples/ops.py", line 88, in run
f(input_, *args_, **kwargs_, out=out)
File "torch/testing/_internal/opinfo/core.py", line 1169, in __call__
return self.op(*args, **kwargs)
RuntimeError: DispatchStub: unsupported device typemeta
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/testing/_internal/common_device_type.py", line 1140, in test_wrapper
return test(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1371, in only_fn
return fn(slf, *args, **kwargs)
File "examples/ops.py", line 96, in test_meta_out
raise RuntimeError(f"eager didn't fail, but meta did.") from meta_err
RuntimeError: eager didn't fail, but meta did.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/lib/python3.9/unittest/case.py", line 59, in testPartExecutor
yield
File "/lib/python3.9/unittest/case.py", line 592, in run
self._callTestMethod(testMethod)
File "/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
method()
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 448, in instantiated_test
result = test(self, **param_kwargs)
File "torch/testing/_internal/common_utils.py", line 1530, in wrapper
fn(*args, **kwargs)
File "torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 34: SampleInput(input=Tensor[size=(2, 2), device="cuda:0", dtype=torch.float32], args=(), kwargs={'dim': '(0,-1)', 'keepdim': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=34 python ops.py TestCommonCUDA.test_meta_out_nanmean_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
**Dynamic Shape Output**
The operations listed below return tensors of dynamic shape, which means that it's impossible to know the output shape (i.e. implement a meta function) without the actual data.
- ~`linalg_lstsq`~
- ~`masked_select`~
- ~`nonzero`~
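For instance, `nonzero`'s output shape depends on the values in the input, not just its metadata, so there is no shape a meta kernel could return (a small illustration):

```python
import torch

x = torch.tensor([0, 3, 0, 5])
# Two nonzero entries -> shape (2, 1); the same metadata with different
# values would give a different shape, so a meta tensor can't predict it.
print(torch.nonzero(x).shape)  # torch.Size([2, 1])
```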
## Test Setup
In order to reproduce these results, besides the actual test below, we needed to make the `wrapper_set_seed` function a no-op:
```diff
--- a/torch/testing/_internal/common_methods_invocations.py
+++ b/torch/testing/_internal/common_methods_invocations.py
@@ -40,8 +40,9 @@ from torch.testing._internal.common_utils import (
GRADCHECK_NONDET_TOL, slowTest, TEST_WITH_SLOW,
TEST_WITH_TORCHINDUCTOR
)
-from torch.testing._utils import wrapper_set_seed
+# from torch.testing._utils import wrapper_set_seed
+import torch
import torch._refs as refs # noqa: F401
import torch._refs.nn.functional
import torch._refs.special
@@ -50,6 +51,9 @@ import torch._prims as prims # noqa: F401
from torch.utils import _pytree as pytree
+def wrapper_set_seed(op, *args, **kwargs):
+ return op(*args, **kwargs)
+
from packaging import version
from torch.testing._internal.opinfo.core import ( # noqa: F401
--
2.47.0
```
<details>
<summary>OpInfo Test</summary>
```python
import torch
import torch.utils._pytree as pytree
from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_device_type import ops, instantiate_device_type_tests, OpDTypes, onlyCUDA, onlyCPU
from torch.testing._internal.common_utils import TestCase, run_tests
class TestCommon(TestCase):
@ops([op for op in op_db if op.supports_out], allowed_dtypes=(torch.float32,))
def test_dynamo_out(self, device, dtype, op):
samples = list(op.sample_inputs(device, dtype))
for i, sample in enumerate(samples):
torch._dynamo.reset()
input, args, kwargs = (sample.input, sample.args, sample.kwargs)
# Run the functional version of the operation, using eager.
try:
expected = op(input, *args, **kwargs)
if isinstance(expected, tuple):
expected = tuple(expected)
except:
# If that doesn't work out, go to the next sample.
continue
def run(f, dev):
# Create new outputs in the desired device.
out = pytree.tree_map_only(torch.Tensor, lambda t: torch.empty_like(t, device=dev), expected)
# Move inputs to the desired device
stuff = (input, args, kwargs)
stuff = pytree.tree_map_only(torch.Tensor, lambda t: t.to(dev), stuff)
stuff = pytree.tree_map_only(torch.device, lambda d: torch.device(dev), stuff)
stuff = pytree.tree_map_only(str, lambda v: dev if v == device else v, stuff)
input_, args_, kwargs_ = stuff
# Try running the operation, and return the raised error, if any.
try:
f(input_, *args_, **kwargs_, out=out)
except Exception as e:
return e
eager_err = run(op, device)
meta_err = run(op, "meta")
if eager_err is None and meta_err is not None:
raise RuntimeError(f"eager didn't fail, but meta did.") from meta_err
elif eager_err is not None and meta_err is None:
raise RuntimeError(f"eager failed, but meta didn't.") from eager_err
instantiate_device_type_tests(TestCommon, globals())
if __name__ == "__main__":
run_tests()
```
</details>
### Versions
PyTorch version: 2.5.0a0+git7128504
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
cc @ezyang @eellison @bdhirsh | triaged,module: meta tensors | low | Critical |
2,599,499,667 | ui | [bug]: (Module not found: Can't resolve '@/components/ui/input') Next.js Application Fails to Build with Docker but Works Locally and on Vercel | ### Describe the bug
I'm experiencing an issue where my Next.js application builds successfully locally and on Vercel, but fails when attempting to build using Docker. The build process in Docker results in module resolution errors, specifically related to path aliases.
**Build Error Logs:**
```
> [6/7] RUN npm run build:
0.375
0.375 > future@0.1.0 build
0.375 > next build
0.375
1.008 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.008 This information is used to shape Next.js' roadmap and prioritize features.
1.008 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.008 https://nextjs.org/telemetry
1.008
1.077 ▲ Next.js 14.2.12
1.077
1.155 Creating an optimized production build ...
13.63 Failed to compile.
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/components/ui/form'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/components/ui/input'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/hooks/use-toast'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/hooks/useAuth'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/page.tsx
13.63 Module not found: Can't resolve '@/hooks/useAuth'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.64
13.64 > Build failed because of webpack errors
------
Dockerfile:20
--------------------
18 |
19 | # Build the Next.js app
20 | >>> RUN npm run build
21 |
22 | # Remove devDependencies to reduce image size
--------------------
ERROR: failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
```
Steps to Reproduce:
1. **Local Build:** run `npm run build` locally. Result: builds successfully.
2. **Vercel Deployment:** deploy the application to Vercel. Result: deploys and works fine.
3. **Docker Build:** use the provided Dockerfile to build the Docker image with `docker build -t projectD .`. Result: fails with module resolution errors as shown in the logs.

The Docker build fails during the `npm run build` step with errors indicating that certain modules cannot be resolved, specifically those using the `@` path alias.
**Configuration Files:**
1. Next.js Configuration (next.config.js):
```js
// const NextI18NextConfig = require('./next-i18next.config.js');
/** @type {import('next').NextConfig} */
const path = require('path');
const nextConfig = {
images: { unoptimized: true },
eslint: {
ignoreDuringBuilds: true,
},
};
module.exports = nextConfig;
```
2. Dockerfile:
```dockerfile
# Use an official Node.js runtime as the base image
FROM node:18-alpine
# Set environment variables
ENV NODE_ENV=production
# Set the working directory
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install all dependencies (including devDependencies)
RUN npm install
# Copy the rest of the application code
COPY . .
# Build the Next.js app
RUN npm run build
# Remove devDependencies to reduce image size
RUN npm prune --production
# Expose the port the app runs on
EXPOSE 3000
# Define the command to run the app
CMD ["npm", "start"]
```
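One thing worth checking (an assumption on my part, not confirmed from the logs): Next.js resolves the `@/*` alias from the `paths` mapping in `tsconfig.json`/`jsconfig.json`. If that file — or the `components`/`hooks` directories — is excluded by a `.dockerignore`, or the import paths differ only in letter case (macOS is case-insensitive, the Linux build container is not), the modules resolve locally but not in Docker. The default shadcn mapping looks like:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./*"]
    }
  }
}
```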
### Affected component/components
input
### How to reproduce
1. **Local Build:** run `npm run build` locally. Result: builds successfully.
2. **Vercel Deployment:** deploy the application to Vercel. Result: deploys and works fine.
3. **Docker Build:** use the provided Dockerfile to build the Docker image with `docker build -t future .`. Result: fails with module resolution errors as shown in the logs.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
> [6/7] RUN npm run build:
0.375
0.375 > future@0.1.0 build
0.375 > next build
0.375
1.008 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.008 This information is used to shape Next.js' roadmap and prioritize features.
1.008 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.008 https://nextjs.org/telemetry
1.008
1.077 ▲ Next.js 14.2.12
1.077
1.155 Creating an optimized production build ...
13.63 Failed to compile.
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/components/ui/form'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/components/ui/input'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/hooks/use-toast'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/components/LoginForm.tsx
13.63 Module not found: Can't resolve '@/hooks/useAuth'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.63 Import trace for requested module:
13.63 ./app/auth/login/page.tsx
13.63
13.63 ./app/auth/login/page.tsx
13.63 Module not found: Can't resolve '@/hooks/useAuth'
13.63
13.63 https://nextjs.org/docs/messages/module-not-found
13.63
13.64
13.64 > Build failed because of webpack errors
------
Dockerfile:20
--------------------
18 |
19 | # Build the Next.js app
20 | >>> RUN npm run build
21 |
22 | # Remove devDependencies to reduce image size
--------------------
ERROR: failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
```
### System Info
```bash
docker
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
I'm seeking help to identify why the Docker build process cannot resolve modules using the @ path alias, despite the application building successfully in other environments. If additional information is required, I can provide further details such as complete project structure, additional configuration files, or logs (I've found exisiting issues but didn't fix my problem)
| bug | low | Critical |
2,599,503,469 | godot | The program interface is not compatible with my device screen | ### Tested versions

### System information
Godot v4.3.stable - Android - Vulkan (Mobile) - integrated PowerVR Rogue GE8320 - (8 Threads)
### Issue description
Items are not showing on the left side of the Godot interface. I know I am using the beta version of Godot, but I want to make sure the issue is fixed in the next releases.
My device type: Redmi 9A.

### Steps to reproduce
...
### Minimal reproduction project (MRP)
... | bug,platform:android,topic:editor | low | Minor |
2,599,521,338 | deno | fmt does not preserve CSS property casing | **Input:**
```css
@plugin "daisyui/theme" {
name: "dark";
prefersDark: true;
}
```
**Output:**
```css
@plugin "daisyui/theme" {
name: "dark";
prefersdark: true;
}
```
**Expected output:**
<!-- prettier-ignore -->
```css
@plugin "daisyui/theme" {
name: "dark";
prefersDark: true;
}
```
**Why?**
The upcoming version of DaisyUI (v5) involves writing configuration directly in the CSS file. Some properties are in camelCase and the casing needs to be preserved; otherwise it won't work. | bug,deno fmt | low | Minor |
2,599,522,473 | ollama | multi-part model+safetensors | How do I run a multi-part GGUF model on Ollama?
https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GGUF/blob/main/qwen2.5-7b-instruct-fp16-00004-of-00004.gguf
qwen2.5-7b-instruct-fp16-00001-of-00004.gguf
qwen2.5-7b-instruct-fp16-00002-of-00004.gguf
qwen2.5-7b-instruct-fp16-00003-of-00004.gguf
qwen2.5-7b-instruct-fp16-00004-of-00004.gguf
Also, how do I run a safetensors model on Ollama? | feature request | low | Minor |
2,599,541,372 | kubernetes | Cannot create UnstructuredExtractor - duplicate entry for /v1, Kind=APIResourceList | ### What happened?
I am trying to create an `UnstructuredExtractor`, with the following code (error handling omitted for brevity):
```go
dynamic, _ := provider.MakeDynamicClient(kubeconfig)
discovery, _ := provider.MakeDiscoveryClient(kubeconfig)
extractor, err := acmetav1.NewUnstructuredExtractor(discovery)
```
After this code `err != nil`, with the following error:
```
failed generating initial GVK Parser: duplicate entry for /v1, Kind=APIResourceList
```
Stepping through with a debugger, the stack trace seems to be:
- In [`NewUnstructuredExtractor`](https://github.com/kubernetes/kubernetes/blob/4f796c02f77fb95d42cd161ea663dd1bf05e372f/staging/src/k8s.io/client-go/applyconfigurations/meta/v1/unstructured.go#L96)
- In [`regenerateGVKParser`](https://github.com/kubernetes/kubernetes/blob/4f796c02f77fb95d42cd161ea663dd1bf05e372f/staging/src/k8s.io/client-go/applyconfigurations/meta/v1/unstructured.go#L69)
- In [`NewGVKParser`](https://github.com/kubernetes/kubernetes/blob/4f796c02f77fb95d42cd161ea663dd1bf05e372f/staging/src/k8s.io/apimachinery/pkg/util/managedfields/gvkparser.go#L73)
### What did you expect to happen?
I expect to get an `UnstructuredExtractor` that I can use to extract the fields of an object that were set by my CLI tool.
### How can we reproduce it (as minimally and precisely as possible)?
This can be reproduced against a `v1.30` or newer (`1.29` is not affected) [kind](https://kind.sigs.k8s.io/) cluster.
```bash
kubectl apply --server-side -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
cat <<EOF | kubectl apply --server-side -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec: {}
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}
EOF
```
This causes the duplication of the `APIResourceList` definition.
Then, it is no longer possible to create an `UnstructuredClient` against the cluster using the code above.
### Anything else we need to know?
I suspect the bug is in `client-go`, and that `UnstructuredExtractor` should be constructable even in the presence of duplicate resources. I'm not sure if it is possible for the Calico API server to avoid this duplication, given that it supports both `1.29` and `1.30`.
It's also possible that I'm building the `UnstructuredExtractor` wrong; I wasn't able to find many examples.
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.31.0
```
</details>
### Cloud provider
None
### OS version
N/A (reproduces on kind)
### Install tools
https://kind.sigs.k8s.io/
### Container runtime (CRI) and version (if applicable)
_No response_
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_ | kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,599,559,869 | pytorch | `dtype` promotion of `out=` functions on meta inputs not consistent. | ### ๐ Describe the bug
List of operations whose `out=` functions on meta inputs are not consistent when run with real tensors (e.g. CPU or CUDA). Specifically, I changed the data-type of the output tensor from `float32` to `float64`, and checked whether eager with meta and real tensors behave the same.
According to [the `out=` specification](https://github.com/pytorch/pytorch/wiki/Developer-FAQ#how-does-out-work-in-pytorch), there are some operations that run dtype promotion on the output tensors (given that they are of the same kind), and some that require them to be exactly of the expected dtype. Therefore, using CPU/CUDA inputs as the ground truth, if the behaviors are not the same, it likely means that the meta implementation has a bug.
As an example, `aminmax` decomposition decorated with `type_casts` expects the dtypes to be exact. However, in its decomposition (which is used as a meta implementation), it uses the `@out_wrapper(...)` decorator without specifying `exact_dtype=True`.
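A minimal sketch of the discrepancy being tested, using `aminmax` from the list below (on builds where the fix has already landed, the meta path may raise too):

```python
import torch

def out_dtype_error(device: str):
    """Return the exception raised by aminmax with float64 outs, or None."""
    x = torch.ones(2, 3, device=device, dtype=torch.float32)
    mn = torch.empty(3, device=device, dtype=torch.float64)
    mx = torch.empty(3, device=device, dtype=torch.float64)
    try:
        torch.aminmax(x, dim=0, out=(mn, mx))
    except Exception as e:
        return e
    return None

# Real devices reject the float64 outputs (exact dtypes required), while the
# buggy meta kernel accepted them, so the two paths disagreed.
print(out_dtype_error("cpu"))
print(out_dtype_error("meta"))
```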
**Failed with real inputs, but didn't fail with meta inputs**
- [x] `abs`: #140288
- [ ] `addbmm`
- [x] #138520
- [ ] `addmv`
- [ ] `alias_copy`
- [ ] `all`
- [ ] `amax`
- [ ] `amin`
- [ ] `aminmax`
- [ ] `any`
- [ ] `as_strided_copy`
- [ ] `baddbmm`
- [ ] `bucketize`
- [x] `ceil`: #140288
- [ ] `conj_physical`
- [ ] `cross`
- [ ] `cummax`
- [ ] `cummin`
- [ ] `diag`
- [ ] `diagonal_copy`
- [ ] `dot`
- [ ] `expand_copy`
- [ ] `fft_ihfft2`
- [ ] `fft_ihfftn`
- [x] `floor`: #140288
- [x] `frac`: #140288
- [ ] `frexp`
- [ ] `heaviside`
- [ ] `index_add`
- [ ] `index_copy`
- [ ] `index_select`
- [ ] `isin`
- [x] `isneginf`: #140288
- [x] `isposinf`: #140288
- [ ] `kthvalue`
- [ ] `lerp`
- [ ] `linalg_cross`
- [ ] `linalg_eigh`
- [ ] `linalg_eigvalsh`
- [ ] `linalg_ldl_factor`
- [ ] `linalg_ldl_factor_ex`
- [ ] `linalg_ldl_solve`
- [ ] `linalg_lu`
- [ ] `linalg_lu_factor`
- [ ] `linalg_lu_factor_ex`
- [ ] `linalg_lu_solve`
- [ ] `linalg_matrix_power`
- [ ] `linalg_qr`
- [ ] `linalg_slogdet`
- [ ] `linalg_solve`
- [ ] `linalg_solve_ex`
- [ ] `linalg_solve_triangular`
- [x] #140289
- [ ] `logcumsumexp`
- [ ] `lu_solve`
- [ ] `lu_unpack`
- [ ] `matmul`
- [ ] `max_reduction_no_dim`
- [ ] `min_reduction_no_dim`
- [ ] `mm`
- [ ] `mode`
- [ ] `msort`
- [ ] `multinomial`
- [ ] `mv`
- [ ] `nan_to_num`
- [ ] `narrow_copy`
- [ ] `native_batch_norm`
- [ ] `neg`
- [ ] `nn_functional_avg_pool3d`
- [ ] `nn_functional_gelu`
- [ ] `nn_functional_hardshrink`
- [ ] `nn_functional_linear`
- [ ] `nn_functional_logsigmoid`
- [ ] `nn_functional_softplus`
- [ ] `nn_functional_softshrink`
- [ ] `ormqr`
- [x] #140287
- [ ] `qr`
- [ ] `renorm`
- [ ] `round`
- [ ] `round_decimals_0`
- [ ] `scatter_reduce_amax`
- [ ] `scatter_reduce_amin`
- [ ] `scatter_reduce_mean`
- [ ] `scatter_reduce_prod`
- [ ] `scatter_reduce_sum`
- [ ] `searchsorted`
- [x] `sgn`: #140288
- [x] `sign`: #140288
- [x] `signbit`: #140288
- [ ] `slice_scatter`
- [ ] `softmax`
- [ ] `sort`
- [ ] `sparse_sampled_addmm`
- [x] `square`: #140287
- [ ] `squeeze_copy`
- [ ] `t_copy`
- [ ] `take`
- [ ] `transpose_copy`
- [ ] `tril`
- [x] #140286
- [ ] `triu`
- [x] `trunc`: #140288
- [ ] `unfold_copy`
- [ ] `unsqueeze_copy`
- [ ] `vdot`
- [ ] `view_copy`
- [ ] `where`
**Didn't fail with real inputs, but failed with meta inputs**
Except for `mean` (which is an actual bug), all the other operations present the same behavior as identified by #138396.
- [ ] `geqrf`
- [ ] `mean`
- [ ] `nanmean`
**Dynamic Shape Outputs**
Similar to #138396, this operation outputs tensors of dynamic shape. Thus, there's no way to implement a meta function for it.
- ~`linalg_lstsq`~
## Test Setup
<details>
<summary>OpInfo Test</summary>
```python
import torch
import torch.utils._pytree as pytree
from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_device_type import ops, instantiate_device_type_tests, OpDTypes, onlyCUDA, onlyCPU
from torch.testing._internal.common_utils import TestCase, run_tests
class TestCommon(TestCase):
@ops([op for op in op_db if op.supports_out], allowed_dtypes=(torch.float32,))
def test_meta_dtype_error_out(self, device, dtype, op):
samples = list(op.sample_inputs(device, dtype))
for i, sample in enumerate(samples):
torch._dynamo.reset()
input, args, kwargs = (sample.input, sample.args, sample.kwargs)
# Run the functional version of the operation, using eager.
try:
expected = op(input, *args, **kwargs)
if isinstance(expected, tuple):
expected = tuple(expected)
except:
# If that doesn't work out, go to the next sample.
continue
def run(f, dev):
# Create new outputs in the desired device.
out = pytree.tree_map_only(torch.Tensor, lambda t: torch.empty_like(t, device=dev, dtype=torch.float64), expected)
# Move inputs to the desired device
stuff = (input, args, kwargs)
stuff = pytree.tree_map_only(torch.Tensor, lambda t: t.to(dev), stuff)
stuff = pytree.tree_map_only(torch.device, lambda d: torch.device(dev), stuff)
stuff = pytree.tree_map_only(str, lambda v: dev if v == device else v, stuff)
input_, args_, kwargs_ = stuff
# Try running the operation, and return the raised error, if any.
try:
f(input_, *args_, **kwargs_, out=out)
except Exception as e:
return e
eager_err = run(op, device)
meta_err = run(op, "meta")
if eager_err is None and meta_err is not None:
raise RuntimeError(f"eager didn't fail, but meta did.") from meta_err
elif eager_err is not None and meta_err is None:
raise RuntimeError(f"eager failed, but meta didn't.") from eager_err
instantiate_device_type_tests(TestCommon, globals())
if __name__ == "__main__":
run_tests()
```
</details>
### Versions
PyTorch version: 2.5.0a0+git7128504
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
cc @nairbv @mruberry @ezyang @eellison @bdhirsh | triaged,module: type promotion,module: meta tensors | low | Critical |
2,599,577,997 | vscode | VS Code unable to recognize any configured extension settings |
Does this issue occur when all extensions are disabled?: Yes/No
VS Code
Version: 1.94.2
Commit: 384ff7382de624fb94dbaf6da11977bba1ecd427
Date: 2024-10-09T16:08:44.566Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0
OS
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2:-:internal"
HOME_URL="https://amazonlinux.com/"
VARIANT="internal"
None of my extensions are able to load any configuration values, and config values in settings.json are "greyed out" as if VS Code cannot recognize them. This issue has been occurring even after trying different VS Code versions.
Steps to Reproduce:
1. Not sure

| info-needed | low | Critical |
2,599,590,104 | excalidraw | Japanese x Chinese unicode issue | Japanese Kanji characters have a common origin with Chinese Hanzi, but they are not completely the same, as they have slightly different keystrokes. The problem is that they share unicode codepoints (likely unicode simplified too much here), which essentially means that a single font has to prefer one variation over the other. With #8530, most Kanji will be displayed as Hanzi (as Xiaolai is biased towards Chinese), which might be unreadable for some Japanese or even unusable in certain Japanese use cases/domains. See the following example:

Further discussion https://discord.com/channels/723672430744174682/1291770040764465244/1293553830251860030.
To solve this we likely have no other option than to introduce a separate Japanese fallback font with Kanji and "activate it" (~push it above Xiaolai in the fallback chain) once the Japanese variation is preferred/detected.
**Technical challenges**
- Japanese font should be fairly similar to Xiaolai
- could we use Kanji from `Kose font`?
~ the Japanese variant of the Xiaolai font, https://github.com/lxgw/kose-font
- How shall we decide when Japanese variation should be activated?
- consider having a scene/workspace toggle (defaulting to `false`) to always prefer Japanese variation
- consider autodetection (toggle on) based on client-side location/ app language? (not bullet-proof, but could work for most cases)
- consider autodetecting Japanese based on Hiragana / Katakana in the workspace > scene > element (with element allowing combining Kanji/Hanzi codepoints in one scene, but likely being more complex and with a small character set for auto-detection)
- Introducing a new Japanese font should not cause a breaking change for existing Japanese diagrams
- consider specifying fallback fonts on the element level
- i.e. `fallbackFamilies: "SetoFont"` or `fallbackFamilies: "SetoFont, Segoe UI Emoji"`
- Both client-side and server-side exports shall receive the flag to include the Japanese fallback font
- could be especially challenging for server-side PNG / PDF in E+, as we would have to have "two sets" of merged fonts (one with just Chinese, other with just Japanese; possibly per each family x weight variation) and decide in runtime which one to use
- that would also lock us into one option per scene, due to the limitations of `Resvg` and `PdfKit` (unless we find a workaround, auto-detection on the element would be worthless) | enhancement,font | low | Minor |
2,599,596,984 | godot | Theme / Theme Overrides Inspectors Inconsistency | ### Tested versions
v4.4.dev3.official [f4af8201b]
### System information
Windows 10 - v4.4.dev3.official [f4af8201b] - Vulkan
### Issue description

The Theme Inspector UI lists properties in alphabetical order (please see image),
but the Theme Overrides UI (Inspector > Control > Theme Overrides) lists properties in a custom order.
It would be much more convenient to sort the properties in both inspectors in the same order.
### Steps to reproduce
Create theme
Create theme overrides
### Minimal reproduction project (MRP)
file:///mnt/data_05tb/Art/Games/theme_overrides_inconsistency/theme_overrides_inconsistency.zip | enhancement,topic:editor,usability | low | Minor |
2,599,600,841 | excalidraw | "Copy to clipboard as SVG" fails with `NotAllowedError` in Safari | "Copy to clipboard as SVG" fails with `NotAllowedError` in Safari.
**Steps to reproduce**
1. go to https://excalidraw.com/#json=hfnjGpV1NwRvEs_wfHTmK,wOiYn6lSALmGVH8SmJfXPw
2. trigger "Copy to clipboard as SVG" from the context menu
**Actual result**

```
NotAllowedError: The request is not allowed by the user agent or the platform in the current context, possibly because the user denied
```
**Expected result**
SVG is copied to the clipboard, as it is in Chromium-based browsers. | bug,safari,export | low | Critical |
2,599,611,143 | deno | `@sentry/profiling-node` support problem: `dyld missing symbol called` | Version: Deno 2.0.2
I'm trying to migrate a company codebase to Deno.
I suspect this is a result of https://github.com/getsentry/profiling-node/issues/123. I don't believe native addons built with `nan` (as opposed to N-API) are supported in Deno yet.
# Reproduction
Check out this repo https://github.com/lino-levan/sentry-profiling-deno-bug | bug,node native extension | low | Critical |
2,599,653,411 | tauri | [bug] http reqwest | ### Describe the bug
None of the solutions I've found work for my configuration, so I decided to use `https` on localhost. But I run into a problem when declaring my `mkcert`-signed certificates.
```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
<domain-config>
<domain includeSubdomains="true">localhost</domain>
<trust-anchors>
<certificates src="@raw/certs" />
</trust-anchors>
</domain-config>
</network-security-config>
```
I copied my `*.pem` certificates into `src-tauri/gen/android/app/src/main/res/raw/certs/`.
Here my error :
```bash
* What went wrong:
Execution failed for task ':app:processX86_64DebugResources'.
> A failure occurred while executing com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask$TaskAction
> Android resource linking failed
ERROR: ~/code/project/src-tauri/gen/android/app/src/main/res/xml/network_security_config.xml:6: AAPT: error: resource raw/certs (aka com.project.app:raw/certs) not found.
```
EDIT: my question is "why is `@raw` not resolved?" and what can I do?
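For context, a possible-cause sketch (an assumption on my part, not verified against Tauri's generated project): Android raw resources are flat files, so `@raw/<name>` resolves to a single file `res/raw/<name>.<ext>`, not a directory — a `res/raw/certs/` folder would leave `@raw/certs` unresolved. A minimal layout that should resolve, with `localhost.pem` standing in for the mkcert output:

```shell
# Hypothetical paths; adjust to your gen/android tree.
RES=src-tauri/gen/android/app/src/main/res
mkdir -p "$RES/raw"
printf 'dummy pem contents' > localhost.pem   # stand-in for mkcert's output
cp localhost.pem "$RES/raw/certs.pem"         # referenced as @raw/certs (no extension)
ls "$RES/raw"
```

With that layout, the existing `<certificates src="@raw/certs" />` entry should resolve, since AAPT names a raw resource after the file's base name.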
### Reproduction
- add `tauri_plugin_http` and all the configuration from the guide on the Tauri website
- add (in my case) `https://localhost:3000` to the `capabilities` config
- generate self-signed certificates with `mkcert`
- follow the official doc https://developer.android.com/privacy-and-security/security-config?hl=fr#trust-anchors
- run `cargo tauri android dev`
### Expected behavior
The Android app runs and can make an external request.
### Full `tauri info` output
```text
[โ] Environment
- OS: Ubuntu 24.4.0 x86_64 (X64)
โ webkit2gtk-4.1: 2.44.0
โ rsvg2: 2.58.0
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 20.17.0
- pnpm: 9.10.0
- npm: 10.8.2
[-] Packages
- tauri ๐ฆ: 2.0.4
- tauri-build ๐ฆ: 2.0.1
- wry ๐ฆ: 0.46.1
- tao ๐ฆ: 0.30.3
- tauri-cli ๐ฆ: 2.0.2
[-] Plugins
- tauri-plugin-http ๐ฆ: 2.0.1
- tauri-plugin-fs ๐ฆ: 2.0.1
- tauri-plugin-log ๐ฆ: 2.0.1
[-] App
- build-type: bundle
- CSP: null
- frontendDist: ../dist
- devUrl: http://localhost:1420/
```
### Stack trace
_No response_
### Additional context
This is originally a web app built with `leptos`. So, there is a server and the desktop version can make a request without problem and without `tauri_plugin_http::init` using `invoke` fn. This did not work with `android` version, so I tried to use `https`. | type: bug,status: needs triage | low | Critical |
2,599,662,543 | rust | Macros do not accept empty `vis` at the end of the token tree | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
macro_rules! m {
( ($v:vis) ) => {};
}
fn main() {
m!( () );
}
```
I expected to see this happen: the code compiles successfully. [The reference says](https://doc.rust-lang.org/stable/reference/macros-by-example.html#metavariables) `vis` is "a possibly empty [Visibility](https://doc.rust-lang.org/stable/reference/visibility-and-privacy.html) qualifier", so it should match empty.
Instead, this happened: the code is rejected with:
```
error: no rules expected the token `)`
--> src/main.rs:6:10
|
1 | macro_rules! m {
| -------------- when calling this macro
...
6 | m!( () );
| ^ no rules expected this token in macro call
|
note: while trying to match meta-variable `$v:vis`
--> src/main.rs:2:8
|
2 | ( ($v:vis) ) => {};
| ^^^^^^
```
[Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=19bf5dcbe88b8d0b7dc9b8404fc234f5).
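For anyone hitting this, a workaround sketch (not a fix): give the `vis` fragment a follow token — here a trailing comma — so it is not asked to match directly against the closing delimiter; the empty visibility then matches as documented:

```rust
// Workaround sketch: `$v:vis` is no longer the last thing before `)`,
// so an empty visibility can match.
macro_rules! m {
    ( ($v:vis ,) ) => { stringify!($v) };
}

fn main() {
    assert_eq!(m!( (,) ), "");        // empty visibility now matches
    assert_eq!(m!( (pub,) ), "pub");  // explicit visibility still works
}
```

The callers have to write the extra comma, which is ugly, but it sidesteps the matcher limitation described above.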
@rustbot label +A-macros | A-macros,T-lang,T-compiler,C-bug,WG-macros | low | Critical |
2,599,770,137 | PowerToys | Advanced paste new feature request | ### Description of the new feature / enhancement
Clipboard launches automatically when you copy something.
### Scenario when this would be used?
Sometimes, when doing development, I find myself copying multiple things and wanting to paste each of them in a different place. I have to enter the command, click on the first item and paste it, then enter the command again to relaunch the clipboard, scroll to find the sixth item, and paste it. It would be great if the clipboard remained open so I could paste each item without having to enter the same command again and scroll to look for what I need.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,599,851,372 | tauri | [bug] A new Channel instance can only transmit once. | ### Describe the bug
Under Linux GNOME, a new Channel instance can only transmit once. The backend outputs 6 times, while the frontend only outputs 3 times.

### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
$ pnpm run tauri info [10:28:13]
> wave@0.1.0 tauri /home/lei/wave
> tauri "info"
[โ] Environment
- OS: Arch Linux Unknown x86_64 (X64)
โ webkit2gtk-4.1: 2.46.1
โ rsvg2: 2.59.1
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (2024-05-07)
โ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.9.0
- pnpm: 9.12.1
- bun: 1.1.30
- deno: deno 1.46.3
[-] Packages
- tauri ๐ฆ: 2.0.4
- tauri-build ๐ฆ: 2.0.1
- wry ๐ฆ: 0.46.2
- tao ๐ฆ: 0.30.3
- @tauri-apps/api ๎: 2.0.2
- @tauri-apps/cli ๎: 2.0.3
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.0.1
- @tauri-apps/plugin-shell ๎: 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,599,864,220 | godot | `body shape entered` signal called *multiple times* in the same frame, with the same shape | ### Tested versions
- Reproducible in: v4.3.stable.mono.official [77dcf97d8]
### System information
Windows 10 - Vulkan (forward+) - dedicated
### Issue description
Sometimes the **body_shape_entered** signal is called **multiple times** in the same frame with the same shape.
This happens when contact monitor is enabled and max contacts >= 2.
Does not always happen.... but it is fairly consistent.
**I have an MRP setup that can reproduce this consistently on my machine**
Here's a print of the collision log:

This caught me by surprise.
In my use-case, all was working great with _body_entered_.... and then it all broke when I changed it to _body_shape_entered_.
I was doing some stuff when the collision happened where I extracted its shape and queue_freed the body.
So the second collision call broke the game due to the collision shape not being there anymore.
So this problem doesn't seem to happen with **body_entered**... only with **body_shape_entered**...
And yes, it is the same shape_index/local_shape_index.
And objects were single shapes too.
### Steps to reproduce
- Create a player rigidbody with contact monitor enabled and at least 2 contacts
- Connect body_shape_entered to player
Result: Collisions will occasionally happen multiple times a frame
MRP example:
- Run the MRP. Check prints for multiple collisions on the same object in the same frame.
- Uncomment lines 18 and 19 to see it crash instead of only printing logs when it happens
### Minimal reproduction project (MRP)
[MRP_min-shape-twice.zip](https://github.com/user-attachments/files/17448145/MRP_min-shape-twice.zip)
| bug,topic:physics | low | Critical |
2,599,893,582 | godot | Trying to select objects in TopOrthogonal view gives error message in some situations | ### Tested versions
- Reproducible in: v4.3.stable.mono.official [77dcf97d8]
### System information
Windows 10 - Vulkan (forward+) - dedicated
### Issue description
Trying to select objects with the mouse in the editor in the TopOrthogonal view can cause an error message to appear in the console in some situations.
This does not cause any problems. The selection still works....
But it spits this error with each mouse click:
"The target vector and up vector can't be parallel to each other."

Here's a video:
https://github.com/user-attachments/assets/5f27c1bd-17c0-49a9-9bfc-4212c17b35e3
I can consistently reproduce this by adding the sun to the scene.
This is produced by the gizmo calling its intersect_ray to check for selection,
which calls set_look_at with an up vector parallel to the look direction due to the orthogonal top-down view (hence the error).
Not really sure what this does. I'm guessing it is used to rotate the gizmo's icon so it can check for mouse selection.
https://github.com/godotengine/godot/blob/44fa552343722bb048e2d7c6d3661174a95a8a3c/editor/plugins/node_3d_editor_gizmos.cpp#L655-L659
### Steps to reproduce
- Create a new 3D scene
- Add the Sun to the scene
- Go to TopOrthogonal view
- Click anywhere (maybe near the origin)
### Minimal reproduction project (MRP)
[MRP_min-shape-twice.zip](https://github.com/user-attachments/files/17448306/MRP_min-shape-twice.zip)
| bug,topic:editor,topic:3d | low | Critical |
2,599,894,062 | rust | `std::sync::OnceLock` is `Eq` but not `Hash` | The standard library type `std::sync::OnceLock` is `Eq` but not `Hash`. From a theoretical and practical perspective, it should probably be both or neither.
The argument for neither is that the result of `Eq` can change because of a different thread as soon as it is returned. Similarly, the value of `Hash` can change between computing the hash and a follow-up comparison with `Eq`. Both of these are expected shortcomings of types that can be changed by other threads.
The argument for both is that `Hash` is fundamentally a property related to equality, and implementing it does not rule out usages in containers where some external invariants guarantee the desired behavior.
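As a stopgap, here is a hedged sketch of the "both" option — a hypothetical newtype (names are mine, not std's) whose `Hash` hashes the same `get()` snapshot that `OnceLock`'s `PartialEq` compares, keeping the two traits consistent:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::OnceLock;

// Hypothetical wrapper: hash the Option<&T> snapshot, so an unset cell
// hashes like None and a set cell hashes its value — mirroring Eq.
struct HashableOnce<T>(OnceLock<T>);

impl<T: Hash> Hash for HashableOnce<T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.get().hash(state);
    }
}

fn fingerprint<T: Hash>(v: &HashableOnce<T>) -> u64 {
    let mut s = DefaultHasher::new();
    v.hash(&mut s);
    s.finish()
}

fn main() {
    let unset: HashableOnce<u32> = HashableOnce(OnceLock::new());
    let other_unset: HashableOnce<u32> = HashableOnce(OnceLock::new());
    let set: HashableOnce<u32> = HashableOnce(OnceLock::new());
    set.0.set(1).unwrap();
    // Equal (both-unset) cells produce equal hashes...
    assert_eq!(fingerprint(&unset), fingerprint(&other_unset));
    // ...and a set cell hashes differently from an unset one here.
    assert_ne!(fingerprint(&unset), fingerprint(&set));
}
```

The same caveat from above applies: another thread can set the cell between hashing and a later `Eq` check, so this is only sound under external invariants.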
| T-libs-api,C-bug,T-libs | low | Minor |
2,599,902,595 | rust | Regression: geoarrow crate does not compile in release mode on 1.82 | This issue is originally reported by @kylebarron at https://github.com/rust-lang/rust/issues/128887#issuecomment-2423159520
The `geoarrow` crate does not compile in release mode on rust 1.82.0, despite compiling fine in debug mode or in rust 1.81.0.
The code can be obtained by:
```sh
git clone https://github.com/geoarrow/geoarrow-rs
cd geoarrow-rs
git checkout 0b6715c6a56f0115f9078803fae945700713b22f
```
The following commands give compilation errors:
```sh
rustup run 1.82 cargo build --release
rustup run nightly cargo build --release
```
The following commands compile fine without errors:
```sh
rustup run 1.81 cargo build --release
rustup run 1.81 cargo build
rustup run 1.82 cargo build
rustup run nightly cargo build
RUSTFLAGS='-Zinline-mir=no' rustup run nightly cargo build --release
```
Note that this is issue is distinct from the previous 1.80-to-nightly geoarrow regression, which is fixed in #129714 and tested in #129757.
@rustbot labels regression-from-stable-to-stable | P-high,T-compiler,regression-from-stable-to-stable,C-bug,A-mir-opt-inlining,T-types,WG-mir-opt | low | Critical |
2,599,914,726 | pytorch | `_amp_foreach_non_finite_check_and_unscale_` can be torch.compiled inside torch.amp, but not in identical code outside it | ### 🐛 Describe the bug
If I torch.compile `torch.amp.GradScaler`, it works. But if I copy paste grad_scaler.py and import GradScaler from there, I receive an error.
To reproduce (testcase taken from [here](https://gist.github.com/mcarilli/bf013d2d2f4b4dd21ade30c9b52d5e2e)):
```python
import torch
N, D_in, D_out = 64, 1024, 16
x = torch.randn(N, D_in, device='cuda')
y = torch.randn(N, D_out, device='cuda')
model = torch.nn.Linear(D_in, D_out).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()
from torch.amp import GradScaler
# from gradscaler2 import GradScaler
scaler = GradScaler()
@torch.compile
def run_fwd_bwd():
with torch.amp.autocast('cuda'):
y_pred = model(x)
loss = loss_fn(y_pred, y)
scaler.scale(loss).backward()
scaler.step(optimizer)
optimizer.zero_grad(set_to_none=True)
scaler.update()
for t in range(20):
run_fwd_bwd()
```
The above code will run fine.
If you swap the GradScaler import to:
```python
# from torch.amp import GradScaler
from gradscaler2 import GradScaler
```
and copypaste https://raw.githubusercontent.com/pytorch/pytorch/refs/heads/main/torch/amp/grad_scaler.py into the local file `gradscaler2.py`, then it will fail, with the following error:
### Error logs
```
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] Graph break from `Tensor.item()`, consider setting:
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] torch._dynamo.config.capture_scalar_outputs = True
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] or:
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] to include these operations in the captured graph.
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0]
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] Graph break: from user code at:
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] File "/mnt/clusterstorage/workspace/kevin/basic_training.py", line 22, in torch_dynamo_resume_in_run_fwd_bwd_at_21
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] scaler.step(optimizer)
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] File "/mnt/clusterstorage/workspace/kevin/gradscaler2.py", line 457, in step
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] File "/mnt/clusterstorage/workspace/kevin/gradscaler2.py", line 351, in _maybe_opt_step
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] if not sum(v.item() for v in optimizer_state["found_inf_per_device"].values()):
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] File "/mnt/clusterstorage/workspace/kevin/gradscaler2.py", line 351, in <genexpr>
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0] if not sum(v.item() for v in optimizer_state["found_inf_per_device"].values()):
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0]
W1020 03:27:52.390000 188995 torch/_dynamo/variables/tensor.py:776] [1/0]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 2235, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 623, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 173, in inner
flat_f_outs = f(*flat_f_args)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 859, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 5498, in run_node
result = super().run_node(n)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 275, in call_function
return target(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/functional_tensor.py", line 534, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1339, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1983, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_impls.py", line 551, in foreach_run_and_map_input_device
fake_mode.fake_tensor_converter.from_meta_and_device(
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 465, in from_meta_and_device
t.device.type == "meta"
AttributeError: 'list' object has no attribute 'device'
While executing %_amp_foreach_non_finite_check_and_unscale_ : [num_users=0] = call_function[target=torch._amp_foreach_non_finite_check_and_unscale_](args = ([%l_optimizer_param_groups_0_params_0_grad, %l_optimizer_param_groups_0_params_1_grad], %retval, %retval_1), kwargs = {})
Original traceback:
File "/mnt/clusterstorage/workspace/kevin/gradscaler2.py", line 451, in step
self.unscale_(optimizer)
File "/mnt/clusterstorage/workspace/kevin/gradscaler2.py", line 338, in unscale_
optimizer_state["found_inf_per_device"] = self._unscale_grads_(
File "/mnt/clusterstorage/workspace/kevin/gradscaler2.py", line 279, in _unscale_grads_
torch._amp_foreach_non_finite_check_and_unscale_(
```
### Minified repro
```python
from math import inf
import torch
from torch import tensor, device
import torch.fx as fx
import torch._dynamo
from torch._dynamo.testing import rand_strided
from torch._dynamo.debug_utils import run_fwd_maybe_bwd
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, L_L_optimizer_param_groups_0_params_0_grad_ : torch.Tensor, L_L_optimizer_param_groups_0_params_1_grad_ : torch.Tensor, retval, retval_1):
l_l_optimizer_param_groups_0_params_0_grad_ = L_L_optimizer_param_groups_0_params_0_grad_
l_l_optimizer_param_groups_0_params_1_grad_ = L_L_optimizer_param_groups_0_params_1_grad_
_set_grad_enabled = torch._C._set_grad_enabled(False); _set_grad_enabled = None
_amp_foreach_non_finite_check_and_unscale_ = torch._amp_foreach_non_finite_check_and_unscale_([l_l_optimizer_param_groups_0_params_0_grad_, l_l_optimizer_param_groups_0_params_1_grad_], retval, retval_1); l_l_optimizer_param_groups_0_params_0_grad_ = l_l_optimizer_param_groups_0_params_1_grad_ = retval = retval_1 = None
return (_amp_foreach_non_finite_check_and_unscale_,)
mod = Repro()
def load_args(reader):
buf0 = reader.storage('db1318cb970abdd196e5b690171477cca3ad8647', 65536, device=device(type='cuda', index=0))
reader.tensor(buf0, (16, 1024), is_leaf=True) # L_L_optimizer_param_groups_0_params_0_grad_
buf1 = reader.storage('7c518087601bc171d0842474bb14ee7425812ab7', 64, device=device(type='cuda', index=0))
reader.tensor(buf1, (16,), is_leaf=True) # L_L_optimizer_param_groups_0_params_1_grad_
buf2 = reader.storage('9069ca78e7450a285173431b3e52c5c25299e473', 4, device=device(type='cuda', index=0))
reader.tensor(buf2, (), is_leaf=True) # retval
buf3 = reader.storage('042d080d32daa72198e939a275e3d89a10eb9ec1', 4, device=device(type='cuda', index=0))
reader.tensor(buf3, (), is_leaf=True) # retval_1
load_args._version = 0
if __name__ == '__main__':
from torch._dynamo.repro.after_dynamo import run_repro
run_repro(mod, load_args, accuracy=False, command='run',
save_dir='/mnt/clusterstorage/workspace/kevin/checkpoints', autocast=False, backend='inductor')
```
### Versions
```
PyTorch version: 2.5.0-rc10
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.5.0rc10
[pip3] torchaudio==2.5.0rc4
[pip3] torchdiffeq==0.2.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0rc6
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
My final goal is to run `_amp_foreach_non_finite_check_and_unscale_` inside my own torch.compiled code.
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | good first issue,triaged,oncall: pt2,module: dynamo | low | Critical |
2,599,937,537 | ui | [bug]: CLI `add` command breaks on comments in `tailwind.config.ts` | ### Describe the bug

```
fontFamily: {
sans: [
// "var(--font-sans)",
"var(--font-geist-sans)",
...fontFamily.sans,
],
serif: ["var(--font-serif)", ...fontFamily.serif],
},
```
Became
```
fontFamily: {
sans: [\\n // "var(--font-sans)",\\n "var(--font-geist-sans)",\\n ...fontFamily.sans,\\n ],\n serif: ["var(--font-serif)", ...fontFamily.serif]
},
```
### Affected component/components
CLI
### How to reproduce
Install something with a comment at the top of the `fontFamily` entry in your Tailwind config:
```
npx shadcn@latest add sidebar-01 sidebar-02 sidebar-03 sidebar-04 sidebar-05 sidebar-06 sidebar-07 sidebar-08 sidebar-09 sidebar-10 sidebar-11 sidebar-12 sidebar-13 sidebar-14 sidebar-15
```
https://github.com/lacymorrow/juicy-stack/pull/2/files#diff-655dc9e3d0aa561e3fa164bf48bd89cb0f5da65e0a567f8ebbf9dd791a0e7f40
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Latest CLI
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,600,013,644 | svelte | Duplicate text on upcoming svelte omni site. | ### Describe the bug
This text is duplicated on https://svelte-omnisite.vercel.app/docs/svelte/lifecycle-hooks
>

### Reproduction
Go to https://svelte-omnisite.vercel.app/docs/svelte/lifecycle-hooks#onDestroy and read the docs.
### Logs
_No response_
### System Info
```shell
Not relevant
```
### Severity
annoyance | documentation | low | Critical |
2,600,018,961 | neovim | `:Man bash(1)` loads incorrect/old version of bash man page. | ### Problem
On my machine MacOS machine,
### Using the command defaults
1. running `man bash` in the terminal, BOTH outside neovim and inside neovim's builtin terminal, loads man pages for bash v5.2.
The man page can be found at `man /usr/local/share/man/man1/bash.1`.
BUT (This is the bug)
2. Running `:Man bash(1)` command in neovim loads the manpage for the older bash version v3.2, which comes builtin on MacOS machines and can be found at `man /usr/share/man/man1/bash.1` (*).
---
### Manually loading the bash man page with `:Man /path/to/man/page`:
If I try to load v3.2 manually with `:Man /usr/share/man/man1/bash.1`, it says "No manual entry found for ...", but I know the page exists because it can be loaded manually in the terminal with (*).
AND trying to load the v5.2 path manually literally loads v3.2, so it seems impossible to even load version 5.2 with `:Man`.
---
This behavior manifests both in my full config and with `nvim --clean`. I asked on Matrix and we agreed that, since it shows up under both the configured and the clean setup, it makes sense to post it as a bug. I looked at the Lua code for `:Man` and concluded that this could maybe be my first patch for Neovim, but I first want to make sure this really is a bug and not a feature.
### Steps to reproduce
Just install neovim on macos it seems.
### Expected behavior
I expect the man page for bash 5.2 to be loaded with `:Man bash`, but I am not sure.
### Nvim version (nvim -v)
NVIM v0.11.0-dev-608+g9d74dc3ac AND nvim 0.10
### Vim (not Nvim) behaves the same?
`:Man` is not an editor command in vim
### Operating system/version
Macos 15.0
### Terminal name/version
alacritty 0.13.2 (bb8ea18)
### $TERM environment variable
xterm-256color
### Installation
bob-nvim for latest, and brew for nvim 0.10 | bug,plugin,runtime,complexity:low | low | Critical |
2,600,053,324 | vscode | Cursor always stick to the end of line in Notebooks | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
I'm having a particularly weird issue where I'm unable to select specific parts of a line inside a Jupyter notebook cell; instead my cursor is placed at the end of the line, unless I modify it.
I attach a video describing the situation; note that I am clicking at various points within the line, but the cursor always sticks to the rightmost side (for most lines). I cannot select a specific part of a line, and double-clicking selects the whole line instead.
https://github.com/user-attachments/assets/1baaa8ed-6b82-4e0b-b0da-7e579ee4f985
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.94.2
- OS Version: Windows 11 NT 10.0; Win64; x64
Steps to Reproduce:
1. Have a Jupyter notebook open; in my case I'm using it through WSL.
2. Click on any recently unmodified line; the cursor should jump to the end.
I'm still not quite sure what triggers it: it often happens when reloading/opening the notebook from disk, but sometimes it appears to be triggered when cells are unrendered from view. The console log doesn't show anything, though I may not have checked every log.
From what I know, disabling extensions (except WSL) doesn't fix the issue and it can still be triggered when selecting a line in another cell.
| info-needed | low | Critical |
2,600,053,768 | tensorflow | tflite-support build is failing for elinux_aarch6 | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tflite-support 0.4.4
### Custom code
No
### OS platform and distribution
Ubuntu 22.04 Arm
### Mobile device
NXP i.mx8 plus
### Python version
3.10.12
### Bazel version
5.1.1
### GCC/compiler version
11.4.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
`pip install tflite-support` on the NXP i.mx8 plus board installs a very old version of the package (0.1.0).
I need the 0.4.4 wheel built for arm64 to make use of TextEmbedder.
I expect the bazel build to succeed on the Ubuntu 22.04 arm64 based machine that I have (AWS T4G EC2 instance) from where I plan to take the wheel and deploy to the NXP device.
The build is failing.
### Standalone code to reproduce the issue
```shell
1. Obtain the source code from https://github.com/tensorflow/tflite-support/archive/refs/tags/v0.4.4.tar.gz
2. Install bazel using https://github.com/bazelbuild/bazelisk/releases/download/v1.22.0/bazelisk-linux-arm64
3. Modify tensorflow_lite_support/tools/pip_package/rpi/build_arm_pip_package.sh to remove build for elinux_armhf, as I need only the elinux_aarch64 to be built.
4. Run tensorflow_lite_support/tools/pip_package/rpi/build_arm_pip_package.sh
It results in the errors show in in the log output. Same issue occurs when building from nightly as well as 0.4.3.
```
### Relevant log output
```shell
ERROR: /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/com_google_absl/absl/types/BUILD.bazel:154:11: Compiling absl/types/bad_optional_access.cc failed: (Exit 2): aarch64-none-linux-gnu-gcc failed: error executing command
(cd /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/execroot/org_tensorflow_lite_support && \
exec env - \
PATH=/home/ubuntu/.cache/bazelisk/downloads/sha256/a590a28608772e779efc0c29bb678cd2a150deb27a9f8c557cc1d2b131a779ef/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ubuntu/bazelisk \
PWD=/proc/self/cwd \
TF2_BEHAVIOR=1 \
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include-fixed -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/include/c++/11.3.1/ -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/libc/usr/include/ -isystem /usr/include/python3.5 -isystem /usr/include/ -MD -MF bazel-out/aarch64-opt/bin/external/com_google_absl/absl/types/_objs/bad_optional_access/bad_optional_access.pic.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/com_google_absl/absl/types/_objs/bad_optional_access/bad_optional_access.pic.o' -fPIC -iquote external/com_google_absl -iquote bazel-out/aarch64-opt/bin/external/com_google_absl -w '-std=c++17' -Wall -Wextra -Wcast-qual -Wconversion-null -Wformat-security -Wmissing-declarations -Woverlength-strings -Wpointer-arith -Wundef -Wunused-local-typedefs -Wunused-result -Wvarargs -Wvla -Wwrite-strings -DNOMINMAX -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -no-canonical-prefixes -fno-canonical-system-headers -c external/com_google_absl/absl/types/bad_optional_access.cc -o bazel-out/aarch64-opt/bin/external/com_google_absl/absl/types/_objs/bad_optional_access/bad_optional_access.pic.o)
# Configuration: 2e794a98601ad29846b443b77992a53d92a5a762ca0ee677f9a8aca3a1760abb
# Execution platform: @local_execution_config_platform//:platform
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc: 1: Syntax error: Unterminated quoted string
ERROR: /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/com_google_absl/absl/strings/BUILD.bazel:30:11: Compiling absl/strings/escaping.cc failed: (Exit 2): aarch64-none-linux-gnu-gcc failed: error executing command
(cd /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/execroot/org_tensorflow_lite_support && \
exec env - \
PATH=/home/ubuntu/.cache/bazelisk/downloads/sha256/a590a28608772e779efc0c29bb678cd2a150deb27a9f8c557cc1d2b131a779ef/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ubuntu/bazelisk \
PWD=/proc/self/cwd \
TF2_BEHAVIOR=1 \
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include-fixed -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/include/c++/11.3.1/ -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/libc/usr/include/ -isystem /usr/include/python3.5 -isystem /usr/include/ -MD -MF bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/escaping.pic.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/escaping.pic.o' -fPIC -iquote external/com_google_absl -iquote bazel-out/aarch64-opt/bin/external/com_google_absl -w '-std=c++17' -Wall -Wextra -Wcast-qual -Wconversion-null -Wformat-security -Wmissing-declarations -Woverlength-strings -Wpointer-arith -Wundef -Wunused-local-typedefs -Wunused-result -Wvarargs -Wvla -Wwrite-strings -DNOMINMAX -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -no-canonical-prefixes -fno-canonical-system-headers -c external/com_google_absl/absl/strings/escaping.cc -o bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/escaping.pic.o)
# Configuration: 2e794a98601ad29846b443b77992a53d92a5a762ca0ee677f9a8aca3a1760abb
# Execution platform: @local_execution_config_platform//:platform
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc: 1: Syntax error: Unterminated quoted string
ERROR: /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/com_google_absl/absl/strings/BUILD.bazel:30:11: Compiling absl/strings/internal/memutil.cc failed: (Exit 2): aarch64-none-linux-gnu-gcc failed: error executing command
(cd /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/execroot/org_tensorflow_lite_support && \
exec env - \
PATH=/home/ubuntu/.cache/bazelisk/downloads/sha256/a590a28608772e779efc0c29bb678cd2a150deb27a9f8c557cc1d2b131a779ef/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ubuntu/bazelisk \
PWD=/proc/self/cwd \
TF2_BEHAVIOR=1 \
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include-fixed -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/include/c++/11.3.1/ -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/libc/usr/include/ -isystem /usr/include/python3.5 -isystem /usr/include/ -MD -MF bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/memutil.pic.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/memutil.pic.o' -fPIC -iquote external/com_google_absl -iquote bazel-out/aarch64-opt/bin/external/com_google_absl -w '-std=c++17' -Wall -Wextra -Wcast-qual -Wconversion-null -Wformat-security -Wmissing-declarations -Woverlength-strings -Wpointer-arith -Wundef -Wunused-local-typedefs -Wunused-result -Wvarargs -Wvla -Wwrite-strings -DNOMINMAX -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -no-canonical-prefixes -fno-canonical-system-headers -c external/com_google_absl/absl/strings/internal/memutil.cc -o bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/memutil.pic.o)
# Configuration: 2e794a98601ad29846b443b77992a53d92a5a762ca0ee677f9a8aca3a1760abb
# Execution platform: @local_execution_config_platform//:platform
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc: 1: Syntax error: Unterminated quoted string
ERROR: /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/com_google_absl/absl/strings/BUILD.bazel:30:11: Compiling absl/strings/string_view.cc failed: (Exit 2): aarch64-none-linux-gnu-gcc failed: error executing command
(cd /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/execroot/org_tensorflow_lite_support && \
exec env - \
PATH=/home/ubuntu/.cache/bazelisk/downloads/sha256/a590a28608772e779efc0c29bb678cd2a150deb27a9f8c557cc1d2b131a779ef/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ubuntu/bazelisk \
PWD=/proc/self/cwd \
TF2_BEHAVIOR=1 \
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/lib/gcc/aarch64-none-linux-gnu/11.3.1/include-fixed -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/include/c++/11.3.1/ -isystem /home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/aarch64-none-linux-gnu/libc/usr/include/ -isystem /usr/include/python3.5 -isystem /usr/include/ -MD -MF bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/string_view.pic.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/string_view.pic.o' -fPIC -iquote external/com_google_absl -iquote bazel-out/aarch64-opt/bin/external/com_google_absl -w '-std=c++17' -Wall -Wextra -Wcast-qual -Wconversion-null -Wformat-security -Wmissing-declarations -Woverlength-strings -Wpointer-arith -Wundef -Wunused-local-typedefs -Wunused-result -Wvarargs -Wvla -Wwrite-strings -DNOMINMAX -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -no-canonical-prefixes -fno-canonical-system-headers -c external/com_google_absl/absl/strings/string_view.cc -o bazel-out/aarch64-opt/bin/external/com_google_absl/absl/strings/_objs/strings/string_view.pic.o)
# Configuration: 2e794a98601ad29846b443b77992a53d92a5a762ca0ee677f9a8aca3a1760abb
# Execution platform: @local_execution_config_platform//:platform
/home/ubuntu/.cache/bazel/_bazel_ubuntu/54ed875be5e8bbb87133512fb093e2b6/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc: 1: Syntax error: Unterminated quoted string
Target //tensorflow_lite_support/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 31.453s, Critical Path: 0.47s
INFO: 33 processes: 30 internal, 3 local.
FAILED: Build did NOT complete successfully
```
| stat:awaiting tensorflower,type:build/install,comp:lite,subtype: ubuntu/linux | low | Critical |
2,600,065,716 | deno | `shadcn-svelte` hangs without `-A` | Originally reported on Discord:
https://discord.com/channels/684898665143206084/684898665151594506/1297443793028780034

| bug,permissions,node compat | low | Minor |
2,600,160,989 | svelte | feature: dev environment nested signal indicator | ### Describe the problem
Tracking nested signals, and therefore figuring out which functions may or may not be reactive, is surprisingly not as big an issue as I originally thought before I started mucking about in Svelte 5. Nevertheless, it would be great to be able to know which functions might reference signals.
### Describe the proposed solution
It might be possible and awesome to do something similar to [does-it-throw](https://github.com/michaelangeloio/does-it-throw/tree/main) except for signals. A function could be marked as 'signally' if it refers to a signal or another 'signally' function.
The idea is for the IDE to make a best-effort guess about whether each function is reactive. Obviously, since signals are runtime animals, there will always be a grey area where this is impossible to predict, but covering at least the cases where a function definitely is or definitely isn't reactive would be pretty cool!
```javascript
let favouriteAnimal = $state("cat")
function isAnimalBest() { // <-- Definitely reactive, some kind of icon in the margin or squiggle?
return favouriteAnimal == "goat" ? "๐" : "๐ซ"
}
```
### Importance
nice to have | feature request | low | Minor |
2,600,168,332 | deno | NestJS cannot resolve ModuleRef dependency after TerminusModule is added | ## Problem
- NestJS cannot resolve ModuleRef dependency after TerminusModule is added
- ./src/app.module.ts
```
import { Module } from '@nestjs/common';
import { TerminusModule } from '@nestjs/terminus';
@Module({
imports: [TerminusModule]
})
export class AppModule {}
```
## How To Reproduce
1. Clone the minimal reproducible project example
```
git clone https://github.com/lucassusanto/nestjs-deno-terminus-issue
```
2. Install the dependencies
```
deno install
```
3. Run the application
```
deno run --allow-env --allow-net --allow-read src/main.ts
```
4. NestJS won't start
```
[Nest] 13408 - 10/20/2024, 3:06:53โฏPM ERROR [ExceptionHandler] Nest can't resolve dependencies of the TypeOrmHealthIndicator (?). Please make sure that the argument ModuleRef at index [0] is available in the TerminusModule context.
Potential solutions:
- Is TerminusModule a valid NestJS module?
- If ModuleRef is a provider, is it part of the current TerminusModule?
- If ModuleRef is exported from a separate @Module, is that module imported within TerminusModule?
@Module({
imports: [ /* the Module containing ModuleRef */ ]
})
```
## Expected Result
- NestJS should run without error
```
[Nest] 13531 - 10/20/2024, 3:07:54โฏPM LOG [NestApplication] Nest application successfully started +7ms
```
## Additional Context
- Deno version
```
deno 2.0.2 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.13-rusty
typescript 5.6.2
```
| bug,node compat | low | Critical |
2,600,170,419 | opencv | Open CV cv::resize memory leak. (4.10.0) | ### System Information
OpenCV version: 4.10.0
Operating System / Platform: Ubuntu 20.04
Compiler & compiler version: g++ (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Java Version: openjdk 21.0.4 2024-07-16 LTS
### Detailed description
```c++
#include <opencv2/opencv.hpp>
#include <sys/resource.h>
#include <cstdio>
void print_memory_usage() {
struct rusage usage;
getrusage(RUSAGE_SELF, &usage);
printf("Memory usage: %ld KB\n", usage.ru_maxrss);
}
extern "C" {
void image_resize(const char* input_path, const char* output_path, int new_width, int new_height) {
// Read the image from file
printf("OpenCV version: %d.%d.%d\n", CV_VERSION_MAJOR, CV_VERSION_MINOR, CV_VERSION_REVISION);
cv::Mat input_image = cv::imread(input_path);
if (input_image.empty()) {
printf("Could not read the image: %s\n", input_path);
return;
}
// Create an empty matrix to store the resized image
cv::Mat resized_image;
// Resize the image
cv::resize(input_image, resized_image, cv::Size(new_width, new_height));
// Write the resized image to file
cv::imwrite(output_path, resized_image);
resized_image.release();
input_image.release();
print_memory_usage();
}
}
```
I have a Java application where I load this library and perform resize operations in an infinite loop on a PNG image, but I notice that after ~1 million resize operations my container memory climbs to ~99% and the pod eventually crashes, even though the JVM heap never goes beyond 512M.

Initially I found this bug via the JavaCV library: https://github.com/bytedeco/javacv/issues/2283
I see the memory leak both with the JavaCV library and with the native library execution above, so the root cause seems to be the OpenCV library itself.
### Steps to reproduce
```cpp
#include <opencv2/opencv.hpp>
#include <sys/resource.h>
#include <cstdio>
void print_memory_usage() {
struct rusage usage;
getrusage(RUSAGE_SELF, &usage);
printf("Memory usage: %ld KB\n", usage.ru_maxrss);
}
extern "C" {
void image_resize(const char* input_path, const char* output_path, int new_width, int new_height) {
// Read the image from file
printf("OpenCV version: %d.%d.%d\n", CV_VERSION_MAJOR, CV_VERSION_MINOR, CV_VERSION_REVISION);
cv::Mat input_image = cv::imread(input_path);
if (input_image.empty()) {
printf("Could not read the image: %s\n", input_path);
return;
}
// Create an empty matrix to store the resized image
cv::Mat resized_image;
// Resize the image
cv::resize(input_image, resized_image, cv::Size(new_width, new_height));
// Write the resized image to file
cv::imwrite(output_path, resized_image);
resized_image.release();
input_image.release();
print_memory_usage();
}
}
```

```sh
g++ -shared -o libs/libimage_resize.so -fPIC image_resize.cpp -I/usr/local/include/opencv4 -L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_imgcodecs
```
```java
package org.example;
// Java 22 Panama Code to Call the C++ Image Resize Function
import java.io.IOException;
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.SymbolLookup;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
public class panama {
public void resizeImage() throws Throwable {
// Lookup the resize_image function from the shared library
long start = System.currentTimeMillis();
Linker linker = Linker.nativeLinker();
SymbolLookup lookup = SymbolLookup.loaderLookup();
Path outputMediaPath = Files.createTempFile("test", ".png");
MethodHandle resizeHandle = linker.downcallHandle(
lookup.find("image_resize").orElseThrow(),
FunctionDescriptor.ofVoid(
ValueLayout.ADDRESS, // input_path (const char*)
ValueLayout.ADDRESS, // output_path (const char*)
ValueLayout.JAVA_INT, // new_width (int)
ValueLayout.JAVA_INT // new_height (int)
)
);
try (Arena arena = Arena.ofConfined()) {
MemorySegment inputPathSegment = arena.allocateUtf8String("media.png");
MemorySegment outputPathSegment = arena.allocateUtf8String(String.valueOf(outputMediaPath));
Integer[] h = {300, 450};
Integer[] w = {400, 500};
Random random = new Random();
Integer newWidth = h[random.nextInt(2)];
Integer newHeight = w[random.nextInt(2)];
// Call the native function using Panama's MethodHandle
resizeHandle.invoke(
inputPathSegment,
outputPathSegment,
newWidth,
newHeight
);
Files.deleteIfExists(outputMediaPath);
long end = System.currentTimeMillis();
System.out.println("time: "+(end-start));
}
}
public static void main(String[] args) {
System.out.println(System.getProperty("java.library.path"));
System.load("/home/azureuser/opencv/opencv-so/libs/libimage_resize.so");
System.out.println("loaded");
int i = 0;
int j=0;
AtomicLong count = new AtomicLong();
ExecutorService executor = new ThreadPoolExecutor(
15, // Core pool size (number of threads)
20, // Maximum pool size (limit number of threads)
60L, TimeUnit.SECONDS, // Time to keep idle threads alive
new ArrayBlockingQueue<>(1000), // Bounded task queue of size 500
new ThreadPoolExecutor.CallerRunsPolicy() // Handler when the queue is full
);// Only allow 1000 tasks to be
panama app = new panama();
for(i=0; i < 10000 ; i++) {
for(j=0; j<10000; j++) {
executor.submit(() -> {
try {
app.resizeImage();
} catch (Throwable e) {
throw new RuntimeException(e);
}
// Calculate used memory
Runtime runtime = Runtime.getRuntime();
long usedMemory = (runtime.totalMemory() - runtime.freeMemory()) /(1024*1024);
// Total memory currently available to the JVM (committed memory)
long committedMemory = runtime.totalMemory() / (1024 * 1024);
// Maximum memory the JVM can use (based on the -Xmx setting)
long maxMemory = runtime.maxMemory() / (1024* 1024);
System.out.println("Used: "+usedMemory+", Commited: "+ committedMemory+", Max: "+ maxMemory);
});
}
}
executor.shutdown();
}
}
```
```Dockerfile
FROM eclipse-temurin:21
# Switch to root user for installing libraries
USER root
RUN mkdir -p /path/to/image/folder
RUN mkdir -p /home/azureuser/opencv/opencv-so/libs/
COPY libimage_resize.so /home/azureuser/opencv/opencv-so/libs/
COPY media.png /home/azureuser/opencv/opencv-so/libs/media.png
# Create necessary directories and install dependencies in a single step
# libgtk required for openCV library.
RUN mkdir -p /opt/app /appl/media \
&& apt-get update -y \
&& apt install libjemalloc-dev -y \
&& apt-get install libopencv-dev -y \
&& apt-get install -y libgtk2.0-0 \
&& apt install valgrind -y
RUN apt install build-essential cmake wget unzip git -y
WORKDIR /opencv-build
RUN git clone https://github.com/opencv/opencv.git
WORKDIR /opencv-build/opencv/build
RUN cmake ../
RUN make
RUN make install
COPY startup.sh /opt/app/startup.sh
RUN chmod +x /opt/app/startup.sh
COPY app.jar /opt/app/app-jcv-load.jar
EXPOSE 8080
# Set the working directory
WORKDIR /opt/app
# Command to run the startup script
CMD ["./startup.sh"]
```
* Startup.sh
```sh
#!/bin/sh
#export MALLOC_CONF="prof:true,prof_leak:true,lg_prof_interval:30,lg_prof_sample:17,prof_prefix:/opt/app/prof/"
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
java -Xms512M -Xmx1024M \
#-Dorg.bytedeco.javacpp.maxBytes=1000M \
#-Dorg.bytedeco.javacpp.maxPhysicalBytes=2000M \
#-Dorg.bytedeco.javacpp.nopointergc=true \
--enable-preview -jar app-jcv-load.jar
```

### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: imgproc,incomplete,needs investigation | low | Critical |
2,600,171,123 | ui | [bug]: Mobile sidebar doesn't play nice with Clerk popup | ### Describe the bug
<img width="427" alt="image" src="https://github.com/user-attachments/assets/77ae160b-3bde-46cc-a8d3-0a20589e7edf">
I place the Clerk signed-in user button on my sidebar, and this button is impossible to click on any screen where the sidebar is an overlay.

This appears to be because the sidebar's interaction area overrides the clerk button.
```jsx
<Sidebar variant="floating" collapsible="offcanvas">
<SidebarHeader>
<div className="flex justify-between items-center p-4">
Workspace
<SignedIn>
<div className="relative z-1">
<UserButton
appearance={{
baseTheme: darkMode ? dark : undefined,
}}
/>
</div>
</SignedIn>
</div>
</SidebarHeader>
</Sidebar>
```
I've tried using Tailwind to adjust the relative z-index, but no luck so far.
### Affected component/components
Sidebar
### How to reproduce
1. Add Sidebar
2. Add Clerk signed in button to sidebar.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Chrome 129
macOS 15
Vite 5.4.2
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,600,182,164 | transformers | Access to model outputs inside LogitProcessor | ### Feature request

The `LogitsProcessor` `__call__` method currently has access to the `input_ids` and the generated logits. It would be really helpful if it also had access to the model output for that iteration.
### Motivation
I am trying to implement something similar to [Get To The Point: Summarization with Pointer-Generator Networks](https://arxiv.org/abs/1704.04368) along with transformers, especially Llama models. I am required to add the attention output to the model's generated logits for the pointing mechanism, but I do not have access to the attention values inside the logits processor. As seen in the picture, if the standard model output object could be passed to the logits processor, the user could extract the required details from it and use them to further process the generated logits.

### Your contribution
I can implement the required changed and create a PR if allowed. | Feature request,Generation | low | Minor |
2,600,184,050 | godot | ENet server memory leak when clients disconnect abruptly | ### Tested versions
Reproducible in Godot 4.4 dev3, Godot 4.3 stable
### System information
Godot v4.4.dev3 - Windows 10.0.19045 - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6109) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
I am building a server app for my multiplayer game using an `ENetConnection` with DTLS, and I think I found a memory leak: when clients are closed abruptly (their process killed), they disconnect on the server side, but the memory they used is not freed.
In addition, after the server app reaches ~500MB of leaked memory, I observed noticeable CPU usage, even when there are apparently no clients connected.
The attached MRP was tested on 3 PCs: locally, over LAN, and over Internet, in both debug and release builds.
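Not a fix for the leak itself, but worth noting while testing: abrupt disconnects are only noticed after ENet's peer timeout expires, so tightening per-peer timeouts may change when (and whether) the server reclaims those resources. A hedged sketch (the timeout values are illustrative, not recommendations):

```gdscript
# Server side, after a peer connects (hypothetical handler name).
func _on_peer_connected(id: int) -> void:
	var peer: ENetPacketPeer = multiplayer.multiplayer_peer.get_peer(id)
	# set_timeout(limit_ms, minimum_ms, maximum_ms) -- values are illustrative.
	peer.set_timeout(1000, 3000, 5000)
```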
### Steps to reproduce
Testing with the attached MRP:
1. Start the Client and Server apps.
2. Press the button in the Client to add 300 clients. This will add ~75MB to the Server app memory.
3. Close the Client window normally (from X button or Alt+F4). The Server app memory will decrease and the clients will disconnect.
4. Repeat 1 & 2, but this time close the Client app by killing its process from Task Manager (or using the Stop button in the editor). The clients on the Server app will disconnect as usual, but the memory will not decrease.
### Minimal reproduction project (MRP)
[ENet Client Server.zip](https://github.com/user-attachments/files/17449578/ENet.Client.Server.zip)
| bug,topic:network | low | Critical |
2,600,188,744 | ui | [feat]: unable to verify the first certificate | ### Feature description

### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,600,191,007 | go | x/crypto/ssh: Marshal silently ignores fields with unsupported types | ### Go version
go version go1.23.1 linux/amd64
### Output of `go env` in your module/workspace:
```shell
$ go env
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/daniil_stepanenko/.cache/go-build'
GOENV='/home/daniil_stepanenko/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/daniil_stepanenko/go/pkg/mod'
GOOS='linux'
GOPATH='/home/daniil_stepanenko/go'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/daniil_stepanenko/go/go1.23.1'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/daniil_stepanenko/go/go1.23.1/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/daniil_stepanenko/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/daniil_stepanenko/work/upstream/gomplate/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2364282043=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
https://go.dev/play/p/D6fgIrq7PpV
### What did you see happen?
Marshal succeeds, but its result cannot be used. Unmarshal returns an error.
```
prog.go:24: ssh: unmarshal error for field E of type PublicKey: unsupported type: int
prog.go:25: ssh: short read
prog.go:36: ssh: unmarshal error for field E of type PublicKey: unsupported type: int
prog.go:37: ssh: short read
```
### What did you expect to see?
ssh.Marshal + ssh.Unmarshal should round-trip rsa.PublicKey (and keys of other algorithms) successfully.
Or, at the very least, Marshal should panic when it encounters an exported field of an unsupported type that lacks the `ssh:"skip"` tag. | NeedsInvestigation | low | Critical |
2,600,195,877 | deno | node:http IncomingMessage and ServerResponse don't emit close event | Version: Deno 2.0.1
`node:http`'s `IncomingMessage` and `ServerResponse` streams don't emit the `close` event.
Reproduction:
```js
import { createServer } from "node:http";
createServer((req, res) => {
req.once("close", () => {
console.log("Request closed");
});
res.once("close", () => {
console.log("Response closed");
});
}).listen(3000);
```
Run the above script with `deno run -A close.js`, and then, in a separate terminal, run the following:
```
curl --request GET --no-buffer http://localhost:3000
```
Then cancel the request by pressing Ctrl+C. Node will output the expected messages while Deno won't output anything.
This is important because it can cause resource leaks when using server-sent events and similar long polling responses. | bug,node compat,node:http | low | Minor |
2,600,202,975 | deno | Installation fails on Windows | Version: Deno 2.0.2
On Windows 11, I'm running the install command using PowerShell:
`irm https://deno.land/install.ps1 | iex`
The command fails, giving this error:
```
curl: (35) schannel: next InitializeSecurityContext failed: CRYPT_E_NO_REVOCATION_CHECK (0x80092012)
+ CategoryInfo : ObjectNotFound: (C:\Users\MyUser\.deno\bin\deno.zip:String) [Remove-Item], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.RemoveItemCommand
```
| windows,needs info | low | Critical |
2,600,217,931 | deno | `deno add npm:@aws-cdk/aws-ecs` appears to hang forever | Version: Deno 2.0.0-2.0.2
Based on the debug logs, it appears to be some kind of issues with deduplicating dependencies as it does resolution. | bug,install | low | Critical |
2,600,218,633 | ollama | New professional model for analyzing images of human organs | A medical imaging vision model that shows excellent results in image analysis: https://developer.nvidia.com/blog/ai-medical-imagery-model-offers-fast-cost-efficient-expert-analysis/
Here is the model itself: https://github.com/cozygene/SLIViT | model request | low | Minor |
2,600,254,756 | pytorch | Internal assert triggered at ivalue_inl.h:1966, Expected IntList but got Int | ### 🐛 Describe the bug
When using `torch.jit.script(model, dummy_input)`, I got the error
```
Traceback (most recent call last):
File "/data1/xuyuheng/codes/onnx/sr/5_export_onnx.py", line 29, in <module>
scripted = torch.jit.script(model_G, dummy_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 1432, in script
return _script_impl(
^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 1146, in _script_impl
return torch.jit._recursive.create_script_module(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 559, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 632, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 649, in _construct
init_fn(script_module)
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 608, in init_fn
scripted = create_script_module_impl(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 632, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 649, in _construct
init_fn(script_module)
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 608, in init_fn
scripted = create_script_module_impl(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 632, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 649, in _construct
init_fn(script_module)
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 608, in init_fn
scripted = create_script_module_impl(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 632, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 649, in _construct
init_fn(script_module)
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 608, in init_fn
scripted = create_script_module_impl(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 636, in create_script_module_impl
create_methods_and_properties_from_stubs(
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 468, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 1037, in compile_unbound_method
create_methods_and_properties_from_stubs(concrete_type, (stub,), ())
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 468, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 1037, in compile_unbound_method
create_methods_and_properties_from_stubs(concrete_type, (stub,), ())
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 468, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_recursive.py", line 1004, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 1432, in script
return _script_impl(
^^^^^^^^^^^^^
File "/home/xuyuheng/miniconda3/envs/pytorch/lib/python3.12/site-packages/torch/jit/_script.py", line 1204, in _script_impl
fn = torch._C._jit_script_compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: isIntList() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1720538455419/work/aten/src/ATen/core/ivalue_inl.h":1966, please report a bug to PyTorch. Expected IntList but got Int
```
I'm sorry I can't provide an example currently, but by commenting and uncommenting code I found that a line
```python3
x = F.unfold(x, K)
```
is causing the problem; if I comment it out, I no longer hit the PyTorch internal assertion. However, that line by itself in a minimal model does not trigger the fault.
EDIT: Now I have an example, see below
### Versions
```shell
python -m torch.utils.collect_env
<frozen runpy>:128: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (GCC) 13.3.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.2 | packaged by Anaconda, Inc. | (main, Feb 27 2024, 17:35:02) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.0.76
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 525.89.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 900.000
CPU max MHz: 2401.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnxruntime-gpu==1.17.1
[pip3] pytorch-lightning==2.3.3
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.8.0 h6a678d5_0 defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344 defaults
[conda] mkl-service 2.4.0 py312h5eee18b_1 defaults
[conda] mkl_fft 1.3.8 py312h5eee18b_0 defaults
[conda] mkl_random 1.2.4 py312hdb19cb5_0 defaults
[conda] numpy 1.26.4 py312hc5e2394_0 defaults
[conda] numpy-base 1.26.4 py312h0da6c21_0 defaults
[conda] pytorch 2.4.0 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.3.3 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 py312_cu121 pytorch
[conda] torchmetrics 1.4.0.post0 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 3.0.0 py312 pytorch
[conda] torchvision 0.19.0 py312_cu121 pytorch
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @malfet | oncall: jit,module: onnx,module: error checking,triaged | low | Critical |
2,600,265,401 | godot | Corruption of files during system power loss or system crash | ### Tested versions
Reproducible in 4.3 stable and 4.4 at HEAD. Looking at the source history, I believe this bug has existed since at least 2014.
### System information
Windows 11 and Ubuntu 24
### Issue description
Godot's `FileAccess` is used both to save resources in the editor and by game developers to save game state. To reduce the risk of files being left in an intermediate state in the event of an error, `FileAccess` can write to a temporary file and then move that file on top of the existing file. This is the default behavior in the editor and in any game where `OS.set_use_file_access_save_and_swap(true)` is used. While this is good enough to protect against errors and crashes in Godot itself, it does not provide an atomic operation that protects against power loss or a crash of the operating system.
Before renaming the temporary file, it's essential to ensure that the newly written contents have actually been committed to the underlying storage and aren't still sitting in the OS buffers. Otherwise, the effects of the rename operation may be written to disk before the contents of the file. Power loss or OS crash during this state could leave a partially written file in place of the original, with no direct way to recover the original.
On POSIX systems, commit to underlying storage can be accomplished with the `fsync()` system call. When called with a file descriptor, `fsync()` will block until all outstanding writes associated with the file descriptor have been acknowledged by the underlying storage device as being stable against power loss. On Windows, the equivalent of `fsync()` is `FlushFileBuffers()`.
Note that `fflush()` is distinct from `fsync()`. The former operates between the process and the operating system, and the latter between the operating system and the storage device.
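The safe sequence described above (write to a temp file, sync it, then rename it over the original) can be sketched in a few lines of Python. This is an illustrative sketch of the general pattern, not Godot code; the function name is mine:

```python
import os
import tempfile

def save_atomic(path: str, data: bytes) -> None:
    """Replace `path` with `data` so that a crash or power loss at any
    point leaves either the old contents or the new contents, never a mix."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        # Commit the new contents to stable storage *before* the rename.
        # Without this fsync, the rename can reach the disk first, and a
        # power loss then leaves a truncated file in place of the original.
        os.fsync(fd)
    finally:
        os.close(fd)
    os.replace(tmp, path)  # atomic rename over the old file
```

On Windows, Python's `os.fsync()` is implemented on top of `_commit()`, which serves the same role as `FlushFileBuffers()`; `os.replace()` performs an atomic rename on both platforms.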
Examples of other libraries and applications properly using fsync() after writing to a temporary file but before renaming it:
* [GLib](https://docs.gtk.org/glib/func.file_set_contents_full.html)
* [Git](https://github.com/git/git/blob/4590f2e9412378c61eac95966709c78766d326ba/refs/files-backend.c#L1874)
* [Neovim](https://github.com/neovim/neovim/blob/61e9137394fc5229e582a64316c2ffef55d8d7af/src/nvim/os/fileio.c#L165)
* [Qt](https://codebrowser.dev/qt5/qtbase/src/corelib/io/qsavefile.cpp.html#338)
I noticed this problem while scrolling on Reddit. [One post](https://old.reddit.com/r/godot/comments/1em3wiy/psa_be_warned_sudden_power_loss_can_irreparably/) told the story of how a project was ruined because their computer lost power while saving. Immediately upon reading, it struck me as a classic case of failing to sync the filesystem when attempting to do atomic writes. Looking at `FileAccess`, my suspicions were confirmed. With a trivial search, I was able to find [another post](https://old.reddit.com/r/godot/comments/z7vr0o/godot_4_weird_data_loss_after_power_cut/) where the exact same thing happened.
The responses from other users to these posts is generally to admonish the poster to use source control. While using source control is important, they are missing that this issue isn't specific to the editor; it can corrupt game save files as well.
### Steps to reproduce
#### VM Setup
Reproduction requires simulating a system failure. I did this with VMs in VirtualBox and a USB flash drive. Using a thumb drive slows down disk operations compared to my high speed internal SSD and makes it much easier to hit the race condition between writing the file contents and renaming the file. With VirtualBox, it's easy to pass a single USB thumb drive through to the guest operating system.
For Ubuntu, I formatted the drive as ext4.
For Windows, I formatted the thumb drive with NTFS. Additionally, I had to get the Windows guest operating system to treat the thumb drive like an internal hard disk rather than an external device that could be removed at any time, meaning writes should be cached by the operating system. This is done by opening Device Manager in the Windows guest, identifying the correct USB drive under _Disk Drives_, right clicking it and selecting **Properties**, going to the policy tab, and changing the _Removal Policy_ from _Quick removal_ to _Better performance_.
Ubuntu inside of VirtualBox sometimes hung on boot after the hypervisor reset the guest. Resetting the guest again was effective in getting a good reboot.
I was unable to get 3D acceleration working in the Windows VM guest, so Godot was unable to initialize OpenGL in the MRP. To work around this, I hacked a simple command line interface into the MRP that can be used in Godot's headless mode. Simply run `godot --headless` followed by one or more of the commands listed below.
#### MRP
The reproduction project provides a simple GUI that pseudo-randomly generates two different 100 MB files: file A and file B. Either file A or file B can then be copied to a third file: file C. Finally, file C can be compared against either file A or file B. Files A, B, and C are all placed in the project directory.
1. Place the MRP on the thumb drive and mount it in the guest.
2. Launch the project and generate both files A and B. In headless mode, use the `gen_a` and `gen_b` commands.
3. Push the button to copy file A to file C. For headless, use `copy_a`.
4. On Ubuntu, run the `fsync` command. On Windows, wait 30 seconds.
5. Push the button to copy file B to file C. For headless, use `copy_b`.
6. When the interface indicates that the copy is complete, wait ~4 seconds. The exact time to wait will depend on the system and may take some tuning.
7. When the designated waiting time has elapsed, **immediately** have the hypervisor reset the guest. In VirtualBox, this is done by pressing the <kbd>Host</kbd>+<kbd>R</kbd> key combination. It may be necessary to disable a warning dialog.
8. After rebooting, run the MRP again.
9. Compare file C to both file A and file B. File C should match _either_ file A or file B. If file C matches neither, then it has been corrupted. For headless, the comparison can be done with `cmp_a` and `cmp_b`.
10. If no corruption is found, repeat the process by copying whichever file doesn't match file C. Go to step 6.
On Ubuntu, I'm able to reproduce the corruption in file C in about 1 out of 4 tries. On Windows, I can reproduce it on almost every try.
### Minimal reproduction project (MRP)
[godot-nosync-repro.zip](https://github.com/user-attachments/files/17449774/godot-nosync-repro.zip)
| topic:core,needs testing | low | Critical |
2,600,303,932 | neovim | :substitute \= has unexpected behavior | ### Problem
When using \=expression in the substitute command, the expression runs before the user hits enter, causing unexpected behavior.
https://github.com/user-attachments/assets/46384d0d-78ed-4f25-84a9-4a3eccfd7d1d
### Steps to reproduce
Type these keys:
```
nvim --clean
aa<esc>
:%s/a/\=setreg("a",submatch(0))<esc>
:reg
```
The output is:
```
Type Name Content
c "a a
c "* :%s/a/\=setreg("a",submatch(0))
c "+ :%s/a/\=setreg("a",submatch(0))
c ". a
```
Note the "a is filled by substitute command, even the user doesn't actually run it.
### Expected behavior
Side effects from the expression should not happen during the live preview; they should only occur when the substitution is actually executed.
### Nvim version (nvim -v)
NVIM v0.10.2 Build type: Release LuaJIT 2.1.1713484068
### Vim (not Nvim) behaves the same?
No, since Vim doesn't show substitution results in real time.
### Operating system/version
6.11.3-arch1-1
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
bob install stable | bug,inccommand | low | Minor |
2,600,307,204 | go | x/tools/gopls: handle undefined struct type in `undeclaredname` autofix code action | ### gopls version
golang.org/x/tools/gopls v0.16.2
### go env
```shell
not relevant
```
### What did you do?
I ran an autofix code action for an undefined struct.

### What did you see happen?
`gopls` generated a new variable instead of a struct type definition.
This is nonsensical from a user's point of view, and it also doesn't compile.

### What did you expect to see?
A struct type definition above the function body.
<img width="966" alt="image" src="https://github.com/user-attachments/assets/ace05132-91ca-4eaa-a8f2-9bd071c6919e">
### Editor and settings
_No response_
### Logs
_No response_ | FeatureRequest,gopls,Tools | low | Major |
2,600,345,334 | yt-dlp | Broken site: NHL.com | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Region
Finland/Europe
### Provide a description that is worded well enough to be understood
Trying to download from nhl.com throws the error "Unsupported URL".
There is one open issue related to NHL.com (https://github.com/yt-dlp/yt-dlp/issues/1933), yet that seems to be related to failing with a shortened link. The site error I am reporting refers to the full URL, hence I am opening a new issue.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.nhl.com/video/min-cbj-rossi-scores-goal-against-daniil-tarasov-6363502437112']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.10.19.232833 from yt-dlp/yt-dlp-nightly-builds [679c68240] (pip)
[debug] Python 3.12.6 (CPython x86_64 64bit) - Linux-6.11.2-amd64-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2-3 (setts), ffprobe 7.0.2-3
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.19.232833 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.19.232833 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.nhl.com/video/min-cbj-rossi-scores-goal-against-daniil-tarasov-6363502437112
[generic] min-cbj-rossi-scores-goal-against-daniil-tarasov-6363502437112: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] min-cbj-rossi-scores-goal-against-daniil-tarasov-6363502437112: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.nhl.com/video/min-cbj-rossi-scores-goal-against-daniil-tarasov-6363502437112
Traceback (most recent call last):
File "/home/user/.python3_venv/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.python3_venv/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/home/user/.python3_venv/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 741, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.python3_venv/lib/python3.12/site-packages/yt_dlp/extractor/generic.py", line 2533, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.nhl.com/video/min-cbj-rossi-scores-goal-against-daniil-tarasov-6363502437112
```
| site-bug,triage | low | Critical |
2,600,372,452 | yt-dlp | Add support for Anime Onegai | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Latin America
### Example URLs
[Anime Onegai (Spanish)](https://www.animeonegai.com/es/landing)
[Anime Onegai (Portuguese)](https://www.animeonegai.com/pt/landing)
### Provide a description that is worded well enough to be understood
A Latin American paid anime streaming site.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.animeonegai.com/pt/watch/vVqDkyopdYybzc8PB?serie=true']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.07 from yt-dlp/yt-dlp [1a176d874] (pip)
[debug] Python 3.12.1 (CPython x86_64 64bit) - macOS-11.7.10-x86_64-i386-64bit (OpenSSL 3.0.11 19 Sep 2023)
[debug] exe versions: ffmpeg N-109963-g912ac82a3c-tessus (setts), ffprobe 5.0.1-tessus, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.1, urllib3-2.2.1, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.07 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.07 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.animeonegai.com/pt/watch/vVqDkyopdYybzc8PB?serie=true
[generic] vVqDkyopdYybzc8PB?serie=true: Downloading webpage
ERROR: [generic] Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 741, in extract
    ie_result = self._real_extract(url)
                ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/extractor/generic.py", line 2384, in _real_extract
    full_response = self._request_webpage(url, video_id, headers=filter_dict({
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 910, in _request_webpage
    raise ExtractorError(errmsg, cause=err)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 897, in _request_webpage
    return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 4172, in urlopen
    return self._request_director.send(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 117, in send
    response = handler.send(request)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
    return func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 340, in send
    return self._send(request)
           ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/yt_dlp/networking/_requests.py", line 365, in _send
    raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
| site-request,triage | low | Critical |
2,600,385,328 | godot | writing to `POSITION` gives wrong `VIEW` in fragment shader | ### Tested versions
stable 4.3
### System information
Kubuntu 22.04
### Issue description
If you write to `POSITION` to make a full-screen post-processor, the `VIEW` vector becomes invalid and you have to calculate it manually.
What I observe is that once I write to `POSITION`, the `VIEW` variable becomes affected by the camera's translation.
### Steps to reproduce
Write to `POSITION` and observe `VIEW` in the fragment shader.
### Minimal reproduction project (MRP)
[viewbugreport.zip](https://github.com/user-attachments/files/17450091/viewbugreport.zip)
| enhancement,discussion,documentation,topic:shaders | low | Critical |
2,600,401,472 | pytorch | batch_first support for multi_head_attention_forward | ### ๐ The feature, motivation and pitch
I want to pass the batch dimension (B/N) first. The `torch.nn.MultiheadAttention` module supports this via the `batch_first` argument.
However, the functional version of multi-head attention, i.e. `multi_head_attention_forward()` in [pytorch/torch/nn/functional.py](https://github.com/pytorch/pytorch/tree/main), does not take that argument.
I know that I can transpose the dimensions to make it work, like so:
```python
t_first_x = x.permute(1, 0, 2)  # (B, N, E) -> (N, B, E)
torchfnc = torch.nn.functional.multi_head_attention_forward(
    t_first_x, t_first_x, t_first_x,
    ...)
```
but I thought it would be a simple addition to the function to make it consistent with the non-functional one.
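For concreteness, here is a self-contained sketch of that workaround; the tensor sizes and randomly initialized projection weights are illustrative assumptions, not from a real model:

```python
import torch
import torch.nn.functional as F

B, S, E, H = 2, 5, 8, 2   # batch, sequence length, embed dim, heads (illustrative)
x = torch.randn(B, S, E)  # batch-first input

# Packed q/k/v projection and output projection (random, for illustration only).
in_proj_weight = torch.randn(3 * E, E) * 0.1
in_proj_bias = torch.zeros(3 * E)
out_proj_weight = torch.randn(E, E) * 0.1
out_proj_bias = torch.zeros(E)

# The functional API expects (S, B, E), so transpose on the way in and out.
t_first_x = x.transpose(0, 1)
attn_out, _ = F.multi_head_attention_forward(
    t_first_x, t_first_x, t_first_x,
    embed_dim_to_check=E, num_heads=H,
    in_proj_weight=in_proj_weight, in_proj_bias=in_proj_bias,
    bias_k=None, bias_v=None, add_zero_attn=False,
    dropout_p=0.0, out_proj_weight=out_proj_weight, out_proj_bias=out_proj_bias,
    training=False, need_weights=False)
batch_first_out = attn_out.transpose(0, 1)  # back to (B, S, E)
print(batch_first_out.shape)  # torch.Size([2, 5, 8])
```

A `batch_first` argument on the functional itself would fold these two transposes into the call, the way the module version already does.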
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg | module: nn,triaged,module: sdpa | low | Minor |
2,600,406,001 | kubernetes | OpenAPI Markdown transformation doesn't handle code blocks well | We should [render code blocks properly in OpenAPI](https://github.com/kubernetes/kube-openapi/pull/482).
Right now we don't.
---
There is possibly an argument for not using code blocks in the API reference (for example, if we want equations, there may be better options), but equally we may want labelled code blocks where we put e.g. MathML or LaTeX inside a block and have a downstream renderer do something clever.
For now, let's find a basic fix.
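To make the gap concrete, here is a naive, illustrative sketch (not the actual kube-openapi code) of the kind of transformation a Markdown-to-HTML pass needs to apply to fenced code blocks in field descriptions:

```python
import re


def render_code_blocks(markdown: str) -> str:
    """Illustrative only: replace fenced code blocks with <pre><code> HTML,
    escaping the contents so they render verbatim."""

    def repl(match: re.Match) -> str:
        body = match.group(2)
        escaped = (body.replace("&", "&amp;")
                       .replace("<", "&lt;")
                       .replace(">", "&gt;"))
        return f"<pre><code>{escaped}</code></pre>"

    # ```lang\n...\n``` -> <pre><code>...</code></pre>
    return re.sub(r"```(\w*)\n(.*?)```", repl, markdown, flags=re.DOTALL)


desc = "Example:\n```yaml\nkind: Pod\n```\n"
print(render_code_blocks(desc))
```

A real fix should go through a proper Markdown parser rather than a regex, but the point stands: today the fence markers leak through to the rendered reference instead of becoming a code block.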
/wg api-expression | kind/bug,lifecycle/stale,wg/api-expression,needs-triage | low | Minor |
2,600,434,768 | flutter | [ios] Incorrect caps lock status while pressing other keys | ### Steps to reproduce
I can only test on the simulator because I don't have an Apple developer account.
1. Start an iOS simulator on macOS
2. Run the sample code
3. Press `CapsLock` and some other letter keys
### Expected results
CapsLock state matches the physical keyboard connected to macOS.
### Actual results
The state is incorrect.
Sometimes a single press of `CapsLock` triggers the `CapsLock` event twice, and sometimes it does not trigger the event at all.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Keyboard Event Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: const KeyboardEventPage(),
    );
  }
}

class KeyboardEventPage extends StatefulWidget {
  const KeyboardEventPage({super.key});

  @override
  _KeyboardEventPageState createState() => _KeyboardEventPageState();
}

class _KeyboardEventPageState extends State<KeyboardEventPage> {
  int _idx = 0;
  final List<String> _lastKeyEvents = [];
  final FocusNode _focusNode = FocusNode();

  @override
  Widget build(BuildContext context) {
    final child = Scaffold(
      appBar: AppBar(
        title: const Text('Keyboard Event Demo'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'Press any key:',
              style: TextStyle(fontSize: 20),
            ),
            const SizedBox(height: 20),
            for (final keyEvent in _lastKeyEvents)
              Text(
                keyEvent,
                style: const TextStyle(fontSize: 16),
              ),
          ],
        ),
      ),
    );
    return FocusScope(
      autofocus: true,
      child: Focus(
        autofocus: true,
        canRequestFocus: true,
        focusNode: _focusNode,
        onKeyEvent: (node, event) {
          if (event is KeyDownEvent) {
            setState(() {
              var capsLock = false;
              if (HardwareKeyboard.instance.lockModesEnabled
                  .contains(KeyboardLockMode.capsLock)) {
                capsLock = true;
              }
              _lastKeyEvents.add(
                  '$_idx: CapsLock: $capsLock, event: ${event.logicalKey.keyLabel}, ${event.logicalKey.debugName}');
              _lastKeyEvents.add('');
              _idx++;
              if (_lastKeyEvents.length > 8) {
                _lastKeyEvents.removeAt(0);
              }
            });
          }
          return KeyEventResult.handled;
        },
        child: child,
      ),
    );
  }
}
```
</details>
### Screenshots or Video
I pressed `CapsLock` once, but there are two events, `24` and `25`.
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-CN)
โข Flutter version 3.24.3 on channel stable at /Users/rustdesk/workspace/devenv/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 2663184aa7 (6 weeks ago), 2024-09-11 16:27:48 -0500
โข Engine revision 36335019a8
โข Dart version 3.5.3
โข DevTools version 2.37.3
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
โข Android SDK at /Users/rustdesk/Library/Android/sdk
โ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
โ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[โ] Xcode - develop for iOS and macOS (Xcode 15.2)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15C500b
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2022.2)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b802.4-9586694)
[โ] VS Code (version 1.94.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.98.0
[!] Proxy Configuration
โข HTTP_PROXY is set
! NO_PROXY is not set
[โ] Connected device (4 available)
โข iPhone SE (3rd generation) (mobile) โข 38639D5D-28FC-44BB-A81F-A4336E3E4DC4 โข ios โข
com.apple.CoreSimulator.SimRuntime.iOS-17-2 (simulator)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.6.1 23G93 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.6.1 23G93 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 123.0.6312.123
[โ] Network resources
โข All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| platform-ios,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | low | Critical |