id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,609,027,514 | terminal | Running WT portable as a different user on W11 24H2 completely freezes the initiating user's logon session | ### Windows Terminal version
1.21.2911.0
This also happens with previous versions, both stable and preview, all of them portable.
### Windows build number
10.0.26100.2033
### Other Software
Default profile can be either Windows PowerShell or PowerShell
### Steps to reproduce
Windows 11 24H2 fully patched and domain joined.
WT portable is installed to the root of a local drive, e.g. C:\WT or D:\WT
Logon as a domain user
Start windowsterminal.exe > right-click > Run as a different user; the account is authenticated with YubiKey smartcard logon.
Type a command e.g. get-adcomputer xxx123
The whole logon session for the initiating user freezes and becomes unusable.
Sometimes a remote logoff of the frozen session can be forced, and the computer returns to normal operation. The user can log back on and work normally. Sometimes a hard reboot is needed.
We have verified this happens on several different computers and users.
This can happen after an in-place upgrade from 23H2 to 24H2 via Windows Update or with a completely clean 24H2 install.
It does not happen on the prior OS version, 23H2.
Running other applications as a different user does not provoke this behavior.
Next troubleshooting step will be to determine if it is related to just running as a different user or specifically to a different user using a smartcard.
### Expected Behavior
WT works normally when run as another user and does not freeze the initiating account logon.
### Actual Behavior
The whole logon session for the initiating user freezes and becomes unusable.
Sometimes a remote logoff of the frozen session can be forced, and the computer returns to normal operation. The user can log back on and work normally. Sometimes a hard reboot is needed. | Issue-Bug,Product-Terminal,Needs-Tag-Fix,Priority-1 | medium | Major |
2,609,028,120 | langchain | PowerBIDataSet credential not working: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs() | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from azure.identity import DefaultAzureCredential
from langchain_community.agent_toolkits import PowerBIToolkit, create_pbi_agent
from langchain_community.utilities.powerbi import PowerBIDataset
from langchain_openai import ChatOpenAI
fast_llm = ChatOpenAI(
temperature=0.5, max_tokens=1000, model_name="gpt-3.5-turbo", verbose=True
)
smart_llm = ChatOpenAI(temperature=0, max_tokens=100, model_name="gpt-4", verbose=True)
toolkit = PowerBIToolkit(
powerbi=PowerBIDataset(
dataset_id="AdvWorksTest",
table_names=["SalesLT SalesOrderHeader", "SalesLT SalesOrderDetail"],
credential=DefaultAzureCredential(),
),
llm=smart_llm,
)
agent_executor = create_pbi_agent(
llm=fast_llm,
toolkit=toolkit,
verbose=True,
)
```
### Error Message and Stack Trace (if applicable)
```python
{
"name": "ConfigError",
"message": "field \"credential\" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs().",
"stack": "---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
Cell In[30], line 7
1 fast_llm = ChatOpenAI(
2 temperature=0.5, max_tokens=1000, model_name=\"gpt-3.5-turbo\", verbose=True, api_key=OPEN_AI_API_KEY
3 )
4 smart_llm = ChatOpenAI(temperature=0, max_tokens=100, model_name=\"gpt-4\", verbose=True, api_key=OPEN_AI_API_KEY)
6 toolkit = PowerBIToolkit(
----> 7 powerbi=PowerBIDataset(
8 dataset_id=\"AdvWorksTest\",
9 table_names=[\"SalesLT SalesOrderHeader\", \"SalesLT SalesOrderDetail\"],
10 credential=DefaultAzureCredential(),
11 ),
12 llm=smart_llm,
13 )
15 agent_executor = create_pbi_agent(
16 llm=fast_llm,
17 toolkit=toolkit,
18 verbose=True,
19 )
File c:\\Users\\jerschi\\projects\\.conda\\Lib\\site-packages\\pydantic\\v1\\main.py:339, in BaseModel.__init__(__pydantic_self__, **data)
333 \"\"\"
334 Create a new model by parsing and validating input data from keyword arguments.
335
336 Raises ValidationError if the input data cannot be parsed to form a valid model.
337 \"\"\"
338 # Uses something other than `self` the first arg to allow \"self\" as a settable attribute
--> 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
341 raise validation_error
File c:\\Users\\jerschi\\projects\\.conda\\Lib\\site-packages\\pydantic\\v1\\main.py:1074, in validate_model(model, input_data, cls)
1071 if check_extra:
1072 names_used.add(field.name if using_name else field.alias)
-> 1074 v_, errors_ = field.validate(value, values, loc=field.alias, cls=cls_)
1075 if isinstance(errors_, ErrorWrapper):
1076 errors.append(errors_)
File c:\\Users\\jerschi\\projects\\.conda\\Lib\\site-packages\\pydantic\\v1\\fields.py:857, in ModelField.validate(self, v, values, loc, cls)
855 if self.type_.__class__ is ForwardRef:
856 assert cls is not None
--> 857 raise ConfigError(
858 f'field \"{self.name}\" not yet prepared so type is still a ForwardRef, '
859 f'you might need to call {cls.__name__}.update_forward_refs().'
860 )
862 errors: Optional['ErrorList']
863 if self.pre_validators:
ConfigError: field \"credential\" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs()."
}
```
### Description
Hello Team,
I'm trying to use the PowerBIToolkit to connect to my Power BI report and prompt an LLM. Unfortunately, I get the following error message:
"ConfigError: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs()."
A quick Google search showed me that this was an issue before (https://github.com/langchain-ai/langchain/issues/9823 and https://github.com/langchain-ai/langchain/issues/4325) that had been resolved, but was somehow reverted. The other issues are closed, so I opened a new one to get some attention on this.
There is a workaround that works locally, but I need a solution that works on an application level.
From what I found, the error lies in https://github.com/hwchase17/langchain/blob/master/langchain/utilities/powerbi.py (based on some comments and this https://github.com/Azure/azure-sdk-for-python/issues/30288)
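For background on what the error means, here is a minimal, standard-library-only sketch of how Python forward references behave; pydantic v1's `update_forward_refs()` performs an analogous resolution step once the referenced type is importable:

```python
from typing import get_type_hints

class Dataset:
    # The annotation refers to a name that doesn't exist yet, so it is
    # stored as the string "Credential" (a forward reference).
    credential: "Credential"

try:
    get_type_hints(Dataset)  # resolution fails while the name is missing
except NameError as exc:
    print("unresolved:", exc)

class Credential:  # the referenced type becomes available later
    pass

# Resolution now succeeds, analogous to calling update_forward_refs()
# after the dependency (here, the credential type) is importable.
print(get_type_hints(Dataset)["credential"] is Credential)  # True
```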
Best,
jerschi
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.9 | packaged by Anaconda, Inc. | (main, Apr 19 2024, 16:40:41) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_chroma: 0.1.3
> langchain_openai: 0.2.3
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> chromadb: 0.5.3
> dataclasses-json: 0.6.7
> fastapi: 0.112.2
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.52.1
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.32
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | 🤖:bug,stale | low | Critical |
2,609,053,980 | godot | Godot Editor randomly hangs, giving error `Condition "err != VK_SUCCESS" is true. Returning: FAILED' in the console` | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (Intel Corporation; 27.20.100.9415) - Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz (8 Threads)
### Issue description
Recently, the engine started randomly freezing, forcing me to close and restart the editor. I did not get any crashes like this before the 4.3 update; after downgrading to 4.2.2, the engine seems fine again. Whenever the engine crashes, I get the following error message on the console:
```
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
```
(The message is always shown exactly 6 times)
The freeze seems to happen randomly but always when I'm doing something (clicking on a button, moving the scene view, simply loading the project, ...). Sometimes the engine crashes immediately or within just a few minutes but sometimes it takes a bit longer.
### Steps to reproduce
As described above, the crash/freeze seems to occur randomly and I have not found any pattern besides the fact that it only seems to occur when I do something.
### Minimal reproduction project (MRP)
N/A | topic:rendering,topic:editor,needs testing | low | Critical |
2,609,112,197 | godot | Regression in Grid Snapping + Ruler on Godot 3.6, becomes inaccurate when zooming in | ### Tested versions
- Reproducible in Godot v3.6.stable.official [de2f0f147]
- Works fine in Godot 3.5.x
### System information
Fedora 40 Linux, tested on both AMD Vega + Intel Iris GPUs
### Issue description
This appears to be a regression of this issue in 3.6: godotengine/godot#70186
When working with a Tilemap of any size (say 64x64) and a grid set to 64x64, the grid and ruler become inaccurate the more you zoom in. The inaccuracy becomes more noticeable the further right you go from the origin Vector2(0, 0).
### Steps to reproduce
Here's a screencast showing this issue from my own project.
[grid-inaccurate-zoomed-godot-3.6.webm](https://github.com/user-attachments/assets/4044fe97-8d68-465f-b70f-253631d026f5)
### Minimal reproduction project (MRP)
Here's a simple project with a tilemap you can use to see the issue: turn on grid snapping at the same size as the tilemap, move a bit right from the origin, and zoom in a lot; you will notice the ruler and grid becoming increasingly inaccurate.
[SnapProblem.zip](https://github.com/user-attachments/files/17494925/SnapProblem.zip)
| bug,topic:editor | low | Minor |
2,609,118,337 | react-native | [0.76][iOS][Codegen] Fabric Native Component from website has a bug | ### Description
Currently [codegen](https://reactnative.dev/docs/next/fabric-native-components-introduction#1-run-codegen) generates `RCTThirdPartyFabricComponents` wrappers with references to the Fabric Components. If these components are inlined in your application, they fail to build (they're linked in the Pods and not the app, so the linker can't find the symbols). You'll see an error similar to this (except your component class will be missing):
```stacktrace
Undefined symbols for architecture arm64:
"_CustomWebViewCls", referenced from:
_RCTThirdPartyFabricComponentsProvider in libReact-RCTFabric.a[41](RCTThirdPartyFabricComponentsProvider.o)
ld: symbol(s) not found for architecture arm64
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
```
We're expecting a fix to ship in 0.76.1 (#47176), but leaving this issue here as a placeholder until that's the case.
## Appendix:
See the commented-out lines below; these are the edits you'll have to make manually until the fix ships.
```objc title="./node_modules/react-native/React/Fabric/RCTThirdPartyFabricComponentsProvider.h"
/*
* This code was generated by [react-native-codegen](https://www.npmjs.com/package/react-native-codegen).
*
* Do not edit this file as changes may cause incorrect behavior and will be lost
* once the code is regenerated.
*
* @generated by GenerateRCTThirdPartyFabricComponentsProviderH
*/
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wreturn-type-c-linkage"
#import <React/RCTComponentViewProtocol.h>
#ifdef __cplusplus
extern "C" {
#endif
Class<RCTComponentViewProtocol> RCTThirdPartyFabricComponentsProvider(const char *name);
#if RCT_NEW_ARCH_ENABLED
#ifndef RCT_DYNAMIC_FRAMEWORKS
//Class<RCTComponentViewProtocol> CustomWebViewCls(void) __attribute__((used)); // 0
#endif
#endif
#ifdef __cplusplus
}
#endif
#pragma GCC diagnostic pop
```
and
```objc title="./node_modules/react-native/React/Fabric/RCTThirdPartyFabricComponentsProvider.mm"
/**
* This code was generated by [react-native-codegen](https://www.npmjs.com/package/react-native-codegen).
*
* Do not edit this file as changes may cause incorrect behavior and will be lost
* once the code is regenerated.
*
* @generated by GenerateRCTThirdPartyFabricComponentsProviderCpp
*/
// OSS-compatibility layer
#import "RCTThirdPartyFabricComponentsProvider.h"
#import <string>
#import <unordered_map>
Class<RCTComponentViewProtocol> RCTThirdPartyFabricComponentsProvider(const char *name) {
static std::unordered_map<std::string, Class (*)(void)> sFabricComponentsClassMap = {
#if RCT_NEW_ARCH_ENABLED
#ifndef RCT_DYNAMIC_FRAMEWORKS
// {"CustomWebView", CustomWebViewCls}, // 0
#endif
#endif
};
auto p = sFabricComponentsClassMap.find(name);
if (p != sFabricComponentsClassMap.end()) {
auto classFunc = p->second;
return classFunc();
}
return nil;
}
```
### Steps to reproduce
N/A
### React Native Version
0.76.0
### Affected Platforms
Runtime - iOS, Build - MacOS
### Output of `npx react-native info`
```text
N/A
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://github.com/cipolleschi/InAppComponent
### Screenshots and Videos

| Platform: iOS,Resolution: PR Submitted | low | Critical |
2,609,175,806 | vscode | Git - diff view does not work for PDF files |
Type: <b>Bug</b>
Steps to reproduce:
1. Create a Git repository and put a PDF file into it.
2. Commit the PDF file.
3. Change the PDF file.
4. Repeat 2. and 3.
5. In the _Source Control_ view, click the PDF filename.
6. Click _Open Anyway_ and select the default text editor.
7. Two columns open that should show the old and the new version of the PDF. However, only the new version on the right is displayed correctly. On the left, only one character (``, reads like `FF`) is displayed. (The two PDF file versions are almost identical in my example.)


8. Now, in the _Source Control Graph_, click the second commit from 4.
9. This time, Both sides of the diff only show the character from before.

It seems like the wrong content is shown for PDFs that are not currently checked out.
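For what it's worth, git itself preserves both versions correctly. A quick shell check (hypothetical file names, a text stand-in for a real PDF) confirms the old blob is intact, which points at the viewer rather than the repository:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email you@example.com && git config user.name you
printf '%%PDF-1.4 version one\n' > doc.pdf   # stand-in for a real PDF
git add doc.pdf && git commit -qm 'v1'
printf '%%PDF-1.4 version two\n' > doc.pdf
git commit -qam 'v2'
# The previous version is retrievable directly from git:
git show HEAD~1:doc.pdf
```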
----
Note: In practice, one would usually not look at PDFs like this, of course. One would use a PDF viewer extension. However, this bug causes the PDF viewers to also fail to show the old versions as described e.g. in https://github.com/tomoki1207/vscode-pdfviewer/issues/70, since they do not receive the correct PDF. The steps above show that the issue does not lie with the extensions.
----
VS Code version: Code - Insiders 1.95.0-insider (fe997185b5e6db94693ed6ef5456cfa4e8211edf, 2024-10-23T05:06:13.568Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-9500 CPU @ 3.00GHz (6 x 3000)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.78GB (4.50GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256198
vsc_aacf:30263846
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
da93g388:31013173
dvdeprecation:31040973
dwnewjupyter:31046869
nb_pri_only:31057983
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-t:31151552
cf971741:31144450
e80f6927:31120813
autoexpandse:31146404
12bdf347:31141542
iacca1:31150324
notype1:31143044
dwcopilot:31158714
g7688163:31155431
```
</details>
<!-- generated by issue reporter --> | bug,git | low | Critical |
2,609,248,574 | TypeScript | Invalid import path rewrite in declaration file | ### 🔎 Search Terms
"import paths", "imports rewrite"
### 🕗 Version & Regression Information
- This changed between versions 5.5.4 and 5.6.3
### ⏯ Repository with repro
https://github.com/MariaSolOs/path-rewrite-bug
### 💻 Code
The linked repository contains all of the necessary code to reproduce the issue. This is a pnpm monorepo with 4 packages.
`mod1` simply exports the following interface and makes it available outside the package:
```ts
// mod1/index.ts
export interface Foo {
foo: string;
}
```
```jsonc
// mod1/package.json
{
...
"exports": {
".": {
"types": "./dist/index.d.ts",
"default": "./dist/index.js"
}
}
}
```
`mod2` depends on `mod1` and exports its types.
```ts
// mod2/index.ts
import { Foo } from "mod1";
export const createFoo = (foo: string): Foo => ({ foo });
export type * from "mod1";
```
```jsonc
// mod2/package.json
{
...
"exports": {
".": {
"types": "./dist/index.d.ts",
"default": "./dist/index.js"
}
},
"dependencies": {
"mod1": "workspace:*"
}
}
```
`mod3` depends on `mod2` and exports a simple `getFoo` function using the imported function from `mod2`:
```ts
// mod3/index.ts
import { createFoo } from "mod2";
export const getFoo = () => createFoo("foo");
```
```jsonc
// mod3/package.json
{
...
"exports": {
".": {
"types": "./dist/index.d.ts",
"default": "./dist/index.js"
}
},
"dependencies": {
"mod2": "workspace:*"
}
}
```
Finally, `mod4` depends on `mod3` and uses `getFoo` to create another function:
```ts
// mod4/index.ts
import { getFoo } from "mod3";
export const foo = getFoo();
```
```jsonc
// mod4/package.json
{
...
"exports": {
".": {
"types": "./dist/index.d.ts",
"default": "./dist/index.js"
}
},
"dependencies": {
"mod3": "workspace:*"
}
}
```
### 🙁 Actual behavior
Because `mod3` has an invalid `index.d.ts`, `mod4` fails compilation when doing a lib check of `mod3`'s type declaration files. This is the invalid declaration file from `mod3`:
```ts
export declare const getFoo: () => import("mod1").Foo;
//# sourceMappingURL=index.d.ts.map
```
### 🙂 Expected behavior
For the behavior from previous versions of TypeScript to be maintained: since `mod2` re-exports `mod1`'s types, and since `mod3` depends on `mod2`, compiling `mod3` should produce this:
```ts
export declare const getFoo: () => import("mod2").Foo;
//# sourceMappingURL=index.d.ts.map
```
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,609,263,131 | react | [Compiler Bug]: TS types missing for `eslint-plugin-react-compiler` | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://www.typescriptlang.org/play/?#code/JYWwDg9gTgLgBFApgQwMYwMIXMANoqOAMymzgHJEBnXYAOxgFoxcBXAc3saTSdWzB4C5ANwAoIA
### Repro steps
1. Configure `compilerOptions.checkJs = true` in `tsconfig.json`
2. Import from `eslint-plugin-react-compiler` in `eslint.config.js`
3. 💥 Observe the error below
```
Could not find a declaration file for module 'eslint-plugin-react-compiler'. 'node_modules/eslint-plugin-react-compiler/dist/index.js' implicitly has an 'any' type.
Try `npm i --save-dev @types/eslint-plugin-react-compiler` if it exists or add a new declaration (.d.ts) file containing `declare module 'eslint-plugin-react-compiler';`ts(7016)
```

### How often does this bug happen?
Every time
### What version of React are you using?
19.0.0-rc-69d4b800-20241021
### What version of React Compiler are you using?
19.0.0-beta-8a03594-20241020
### Alternatives Considered
I thought maybe there would be types at [`@types/eslint-plugin-react-compiler`](https://www.npmjs.com/package/@types/eslint-plugin-react-compiler) (DefinitelyTyped), but this does not exist
| Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,609,325,297 | go | x/build: test missing from the list of test events | I vote commit-queue in https://go-review.googlesource.com/c/go/+/621997 ,
See https://logs.chromium.org/logs/golang/buildbucket/cr-buildbucket/8733272018311763649/+/u/step/11/log/2
The linux-amd64-race result contained: <code>Status for test slices.BenchmarkCompactFunc/sorted is missing from the list of test events. Setting to `pass` because package passed.</code> | Builders,NeedsInvestigation | low | Minor |
2,609,329,646 | angular | @defer is broken for dynamic component (maybe) | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
Yes
### Description
I have been trying to lazy-load a component in the browser during the idle state. The issue is that the component is inside an ng-template, and I am using `createComponent` to dynamically create a component (modal) and projecting some dynamic content inside it. I am trying to lazy-load this dynamic content with @defer, but it doesn't seem to work.
My question is: if the component inside the ng-template loads eagerly without @defer, why doesn't it lazy-load when I apply @defer (on idle)? The component only loads after I inject the ng-template content into the dynamic component and attach it to the DOM.
(Please close and ignore this issue if this is dumb - I'm just learning new stuffs in Angular.)


as you can see the component should have loaded on the browser idle state but instead it is getting loaded when the dynamic content is getting attached to the DOM.
### Please provide a link to a minimal reproduction of the bug
https://ng-spark-ebon.vercel.app/components/dialog
### Please provide the exception or error you saw
You can see in the Network tab that the component loads only after it is attached to the DOM, not while in an idle state
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.2.8
Node: 20.16.0
Package Manager: npm 10.8.1
OS: win32 x64
### Anything else?
_No response_ | area: core,core: defer | low | Critical |
2,609,335,109 | deno | Vite doesn't support Deno's `jsr:` imports | Deno version: deno --version
deno 2.0.2 (stable, release, x86_64-pc-windows-msvc)
v8 12.9.202.13-rusty
typescript 5.6.2
Here is a repo: https://github.com/Ciantic/deno-pure-solid-start
```
11.50.18 [vite] Error when evaluating SSR module C:/Source/JavaScript/solid-start-test/deno-solid-test/src/routes/index.tsx?pick=default&pick=$css: failed to import "jsr:@db/sqlite@0.12"
|- Error: Cannot find module 'jsr:@db/sqlite@0.12' imported from 'C:/Source/JavaScript/solid-start-test/deno-solid-test/src/db/db.ts'
at nodeImport (C:\Source\JavaScript\solid-start-test\deno-solid-test\node_modules\.deno\vite@5.4.9\node_modules\vite\dist\node\chunks\dep-Cyk9bIUq.js:53036:19)
at ssrImport (C:\Source\JavaScript\solid-start-test\deno-solid-test\node_modules\.deno\vite@5.4.9\node_modules\vite\dist\node\chunks\dep-Cyk9bIUq.js:52903:22)
at undefined
at async instantiateModule (C:\Source\JavaScript\solid-start-test\deno-solid-test\node_modules\.deno\vite@5.4.9\node_modules\vite\dist\node\chunks\dep-Cyk9bIUq.js:52961:5)
```
[Discord link](https://discord.com/channels/684898665143206084/1298570120653836289) | needs investigation | low | Critical |
2,609,335,871 | go | path/filepath: Walk/WalkDir susceptible to symlink race | The filepath.Walk and filepath.WalkDir functions are documented as not following symbolic links.
Both these functions are susceptible to a TOCTOU (time of check/time of use) race condition where a portion of the path being walked is replaced with a symbolic link while the walk is in progress.
The impact of this race condition is either mitigated or exacerbated (depending on your perspective) by the fact that the Walk/WalkDir API is fundamentally subject to TOCTOU races: Walk/WalkDir provides the names of files to a WalkFunc/WalkDirFunc, but the file may be replaced in between the WalkFunc/WalkDirFunc being invoked and making use of the file name. This fundamental raciness means that a WalkFunc/WalkDirFunc that needs to defend against symlink traversal must use a traversal-resistant API to access files, such as github.com/google/safeopen or the proposed os.Root (#67002). Using a traversal-resistant file API will also defend against races in Walk/WalkDir itself.
Because of the inherent raciness of the Walk/WalkDir API, and the fact that fixing the TOCTOU vulnerability requires non-trivial implementation changes, we are classifying this as a PUBLIC track vulnerability.
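Since os.Root was still only a proposal at the time of writing, here is a rough, POSIX-only sketch (in Python, purely for illustration) of what a traversal-resistant open looks like: every path component is opened relative to an already-held directory descriptor with O_NOFOLLOW, so a component swapped for a symlink mid-walk makes the open fail rather than follow the link.

```python
import os
import tempfile

base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, "sub"))
with open(os.path.join(base, "sub", "data.txt"), "w") as f:
    f.write("ok")

# Hold a descriptor for the walk root; open every component relative to
# it with O_NOFOLLOW, so symlink substitution cannot redirect the open.
root_fd = os.open(base, os.O_RDONLY)
try:
    sub_fd = os.open("sub", os.O_RDONLY | os.O_NOFOLLOW, dir_fd=root_fd)
    try:
        file_fd = os.open("data.txt", os.O_RDONLY | os.O_NOFOLLOW, dir_fd=sub_fd)
        try:
            content = os.read(file_fd, 16).decode()
        finally:
            os.close(file_fd)
    finally:
        os.close(sub_fd)
finally:
    os.close(root_fd)

print(content)  # ok
```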
This has been assigned CVE-2024-8244. | Security | low | Minor |
2,609,379,891 | pytorch | [inductor] cpp gemm autotune doesn't work on AMD EPYC | Repro:
```
python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleCpu.test_misc_1_max_autotune_True_cpu
```
Error:
```
File "/data/users/binbao/pytorch/torch/_inductor/codegen/common.py", line 2407, in maybe_append_choice
choices.append(self.generate(**kwargs))
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_inductor/select_algorithm.py", line 1123, in generate
choice_caller = self._wrapped.generate(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_inductor/codegen/cpp_template.py", line 48, in generate
code = kernel.render(self, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_inductor/codegen/cpp_template_kernel.py", line 51, in render
template.render(kernel=self, **kwargs), self.render_hooks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 1061, in render
self.log_blockings()
File "/data/users/binbao/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 504, in log_blockings
log.debug(f"Cache blocking: {self.cache_blocking()}") # noqa: G004
^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 5, in cache_blocking_cache_on_self
File "/data/users/binbao/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 497, in cache_blocking
return GemmBlocking(*get_cache_blocking(register_blocking, thread_blocking))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 430, in get_cache_blocking
L1_cache_size > 0
torch._inductor.exc.LoweringException: AssertionError: Expect L1_cache_size > 0 but got 0
target: aten.addmm.default
args[0]: TensorBox(StorageBox(
ConstantBuffer(name='L__self___mlp_0_bias', layout=FixedLayout('cpu', torch.float32, size=[64], stride=[1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg9_1', layout=FixedLayout('cpu', torch.float32, size=[16, 128], stride=[128, 1]))
))
args[2]: TensorBox(
ReinterpretView(
StorageBox(
ConstantBuffer(name='L__self___mlp_0_weight', layout=FixedLayout('cpu', torch.float32, size=[64, 128], stride=[128, 1]))
),
FixedLayout('cpu', torch.float32, size=[128, 64], stride=[1, 128]),
origins=OrderedSet([permute])
)
)
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @chauhang @penguinwu | module: cpu,triaged,oncall: pt2 | low | Critical |
2,609,397,465 | flutter | [ios]iPadGestureTests in scenario test app failure | ### Use case
Test: https://github.com/flutter/engine/blob/4e6405e2a89fa8fda2c0e7017ddcd8b39d7b3052/testing/scenario_app/ios/Scenarios/ScenariosUITests/iPadGestureTests.m
2 problems:
1. it fails
2. it's not running on CI
### Proposal
- Fix the failure
- Run on CI (otherwise the test isn't really helpful) | platform-ios,engine,P2,team-ios,triaged-ios | low | Critical |
2,609,435,342 | vscode | focusing cell will show previous cursor location | Steps to Reproduce:
1. Place cursor in markdown cell, somewhere in the NOT visible area of the screen
2. Click out of the cell
3. Try to double click somewhere in the visible area of the screen to highlight text
🐛Cell scrolls to the previously placed cursor area (which is great if I'm single-clicking into the cell, but in this case, it feels glitchy to me)
| bug,notebook-markdown | low | Minor |
2,609,446,780 | youtube-dl | [Vimeo] HTTP Error 406: Not Acceptable when downloading video | ## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.12.17** (Also tried latest git)
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
$ youtube-dl -v -x 'https://vimeo.com/9010456'
[debug] System config: []
[debug] User config: ['--socket-timeout=2']
[debug] Custom config: []
[debug] Command-line args: ['-v', '-x', 'https://vimeo.com/9010456']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Git HEAD: a70dc03d2
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.0-9-generic-x86_64-with-glibc2.40 - OpenSSL 3.3.1 4 Jun 2024 - glibc 2.40
[debug] exe versions: ffmpeg 7.0.2, ffprobe 7.0.2, rtmpdump 2.4
[debug] Proxy map: {}
[vimeo] 9010456: Downloading webpage
[vimeo] 9010456: Downloading JSON metadata
[vimeo] 9010456: Downloading JSON metadata
ERROR: Unable to download JSON metadata: HTTP Error 406: Not Acceptable (caused by <HTTPError 406: 'Not Acceptable'>); please report this issue on https://github.com/ytdl-org/youtube-dl/issues , using the appropriate issue template. Make sure you are using the latest version; see https://github.com/ytdl-org/youtube-dl/#user-content-installation on how to update. Be sure to call youtube-dl with the --verbose option and include the complete output.
File "/home/ori/devel/youtube-dl/youtube_dl/extractor/common.py", line 679, in _request_webpage
return self._downloader.urlopen(url_or_request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ori/devel/youtube-dl/youtube_dl/YoutubeDL.py", line 2496, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 521, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 630, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 559, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 639, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
```
## Description
I think Vimeo added some protections, and youtube-dl needs to adapt. For a while it returned 403 Forbidden and 429 Too Many Requests, and now it's returning 406.
I don't think it's related to #26421 from 2020, as, IIRC, I managed to download from Vimeo between then and now.
`yt-dlp` successfully downloaded the video, and they have made changes to the Vimeo extractor in the past year, particularly adding browser impersonation. (See https://github.com/yt-dlp/yt-dlp/issues/10325) | broken-IE | low | Critical |
2,609,457,230 | node | API for temporarily suppressing experimental warnings | Spun off from https://github.com/nodejs/node/issues/55417 to discuss this more generally.
While it can be abused and lead to unwanted issues, CLI tools inevitably want to suppress these warnings for cleaner output, and they already find ways to do it. One example I've found is how Yarn does it, by temporarily monkey-patching `process.emit`:
https://github.com/yarnpkg/berry/blob/031b5da1dc8e459e844efda137b2f00d7cdc9dda/packages/yarnpkg-pnp/sources/loader/applyPatch.ts#L304-L325
So I think we might as well provide a proper API for this instead of having users resort to monkey-patching.
We could just have something quick and simple like a `process.noExperimentalWarning` toggle that gets checked before the warning is emitted (similar to `process.noDeprecation`):
```js
// Temporarily turn off the warning
process.noExperimentalWarning = true;
// ...use the experimental API
// Turn the warning back on
process.noExperimentalWarning = false;
```
Or some API that toggles specific warnings (this requires assigning codes to experiments, which we currently do not have):
```js
process.silenceExperimentalWarning && process.silenceExperimentalWarning('EXP1', true);
// ...use the experimental API
process.silenceExperimentalWarning && process.silenceExperimentalWarning('EXP1', false);
```
Or some API that takes a function and runs it without emitting warnings (this may be a bit awkward for async APIs, but it ensures that users don't forget to toggle it back):
```js
process.runWithoutExperimentalWarning(() => {
// ...use the experimental API
}, 'EXP1'); // if the experiment code is not passed, silence all experimental warnings
```
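For comparison, Python's standard library already ships a scoped variant of this last idea: `warnings.catch_warnings` plus `simplefilter` suppresses a warning category only inside a block and automatically restores the previous filters on exit. It may be a useful precedent for the callback/scoped design (the `use_experimental_api` function below is just a stand-in):

```python
import warnings

def use_experimental_api():
    # stand-in for a call that emits an experimental-feature warning
    warnings.warn("experimental API", FutureWarning)

with warnings.catch_warnings():
    # suppressed only inside this block; previous filters are restored on exit
    warnings.simplefilter("ignore", FutureWarning)
    use_experimental_api()
```

The context-manager shape has the same "can't forget to toggle it back" property as the callback-based proposal.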
For the variants that take codes, we could also merge this with deprecation warning handling and just have e.g. `process.silenceWarning()`/`process.runWithoutWarning()` that also accept `DEP` codes.
More ideas are welcome, too. | feature request,experimental | low | Minor |
2,609,476,969 | ui | [bug]: Cannot install npx shadcn@latest add sidebar-07 | ### Describe the bug

### Affected component/components
sidebar
### How to reproduce
npx shadcn@latest add sidebar-07
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,609,502,122 | electron | Session.clearData restores Network Persistent State on shutdown | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS 14.7
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
Calling `session.clearData()` should clear data found in `~/Library/Application Support/AppName/Network Persistent State`
### Actual Behavior
`session.clearData()` clears data found in `~/Library/Application Support/AppName/Network Persistent State`
However, after application shutdown, the data is restored to the persisted file.
### Testcase Gist URL
https://gist.github.com/samuelmaddock/64c050c97f751be9452c415ffcc052c8
### Additional Information
[BrowsingDataRemoverImpl](https://source.chromium.org/chromium/chromium/src/+/main:content/browser/browsing_data/browsing_data_remover_impl.h;drc=08efa89c7d73d72e6ebcddbf053c41a230dc1ba8;l=36)::[RemoveImpl](https://source.chromium.org/chromium/chromium/src/+/main:content/browser/browsing_data/browsing_data_remover_impl.cc;drc=7fa0c25da15ae39bbd2fd720832ec4df4fee705a;l=308) ends up calling [NetworkContext](https://source.chromium.org/chromium/chromium/src/+/main:services/network/network_context.cc;drc=08efa89c7d73d72e6ebcddbf053c41a230dc1ba8;l=1266)::[ClearNetworkingHistoryBetween](https://source.chromium.org/chromium/chromium/src/+/main:services/network/network_context.cc;drc=08efa89c7d73d72e6ebcddbf053c41a230dc1ba8;l=1266), which for some reason isn't invoked until shutdown. | platform/macOS,bug :beetle:,status/confirmed,has-repro-gist,32-x-y,33-x-y | low | Critical |
2,609,527,462 | vscode | `"editor.formatOnSaveMode": "modificationsIfAvailable"` often broken | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Not applicable : Extensions needed
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.94.2
Commit: 384ff7382de624fb94dbaf6da11977bba1ecd427
Date: 2024-10-09T16:08:44.566Z
Browser: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Code/1.94.2 Chrome/124.0.6367.243 Electron/30.5.1 Safari/537.36
See https://github.com/microsoft/vscode/issues/203463
`"editor.formatOnSaveMode": "modificationsIfAvailable"` is often broken, even though its description states:
> Will attempt to format modifications only (requires source control). If source control can't be used, then the whole file will be formatted.
Couldn't it fall back to formatting the whole file when `modificationsIfAvailable` fails? | feature-request,formatting | low | Critical |
2,609,552,902 | pytorch | [PP] Add unit tests to check for memory regressions | We ran into some memory issues which were fixed in this stack: https://github.com/pytorch/pytorch/pull/138119, https://github.com/pytorch/pytorch/pull/138504
As a follow up we need to add unit tests to prevent regressions in pipelining memory usage. There are currently no tests that test pipelining schedules intra-memory allocation, but an idea is to do something similar to FSDP unit tests: https://github.com/pytorch/pytorch/pull/138119#issuecomment-2418036449 | triaged,module: pipelining | low | Minor |
2,609,562,835 | next.js | Next.js 15 stable codemod has now caused my local font imports to produce hydration errors | I've updated the repo, simplified it, and created easy steps to reproduce this issue based on @timneutkens' feedback. I'm editing the original comment below to reflect all that.
### Link to the code that reproduces this issue
https://github.com/tr1s/trisanity-test
### To Reproduce
1. `git clone https://github.com/tr1s/trisanity-test.git`
1. `cd trisanity-test`
1. `npm i`
1. `npm run dev`
The hydration errors:

### Current vs. Expected behavior
Current behaviour: a working app with hydration errors
Expected behaviour: a working app without hydration errors
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 20.17.0
npm: 10.9.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: 15.0.1
react: 19.0.0-rc-69d4b800-20241021
react-dom: 19.0.0-rc-69d4b800-20241021
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Font (next/font)
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I ran the Next.js 15 codemod and now I have hydration errors on my local font imports. I've tried with and without Turbopack; same issues.
There are two ways to remove the hydration error:
1. Remove the default `src/app/layout.tsx`
1. Go to the `src/app/(frontend)/layout.tsx` file and remove the `className` block from the `<html>` tag:
```
return (
<html
lang="en"
className={`
${GTFAdieu.variable}
${GTFAdieu_Slanted.variable}
${GTFAdieu_Backslant.variable}
${Source_Sans_3.variable}
`}
>
<body>
<Nav />
<main role="main" id="main-content" tabIndex={-1}>
{children}
</main>
</body>
</html>
);
```
The `src/app/layout.tsx` was left as is so it wouldn't mess up any Sanity Studio UI. Then I created a `src/app/(frontend)/layout.tsx` where the layout for my frontend would live—this is where I imported my fonts. I was following along with [this Sanity tutorial](https://www.sanity.io/learn/course/content-driven-web-application-foundations) in which they did this.
With this setup (a default layout plus a nested `(frontend)` layout) I was not getting any hydration errors until Next.js RC2 and beyond. RC1 was working fine, and that's what the tutorial was based on.
Here's my `fonts.js` that was working previously with Next 15 RC1:
```
import localFont from 'next/font/local';
export const GTFAdieu = localFont({
variable: '--font-GTFAdieu',
src: [
{
path: '../../public/fonts/GTFAdieuTRIAL-Light.otf',
weight: '300',
style: 'normal',
},
{
path: '../../public/fonts/GTFAdieuTRIAL-Regular.otf',
weight: '400',
style: 'normal',
},
{
path: '../../public/fonts/GTFAdieuTRIAL-Bold.otf',
weight: '700',
style: 'normal',
},
],
});
export const GTFAdieu_Backslant = localFont({
variable: '--font-GTFAdieu-Backslant',
src: [
{
path: '../../public/fonts/GTFAdieuTRIAL-LightBackslant.otf',
weight: '300',
style: 'italic',
},
{
path: '../../public/fonts/GTFAdieuTRIAL-RegularBackslant.otf',
weight: '400',
style: 'italic',
},
{
path: '../../public/fonts/GTFAdieuTRIAL-BoldBackslant.otf',
weight: '700',
style: 'italic',
},
],
});
export const GTFAdieu_Slanted = localFont({
variable: '--font-GTFAdieu-Slanted',
src: [
{
path: '../../public/fonts/GTFAdieuTRIAL-RegularSlanted.otf',
weight: '400',
style: 'italic',
},
{
path: '../../public/fonts/GTFAdieuTRIAL-BoldSlanted.otf',
weight: '700',
style: 'italic',
},
],
});
export const Source_Sans_3 = localFont({
variable: '--font-Source-Sans-3',
src: [
{
path: '../../public/fonts/source-sans-3-v4-latin-regular.woff2',
weight: '400',
style: 'normal',
},
{
path: '../../public/fonts/source-sans-3-v4-latin-700.woff2',
weight: '700',
style: 'normal',
},
],
});
```
That's about all the context I can provide. Given that, do we know what may be going on here? | bug,Font (next/font) | medium | Critical |
2,609,582,587 | pytorch | `maybe_mark_dynamic` causes max recursion error when used with compile during tensordict consolidation | ## Context
We would like to consolidate a tensordict made of NJTs into a single storage.
This storage should contain all the values, offsets, and lengths of all the leaves in the TD. The operation looks like this:
```python
swaps = copy(tensors)
for i, t in enumerate(tensors):
    # flatten each tensor to a 1D byte view
    swaps[i] = t.view(torch.uint8).view(-1)
# concatenate every byte view into the single preallocated storage
torch.cat(swaps, out=storage)
out = storage.split(tensor_lengths)
# restore each slice to its source dtype and shape
out = [t.view(t_src.dtype).view(t_src.shape) for t, t_src in zip(out, tensors)]
```
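The byte-level round trip above can be sketched without PyTorch. This toy version (plain `bytes` buffers standing in for uint8 tensor storages; the `consolidate` helper name is mine, not tensordict's) only illustrates the cat-then-split-by-lengths idea:

```python
def consolidate(buffers):
    """Concatenate byte buffers into one contiguous storage, then split it
    back into slices of the original lengths (toy model of the NJT case)."""
    lengths = [len(b) for b in buffers]
    storage = b"".join(buffers)  # single contiguous storage
    out, offset = [], 0
    for n in lengths:
        out.append(storage[offset:offset + n])
        offset += n
    return storage, out

storage, views = consolidate([b"values", b"offsets", b"lengths"])
assert views == [b"values", b"offsets", b"lengths"]
assert len(storage) == sum(map(len, views))
```

The real version additionally has to round-trip dtypes and shapes, which is where the `view` calls in the traceback come from.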
This operation can be heavily CPU-overhead-bound, and when compiled we get a nice 2x speed-up. It should work with dynamic shapes (torchrec users, for instance, may know how many jagged tensors they'll get ahead of time, but not how big each one will be).
## Issue
Currently, doing that when the number of NJTs exceeds 64 or so causes a max recursion depth error.
The line responsible for this is this one:
https://github.com/pytorch/pytorch/blob/cd9c6e9408dd79175712223895eed36dbdc84f84/torch/nested/_internal/nested_tensor.py#L145C9-L145C41
I created a minimal reproducible example for this that only requires PT to be installed:
https://gist.github.com/vmoens/c15e6a6862ad849e30856034cbf33f1a
If you comment out line 8:
```python
torch._dynamo.maybe_mark_dynamic(values, 0)
```
then the code runs fine (although compile time can be exceedingly long).
With this line uncommented, you should get:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_method view(*(FakeTensor(..., size=(4*s141,), dtype=torch.uint8), torch.float32), **{}):
maximum recursion depth exceeded while calling a Python object
from user code:
File "/Users/vmoens/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch_8.py", line 57, in consolidate
result = [
File "/Users/vmoens/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch_8.py", line 58, in <listcomp>
view_old_as_new(v, oldv)
File "/Users/vmoens/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch_8.py", line 52, in view_old_as_new
v = v.view(oldv.dtype)
```
cc @ezyang @chauhang @penguinwu @eellison @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 @bdhirsh @yf225 @jbschlosser @anijain2305
| triaged,oncall: pt2,module: fakeTensor,module: dynamic shapes,module: dynamo,module: pt2-dispatcher | low | Critical |
2,609,634,390 | go | x/tools/cmd/splitdwarf/internal/macho: unrecognized failures | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/cmd/splitdwarf/internal/macho" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8733272192998882241)):
`FAIL golang.org/x/tools/cmd/splitdwarf/internal/macho [build failed]`
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | low | Critical |
2,609,637,563 | yt-dlp | Regex in match-filters doesn't work | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Hi yt-dlp team!
I want to download YT videos only from playlists with specific titles. For this I want to use a regex in `--match-filters`, but the regex doesn't work as I expect. I tried different distros (now I use Fedora 39) and yt-dlp both from GitHub and from the Fedora repos, and I have no idea why it doesn't work.
Listing all playlists on the channel, I see two playlists, `testplaylist` and `test playlist` (I use the space on purpose, for regex tests):
```
[debug] Command-line config: ['-vU', '--print', 'playlist_title', 'https://www.youtube.com/@maxmuller8233/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.3-100.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.4 24 Oct 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.20, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@maxmuller8233/playlists
[youtube:tab] @maxmuller8233/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] Downloading playlist: Max Muller - Playlists
[youtube:tab] Playlist Max Muller - Playlists: Downloading 2 items of 2
[download] Downloading item 1 of 2
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY: Downloading webpage
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY: Redownloading playlist API JSON with unavailable videos
[download] Downloading playlist: test playlist
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (1/3)...
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (2/3)...
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (3/3)...
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Giving up after 3 retries
[youtube:tab] Playlist test playlist: Downloading 1 items of 1
[download] Downloading item 1 of 1
[youtube] Extracting URL: https://www.youtube.com/watch?v=kCnQ1Y6RZ-o
[youtube] kCnQ1Y6RZ-o: Downloading webpage
[youtube] kCnQ1Y6RZ-o: Downloading ios player API JSON
[youtube] kCnQ1Y6RZ-o: Downloading mweb player API JSON
[youtube] kCnQ1Y6RZ-o: Downloading player fb725ac8
[debug] Saving youtube-nsig.fb725ac8 to cache
[debug] [youtube] Decrypted nsig PlLe52Si6ctlik7g => WbSF-0g35geFgg
[debug] Loading youtube-nsig.fb725ac8 from cache
[debug] [youtube] Decrypted nsig kR6EC9SUBvXn1Ml6 => rIZHKeSKhKlk6g
[youtube] kCnQ1Y6RZ-o: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] kCnQ1Y6RZ-o: Downloading 1 format(s): 136+251
test playlist
[download] Finished downloading playlist: test playlist
[download] Downloading item 2 of 2
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3: Downloading webpage
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3: Redownloading playlist API JSON with unavailable videos
[download] Downloading playlist: testplaylist
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (1/3)...
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (2/3)...
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (3/3)...
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Giving up after 3 retries
[youtube:tab] Playlist testplaylist: Downloading 1 items of 1
[download] Downloading item 1 of 1
[youtube] Extracting URL: https://www.youtube.com/watch?v=kCnQ1Y6RZ-o
[youtube] kCnQ1Y6RZ-o: Downloading webpage
[youtube] kCnQ1Y6RZ-o: Downloading ios player API JSON
[youtube] kCnQ1Y6RZ-o: Downloading mweb player API JSON
[debug] Loading youtube-nsig.fb725ac8 from cache
[debug] [youtube] Decrypted nsig JZ6kSVM8Kkv0qWQG => bNPk226_r1ucuQ
[debug] Loading youtube-nsig.fb725ac8 from cache
[debug] [youtube] Decrypted nsig kmmmMYSZqq7tXNU7 => BmEtcCXsi5OKKA
[youtube] kCnQ1Y6RZ-o: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] kCnQ1Y6RZ-o: Downloading 1 format(s): 136+251
testplaylist
[download] Finished downloading playlist: testplaylist
[download] Finished downloading playlist: Max Muller - Playlists
```
### More examples
I want to print only the playlist named `testplaylist`, but I get nothing:
```
[debug] Command-line config: ['-vU', '--print', 'playlist_title', '--match-filters', "playlist_title~='testplaylist'", 'https://www.youtube.com/@maxmuller8233/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.3-100.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.4 24 Oct 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.20, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@maxmuller8233/playlists
[youtube:tab] @maxmuller8233/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] entry does not pass filter (playlist_title~='testplaylist'), skipping ..
```
I tried another regex, but got the same result:
```
[debug] Command-line config: ['-vU', '--print', 'playlist_title', '--match-filters', "playlist_title~='.*test.*'", 'https://www.youtube.com/@maxmuller8233/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.3-100.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.4 24 Oct 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.20, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@maxmuller8233/playlists
[youtube:tab] @maxmuller8233/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] entry does not pass filter (playlist_title~='.*test.*'), skipping ..
```
OK, let's drop the regex and try a strict equality match instead:
```
[debug] Command-line config: ['-vU', '--print', 'playlist_title', '--match-filters', "playlist_title='testplaylist'", 'https://www.youtube.com/@maxmuller8233/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.3-100.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.4 24 Oct 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.20, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@maxmuller8233/playlists
[youtube:tab] @maxmuller8233/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] entry does not pass filter (playlist_title='testplaylist'), skipping ..
```
### Working example
BUT! If I negate the expression, it works! yt-dlp skips the playlist named `testplaylist` and shows me the playlist named `test playlist`.
```
[debug] Command-line config: ['-vU', '--print', 'playlist_title', '--match-filters', "playlist_title!='testplaylist'", 'https://www.youtube.com/@maxmuller8233/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.3-100.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.4 24 Oct 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.20, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@maxmuller8233/playlists
[youtube:tab] @maxmuller8233/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] Downloading playlist: Max Muller - Playlists
[youtube:tab] Playlist Max Muller - Playlists: Downloading 2 items of 2
[download] Downloading item 1 of 2
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY: Downloading webpage
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY: Redownloading playlist API JSON with unavailable videos
[download] Downloading playlist: test playlist
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (1/3)...
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (2/3)...
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (3/3)...
[youtube:tab] PLItuMrVtCQt2IsZrfsOq2wKA3QHQyDBsY page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Giving up after 3 retries
[youtube:tab] Playlist test playlist: Downloading 1 items of 1
[download] Downloading item 1 of 1
[youtube] Extracting URL: https://www.youtube.com/watch?v=kCnQ1Y6RZ-o
[youtube] kCnQ1Y6RZ-o: Downloading webpage
[youtube] kCnQ1Y6RZ-o: Downloading ios player API JSON
[youtube] kCnQ1Y6RZ-o: Downloading mweb player API JSON
[debug] Loading youtube-nsig.a62d836d from cache
[debug] [youtube] Decrypted nsig Po-B41SuepJBdEQlr => 6sd2_4Xe9NBoGg
[debug] Loading youtube-nsig.a62d836d from cache
[debug] [youtube] Decrypted nsig Blli67YoiQhku_B9L => sAD6oAlo3FERxw
[youtube] kCnQ1Y6RZ-o: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] kCnQ1Y6RZ-o: Downloading 1 format(s): 136+251
test playlist
[download] Finished downloading playlist: test playlist
[download] Downloading item 2 of 2
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3: Downloading webpage
[youtube:tab] PLItuMrVtCQt2tknrMCQLSeS3NZr_R0yf3: Redownloading playlist API JSON with unavailable videos
[download] entry does not pass filter (playlist_title!='testplaylist'), skipping ..
[download] Finished downloading playlist: Max Muller - Playlists
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--print', 'playlist_title', '--match-filters', "playlist_title~='.*test.*'", 'https://www.youtube.com/@maxmuller8233/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.3-100.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.4 24 Oct 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.20, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@maxmuller8233/playlists
[youtube:tab] @maxmuller8233/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] entry does not pass filter (playlist_title~='.*test.*'), skipping ..
```
| bug,triage | low | Critical |
2,609,667,828 | ui | [bug]: Typo in Sidebar documentation | ### Describe the bug
I am almost sure there is a typo in the Sidebar documentation:
<img width="748" alt="Screenshot 2024-10-23 at 16 21 01" src="https://github.com/user-attachments/assets/e460f94f-ebc7-4264-b37f-5ff72309abd3">
Link: https://ui.shadcn.com/docs/components/sidebar#width
### Affected component/components
Sidebar
### How to reproduce
Go to https://ui.shadcn.com/docs/components/sidebar#width
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macOS, Arc.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,609,678,657 | vscode | Case mismatch in setBreakpoints requests in a single session |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.94.2
- OS Version: Windows 11
**Issue:**
In some cases, the `setBreakpoints` requests sent in the same session send the path with different cases, especially on case-insensitive file systems. This causes a broken breakpoint experience with some debuggers. Currently tested with [CodeLLDB](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb) and [android-debug](https://marketplace.visualstudio.com/items?itemName=nisargjhaveri.android-debug) on Windows. The issue is fairly easy to repro and reproduces across machines and projects.
**Debugging details:**
After some debugging, here is what I gathered.
The issue boils down to the two different paths taken by `getRawSource`. For a local file, if `this.sources` contains an entry for that URI, it directly returns `source.raw`. Otherwise, it returns name and path using canonical URI.
https://github.com/microsoft/vscode/blob/45a9f8902795ca9a10f411db90f690f67843d88d/src/vs/workbench/contrib/debug/browser/debugSession.ts#L1507-L1515
From what I saw, `this.sources` is populated when resolving frames on a breakpoint hit. The debugger sends frames with raw source info. That info is stored and used as-is once available.
The issue arises when the canonical path and the raw source reported by the debugger don't match, e.g. the canonical path is `<path>/Native-Lib.cpp` but the debugger reported `<path>/native-lib.cpp`. This means that until `this.sources` was populated for this URI, we sent `<path>/Native-Lib.cpp` when setting breakpoints. Later, we send `<path>/native-lib.cpp`. This mismatch causes lldb to treat the paths differently and duplicate breakpoints.
**Potential solution:**
- Instead of using `sources.raw` as-is as reported by the debugger, should we always use the canonical path when setting breakpoints?
- Is there anything else that can be done to prevent this?
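A minimal sketch of the first bullet — always resolving to one canonical casing before sending `setBreakpoints`, assuming a case-insensitive file system. The helper name and the "first spelling wins" policy are hypothetical, not VS Code's actual API:

```typescript
// Hypothetical sketch: pick one canonical spelling per path on a
// case-insensitive file system, so every setBreakpoints request in a
// session sends the same casing regardless of what the debugger reports.
const canonicalBySessionKey = new Map<string, string>();

function canonicalizeSourcePath(path: string, caseInsensitiveFs: boolean): string {
  if (!caseInsensitiveFs) {
    return path; // casing is significant; leave it alone
  }
  const key = path.toLowerCase();
  // The first spelling seen for this path wins for the rest of the
  // session, so a later debugger-reported spelling cannot diverge.
  const existing = canonicalBySessionKey.get(key);
  if (existing !== undefined) {
    return existing;
  }
  canonicalBySessionKey.set(key, path);
  return path;
}
```

With this policy, `<path>/Native-Lib.cpp` and `<path>/native-lib.cpp` would both resolve to whichever spelling the session saw first, avoiding the duplicate-breakpoint mismatch.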
| info-needed | low | Critical |
2,609,746,093 | flutter | [display_list] high overhead on DL paint conversion for very long picture scrolling. | Low priority, as I think this is an edge case. A large chunk of CPU time on the UI thread is spent ferrying Paint data. Seems like we could probably make this faster?
See: https://gist.github.com/jonahwilliams/ab8ae105a8950635502434434b891bd9 | P3,team-engine,triaged-engine | low | Minor |
2,609,787,672 | material-ui | [Select] calling `.focus()` on the ref does not focus the component | ### Steps to reproduce
1. Open https://mui.com/material-ui/react-select/
2. In the browser console, run `document.getElementById('demo-simple-select').focus()`
### Current behavior
If you call `.focus()` on any of the elements that are rendered by MuiSelect, or on the ref's element, it will not focus the select, or it will only visually focus it; pressing Space to open the menu will not work (and instead scrolls the page down).
### Expected behavior
That I can call `.focus()` on the ref or an element to focus the select, or that **there is an alternative way documented / implemented**.
There is nothing documented about this behavior in the docs.
### Context
Having a way to programmatically focus a select
### Your environment
Mui@6, Chrome
**Search keywords**: MuiSelect | component: select | low | Minor |
2,609,802,905 | pytorch | Cleanup the scaling logic in runtime.triton_heuristics.triton_config | ### 🐛 Describe the bug
In `runtime.triton_heuristics.triton_config`, we scale the passed-in XBLOCK, YBLOCK, and ZBLOCK values according to various rules:
- capping them with xnumel, ynumel, znumel
- scaling block sizes up when the numels are large
- scaling XBLOCK up when `min_elem_per_thread` requires it
- etc.
These scaling rules may cause issues. Here is one example: https://github.com/pytorch/pytorch/pull/138730
We should find time to clean them up.
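For context, a sketch of what two of these rules amount to. This is illustrative only — the function names are invented here, and the real `triton_config` applies more rules with interacting details, which is exactly what makes it hard to reason about:

```python
def next_power_of_2(n: int) -> int:
    """Smallest power of two >= n (assumes n >= 1)."""
    return 1 << (n - 1).bit_length()

def cap_block(block: int, numel: int) -> int:
    # Capping rule: a block never needs to be larger than the
    # power-of-2-rounded number of elements along that dimension.
    return min(block, next_power_of_2(numel))

def scale_up_xblock(xblock: int, num_warps: int, min_elem_per_thread: int) -> int:
    # min_elem_per_thread rule: grow XBLOCK until every thread has at
    # least min_elem_per_thread elements to process.
    threads = num_warps * 32  # 32 threads per warp on NVIDIA GPUs
    while xblock < threads * min_elem_per_thread:
        xblock *= 2
    return xblock
```

Because each rule rewrites the block sizes the caller asked for, a config passed in is not necessarily the config that runs — the source of the surprises linked above.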
### Error logs
_No response_
### Minified repro
_No response_
### Versions
.
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,609,807,950 | kubernetes | Tracking: archive cloud-provider-sample repository and remove references | This issue is being created to help track the archiving of the github.com/kubernetes/cloud-provider-sample repository. During discussion at the [23 October SIG Cloud Provider office hours](https://www.youtube.com/watch?v=aXFkqfRMqd0), we decided that we would like to move forward with archiving this repository to reduce confusion about external cloud controller manager development.
The cloud-provider-sample repository was created 6 years ago with the intention of containing a sample implementation of the external cloud controller manager. Since then, the repository has not received any updates and a new sample has been created in the cloud-provider repository (see https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/cloud-provider/sample)
As we would like to close the sample repository, and we do not feel it is appropriate to carry this issue on the cloud-provider repository, we are opening this issue to help us track the progress.
TODO
- [x] Update the cloud-provider-sample readme to indicate archive status (https://github.com/kubernetes/cloud-provider-sample/pull/15)
- [ ] Issue created on kubernetes/org to archive the repository
- [ ] Update readme in new sample directory to indicate relationship with staging repository
- [ ] Update readme in cloud-provider repository to highlight the sample location
- [ ] cloud-provider-sample archived
- [ ] Email sent to kubernetes-dev mailing list
/sig cloud-provider
/kind cleanup | kind/cleanup,sig/cloud-provider,triage/accepted | low | Minor |
2,609,860,581 | flutter | There should be clarity and guidance on how to address disposal of objects in exceptional situations. | Exceptional code paths should be finalized properly and should not result in leaking objects.
Some Flutter tests are opted out from leak tracking because of exceptions,
and it is not clear whether this is a valid opt-out.
The leak tracker could perhaps detect that an exception happened and skip flagging undisposed objects in valid cases.
Search for `// leaking by design because of exception` in code of flutter framework. | framework,P3,team-framework,triaged-framework,a: leak tracking | low | Minor |
2,609,884,472 | next.js | Dynamic segments which contain dashes seem to be buggy (vercel only?) | ### Link to the code that reproduces this issue
https://github.com/julianbenegas/nextjs-dynamic-segments-bug
### To Reproduce
1. Deploy the project with Vercel, OR try it out in my deployment: https://rewrite-bug.vercel.app
2. Navigate to /blog/some-slug (i'm just json stringifying the awaited params and rendering them)
3. See that `some-slug` is not what's shown, but rather `[doesnt-work]` encoded
### Current vs. Expected behavior
I expect the page to show `{"doesnt-work":"some-slug"}` but instead it shows `{"doesnt-work":"%5Bdoesnt-work%5D"}`
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:37:25 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 12
Binaries:
Node: 20.12.2
npm: 10.5.0
Yarn: 4.1.1
pnpm: 8.14.1
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: 15.0.1
react: 19.0.0-rc-69d4b800-20241021
react-dom: 19.0.0-rc-69d4b800-20241021
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
if i rename the folder to `[slug]` or anything that doesn't contain a dash, this seems to be resolved. So, the issue seems to be related to dashes within dynamic segments.
if i pre-generate static params, those will work; for example: `https://rewrite-bug.vercel.app/blog/pregenerated` | bug,Navigation | low | Critical |
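A sketch of the pre-generation workaround noted in the Next.js report above (the file path and slug here are illustrative, not taken from the repro repository):

```typescript
// app/blog/[doesnt-work]/page.tsx (illustrative path)
// Pre-generating params sidesteps the dashed-dynamic-segment bug,
// matching the observation that pregenerated routes resolve correctly.
export function generateStaticParams(): Array<Record<string, string>> {
  return [{ 'doesnt-work': 'pregenerated' }];
}
```

Routes returned here are rendered at build time, so the broken runtime param resolution is never exercised for them.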
2,609,902,914 | react | [Compiler Bug]: False positive calling WebGLContext.useProgram | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgBMEAzAQygBsCSoA7OXASwjvwFkBPAQU0wAoAlPmAAdNvihgEAURIkEjfkPwBeAHwjx+HfjiswBAOaU1RCHCgBbBHVwA6IwlwzKCG3YBCnAJKF+AORWnAC0cGR0AG5kYAGCjs4Awqy4GLiBAO4IAEYmAExxANzauvp0hvjBAAowEEYwZFZmJvZwMAhkqTV1DVZCxRI6LVII3fWN-NW141aCAzoAvgA0+ADaALpz4gviIAtAA
### Repro steps
Call `gl.useProgram` within an effect:
```ts
export default function MyApp() {
useEffect(() => {
const gl = document.getElementById('my-canvas').getContext('webgl2');
const myProgram = gl.createProgram();
gl.useProgram(myProgram); // this should not error; useProgram is not a React hook
}, []);
}
```
`WebGLRenderingContext.useProgram` is not a hook, and is part of a built-in Web API
### How often does this bug happen?
Every time
### What version of React are you using?
19.0.0-rc-65a56d0e-20241020
### What version of React Compiler are you using?
19.0.0-rc-65a56d0e-20241020 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,609,934,542 | pytorch | CI/CD: Figure out what to do with split build | ### 🐛 Describe the bug
If we don't plan to ship it to PyPI, it should be disabled.
Disabling workflows for now
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,609,936,034 | godot | Xcode iOS Export godot_path returning Invalid project path specified: "project_name", aborting. | ### Tested versions
Reproducible in 4.3.Stable.Official
### System information
macOS Sonoma, M2 Pro 16gb Ram
### Issue description
Followed the tutorial outlined here: https://docs.godotengine.org/en/stable/tutorials/export/exporting_for_ios.html and the build succeeds, but then I get this error:
```
Invalid project path specified: "project_name", aborting.
```
### Steps to reproduce
Use Godot 4.3 Stable on MacOS Sonoma with Xcode 16
Followed the tutorial outlined here: https://docs.godotengine.org/en/stable/tutorials/export/exporting_for_ios.html.
In `Supporting Files` > `export_project.info-plist`, added `godot_path` as the key and the exact name of the referenced folder, e.g. `project_name`, as the value.
Aware that the project name should be different from the exported project file/folder.
### Minimal reproduction project (MRP)
N/A | bug,platform:ios,needs testing,topic:export | medium | Critical |
2,609,948,794 | godot | Extreme jitter and drift in moving tilemap platform | ### Tested versions
- Reproducible in 4.3.stable
### System information
Debian GNU/Linux 12.7
### Issue description
A CharacterBody2D over a TilemapLayer moved by a Path2D/PathFollow2D results in movement jitter. It also drifts from the platform if the scene is repeatedly paused/resumed.
I've used stretch mode viewport and snap transforms to pixel because I want to use it for a pixel perfect game but using stretch mode none and disabling snap transforms to pixel gives the same result.
### Steps to reproduce
Load the MRP and start it. Press Space or Enter repeatedly to pause/unpause the scene. The jitter will turn into drift until the CharacterBody2D falls off the platform.
### Minimal reproduction project (MRP)
[testplatform.zip](https://github.com/user-attachments/files/17498701/testplatform.zip)
| bug,topic:physics,topic:2d | low | Minor |
2,609,950,022 | rust | Support for const string interpolation into inline assembly | Related to [this comment](https://github.com/rust-lang/rust/issues/128464#issuecomment-2417528415).
Maybe related to #93332
This feature request targets the inline assembly macro `asm!` and globally scope assembly `global_asm!` to support direct string interpolation into the assembly template.
The semantics work much like `format!`, but in a narrower sense: only constant strings are supported. The proposed macro keyword is `interpolate $expr`, where `$expr` is a const-evaluatable expression that yields a `&'static str` constant value.
An example of how it would work is as follows.
```rust
trait Helper {
const SRC: &'static str;
}
fn make_it_work<H: Helper>(h: &H, x: i64) {
asm!(
"mov {0}, {1}",
in(reg) x,
interpolate H::SRC
);
}
struct H;
impl Helper for H {
const SRC: &'static str = "MAGIC";
}
fn invoke() {
make_it_work(&H, 42);
}
```
The one and only instantiation of the `asm!` macro, when completely expanded through codegen, might yield the following assembly line.
```
mov rcx, MAGIC
```
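For comparison, today `concat!` can splice string *literals* into an `asm!` template, but it cannot accept an associated `const` such as `H::SRC` — that is the gap `interpolate` would close. A sketch of the literal-only status quo, using the immediate `"42"` in place of the hypothetical `MAGIC` symbol so it assembles on a plain x86-64 host:

```rust
use std::arch::asm;

// `concat!` works today because every piece is a literal known at
// macro-expansion time; an associated const like `Helper::SRC`
// cannot be passed here, which motivates `interpolate`.
#[cfg(target_arch = "x86_64")]
macro_rules! mov_imm {
    ($src:literal) => {{
        let out: u64;
        unsafe { asm!(concat!("mov {0}, ", $src), out(reg) out) };
        out
    }};
}

#[cfg(target_arch = "x86_64")]
fn demo() -> u64 {
    mov_imm!("42")
}

#[cfg(not(target_arch = "x86_64"))]
fn demo() -> u64 {
    42 // fallback so the sketch runs on other architectures
}

fn main() {
    println!("{}", demo());
}
```

The `literal` fragment restriction is enforced by `macro_rules!` itself, so no amount of macro plumbing lets a trait-provided constant flow into the template today.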
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"dingxiangfei2009"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | C-enhancement,A-inline-assembly,T-lang,T-compiler | low | Major |
2,609,970,098 | go | proposal: testing: ability to tell if test is running in parallel | ### Proposal Details
Unit tests can indicate their ability to be run in parallel by making the opt-in call `t.Parallel()`.
However, sometimes a test might opt in but then mistakenly change some global state, which could race with something else. In this case, the parallel test should actually be a serial one. In a large codebase it can be tricky to protect against this happening.
For example,
```go
func TestShouldNotBeParallel(t *testing.T) {
t.Parallel()
// ...
changeGlobalState(t) // a mistake, given above t.Parallel()
// ...
}
func changeGlobalState(t *testing.T) {
// proposal: could we check `t` here to determine that we're running in parallel when we shouldn't be, and error out?
if t.IsRunningInParallel() {
t.Fatal("changeGlobalState called from test with t.Parallel()")
}
// ...
}
```
The proposal here is a function like `t.IsRunningInParallel()` that could return true in the case that the current test is running in a parallel context. I don't **think** there's a way to check this currently? | Proposal | low | Critical |
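Until such an API exists, one runtime guard available today relies on documented behavior of `t.Setenv`, which panics when called after `t.Parallel()`. A sketch (the helper name and env key are illustrative; outside a real `go test` run only the non-parallel path can be demonstrated):

```go
package main

import (
	"fmt"
	"os"
	"testing"
)

// mustBeSerial aborts (by panicking via t.Setenv) when the calling test
// has called t.Parallel(): t.Setenv is documented to panic in parallel
// tests, so it can double as a serial-only guard today.
func mustBeSerial(t *testing.T) {
	t.Helper()
	t.Setenv("X_SERIAL_GUARD", "1") // hypothetical guard key
}

// demo exercises the non-parallel path; in a parallel test the call
// above would panic instead of setting the variable.
func demo() string {
	mustBeSerial(new(testing.T))
	return os.Getenv("X_SERIAL_GUARD")
}

func main() {
	fmt.Println(demo())
}
```

The drawback is that the guard actually mutates the environment as a side effect, which is why a dedicated query like `t.IsRunningInParallel()` would be cleaner.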
2,609,990,399 | pytorch | Accuracy issues (NANs) in torch.sdpa backward on ROCm | ### 🐛 Describe the bug
On gfx942 GPUs, running SD3 ControlNet, we observe NANs in the gradients after a few hundred iterations.
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.4 (main, Jun 8 2024, 18:29:57) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-triton-rocm==3.1.0
[pip3] torch==2.5.0+rocm6.2
[pip3] torchvision==0.20.0+rocm6.2
[conda] Could not collect
```
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: rocm,triaged,module: sdpa | low | Critical |
2,609,994,060 | PowerToys | Shift Key is blocked randomly | ### Microsoft PowerToys version
0.85.1
### Installation method
Scoop
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
For some reason, I can no longer use the Shift key normally when PowerToys is open. Please note that this does not happen right after PowerToys is up and running, but occurs after a while.
I cannot use any shortcuts that include the Shift key (like <kbd>Ctrl + Shift + T</kbd>), nor can I type uppercase letters via <kbd>Shift + \<Letter\></kbd>. However, pressing the Shift key five times in a row will open the Sticky Keys dialog, which means that the key isn't fully unresponsive, but is left with limited functionality.
I have no re-mappings in the PowerToys' Keyboard Manager that includes the Shift key.
This has not occurred to me before v0.85.1. It was working fine until I upgraded from v0.83.0 to 0.85.1 via Scoop.
I came to the conclusion that it was PowerToys that was causing this unusual behaviour, because when I manually quit PowerToys, everything went back to normal and the Shift key was working fine.
I have no clue why this would happen. Perhaps some element of PowerToys is interfering with a Windows keyboard setting?
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Major |
2,610,000,680 | node | Fatal error on setting memory permissions (`Fatal error... Check failed: 12 == (*__errno_location ())`) | ### Version
v23.0.0
### Platform
```text
Linux ... 6.6.13-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Jan 20 18:03:28 UTC 2024 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
By repeatedly running Node, as simply installing dependencies of a project through the command `node $(which npm) install` in a while loop (see the script used for testing [here](https://gitlab.cern.ch/gdeblasi/node-v23-v8fatal/-/blob/wip-default-runners/.gitlab-ci.yml?ref_type=heads#L11)).
For instance:
```
RETRIES=100
COUNT=0
NPM=$(which npm)
while [ $COUNT -lt $RETRIES ]; do
node $NPM install
COUNT=$((COUNT + 1))
done
```
### How often does it reproduce? Is there a required condition?
Very often on specific platforms such as the one shown above.
### What is the expected behavior? Why is that the expected behavior?
Node should not fail and crash.
### What do you see instead?
At some point Node crashes giving the following error, extracted from [here](https://gitlab.cern.ch/gdeblasi/node-v23-v8fatal/-/jobs/45065031#L572):
```
# Fatal error in , line 0
# Check failed: 12 == (*__errno_location ()).
#
#
#
#FailureMessage Object: 0x7ffc54684900
----- Native stack trace -----
1: 0x107e621 [node]
2: 0x2aba423 V8_Fatal(char const*, ...) [node]
3: 0x2ac5066 v8::base::OS::SetPermissions(void*, unsigned long, v8::base::OS::MemoryPermission) [node]
4: 0x14c1bfc v8::internal::CodeRange::InitReservation(v8::PageAllocator*, unsigned long) [node]
5: 0x155982f v8::internal::Heap::SetUp(v8::internal::LocalHeap*) [node]
6: 0x149ac92 v8::internal::Isolate::Init(v8::internal::SnapshotData*, v8::internal::SnapshotData*, v8::internal::SnapshotData*, bool) [node]
7: 0x19ee994 v8::internal::Snapshot::Initialize(v8::internal::Isolate*) [node]
8: 0x1315af6 v8::Isolate::Initialize(v8::Isolate*, v8::Isolate::CreateParams const&) [node]
9: 0xed9a18 node::NewIsolate(v8::Isolate::CreateParams*, uv_loop_s*, node::MultiIsolatePlatform*, node::SnapshotData const*, node::IsolateSettings const&) [node]
10: 0x1043a6d node::NodeMainInstance::NodeMainInstance(node::SnapshotData const*, uv_loop_s*, node::MultiIsolatePlatform*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) [node]
11: 0xf95806 node::Start(int, char**) [node]
12: 0x7feb3293a24a [/lib/x86_64-linux-gnu/libc.so.6]
13: 0x7feb3293a305 __libc_start_main [/lib/x86_64-linux-gnu/libc.so.6]
14: 0xecff4e _start [node]
/scripts-195672-45065031/step_script: line 172: 884 Trace/breakpoint trap (core dumped) node $NPM install
```
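For reference, the `Check failed: 12 == (*__errno_location ())` line compares against errno 12, which is `ENOMEM` on Linux — i.e. the `SetPermissions` (`mprotect`/`mmap`) call failed for lack of memory-mapping resources. A quick sketch to confirm the errno name and inspect one limit that can trigger it, assuming a Linux host (whether `vm.max_map_count` is the actual cause here is an assumption, not a confirmed diagnosis):

```shell
# errno 12 on Linux is ENOMEM ("Cannot allocate memory")
python3 -c 'import errno; print(errno.errorcode[12])'

# one plausible culprit when mmap/mprotect returns ENOMEM: the
# per-process cap on the number of memory mappings
cat /proc/sys/vm/max_map_count
```

If the mapping count is the bottleneck, raising it (e.g. via `sysctl -w vm.max_map_count=...`) on the affected runners would be a cheap experiment.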
### Additional information
Similar issue https://github.com/nodejs/help/issues/4465.
The output of `strace` can be found [here](https://gitlab.cern.ch/gdeblasi/node-v23-v8fatal/-/jobs/45065894/artifacts/raw/public/strace.log). | v8 engine | low | Critical |
2,610,021,282 | godot | Keys that may have lengths in the Animataion editor are not well displayed if the length is too short | ### Tested versions
- Reproducible in: 4.3.stable, v4.4.dev3
### System information
Debian Linux/X11 12.7, xfwm4 4.18.0 with built-in compositor, AMD RX 550, Compatibility renderer
### Issue description
when animating the property ``AudioStreamPlayer:playing``, the checkboxes are only visible when the value is set to ``false``. if the value is ``true``, the checkbox is invisible — and it cannot be clicked: it must be drag‑selected
> 
> Animation panel showing an animation with the tracks ``AudioStreamPlayer:playing`` and ``ColorRect:visible`` being animated identically.
> the checkboxes on the ``ColorRect:visible`` track look and act normally,
> only the checkboxes on the ``AudioStreamPlayer:playing`` track are bugged
### Steps to reproduce
1. create a new Godot project
2. create a scene with an AnimationPlayer and an AudioStreamPlayer
3. add a sample to the AudioStreamPlayer
4. animate the proprety ``AudioStreamPlayer:playing`` using the AnimationPlayer
### Minimal reproduction project (MRP)
[2024-10-24_music-animation-bug.zip](https://github.com/user-attachments/files/17499031/2024-10-24_music-animation-bug.zip) (5.4 KiB before extracting and opening in Godot, 4.1 MiB after)
(credit: sample is located under ``shapes/micro.wav`` in [LMMS](https://lmms.io) built‑in samples) | discussion,topic:editor,topic:animation | low | Critical |
2,610,038,053 | node | Coverage workflows fail on v22.x/v22.x-staging | Starting with Node.js 22.8.0 (https://github.com/nodejs/node/commit/78ee90e5d90f4cd3d1921bd96cbac829fee3d7f0) the "without intl" coverage workflow started failing due to ["does not meet the global threshold"](https://github.com/nodejs/node/actions/runs/10675930420/job/29588689452).
From Node.js 22.9.0 (https://github.com/nodejs/node/commit/4631be0311fbde7b77723757e15d025727399e63) both coverage workflows (with and without intl) fail due to the same reason (not meeting the global threshold). The situation persists through Node.js 22.10.0 and for the [proposal for Node.js 22.11.0](https://github.com/nodejs/node/pull/55504#issuecomment-2433757600). (I'm going to ignore this for Node.js 22.11.0 as the release is intentionally not including changes beyond the metadata updates to mark the release as LTS.)
AFAIK the coverage workflows are passing on Node.js 23 and `main` and were passing on Node.js 22.7.0 (https://github.com/nodejs/node/commit/65eff1eb19a6d8e17435bbc4147ac4535e81abb4), so the question is what has/hasn't landed on v22.x-staging to cause the discrepancy? If there isn't an easy way to get the workflow passing again, can we lower the threshold for Node.js 22 only, or even disable the coverage workflows for 22?
cc @nodejs/releasers | meta,release-agenda,coverage,v22.x | low | Major |
2,610,080,927 | pytorch | [funcol] functional collectives are 67% slower than torch.distributed collectives | ### 🐛 Describe the bug
Hi torch distributed team!
As we discussed in PTC, we found functional collectives are 34%~67% slower than c10d collectives due to the heavy CPU overhead.
To be specific, we benchmarked functional collectives (the big three: all_gather, reduce_scatter, all_reduce) in torch 2.4 and compared them with the c10d collectives. Here is the summary:
| collective | c10d time | funcol time |
|---------------|-----------|--------------|
| AllGather | 131us | 205us (156%) |
| ReduceScatter | 122us | 164us (134%) |
| AllReduce | 89us | 149us (167%) |
We believe the overhead of funcol comes from extra copies, extra wrapping, and extra aten ops due to the tracing requirement for torch.compile.
Here are the profiles:
- AllGather
#### torch.distributed.all_gather (131us total, including creating the empty output tensor)

#### funcol.all_gather (205us)

- ReduceScatter
#### torch.distributed.reduce_scatter

#### funcol.reduce_scatter

- AllReduce
#### torch.distributed.all_reduce

#### funcol.all_reduce

We can reproduce the trace with the following code. Thanks : )
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
import torch.distributed._functional_collectives as funcol
world_size = int(os.environ["WORLD_SIZE"])
class TorchCollectiveProfile:
@property
def world_size(self):
return world_size
@property
def device_type(self):
return "cuda"
def profile_funcol_all_gather(self):
device_mesh = init_device_mesh(self.device_type, (self.world_size,))
tensor = torch.randn(128, 128).cuda()
torch.cuda.synchronize()
dist.barrier()
if device_mesh.get_rank() == 0:
with torch.profiler.profile(
schedule=torch.profiler.schedule(wait=5, warmup=10, active=5),
on_trace_ready=lambda p: p.export_chrome_trace(
"test_profile_funcol_all_gather_" + str(p.step_num) + ".json"
),
with_stack=True,
) as p:
for _ in range(20):
_ = funcol.all_gather_tensor(tensor, 1, device_mesh)
p.step()
else:
for _ in range(20):
_ = funcol.all_gather_tensor(tensor, 1, device_mesh)
dist.barrier()
torch.cuda.synchronize()
def profile_c10d_all_gather(self):
device_mesh = init_device_mesh(self.device_type, (self.world_size,))
tensor = torch.randn(128, 128).cuda()
def c10d_call():
output_tensor = torch.empty(128, 128 * self.world_size, dtype=tensor.dtype, device=tensor.device)
_ = dist.all_gather_into_tensor(output_tensor, tensor, device_mesh.get_group())
torch.cuda.synchronize()
dist.barrier()
if device_mesh.get_rank() == 0:
with torch.profiler.profile(
schedule=torch.profiler.schedule(wait=5, warmup=10, active=5),
on_trace_ready=lambda p: p.export_chrome_trace(
"test_profile_c10d_all_gather_" + str(p.step_num) + ".json"
),
with_stack=True,
) as p:
for _ in range(20):
c10d_call()
p.step()
else:
for _ in range(20):
c10d_call()
dist.barrier()
torch.cuda.synchronize()
```
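The numbers above come down to host-side latency per call. As a torch-free illustration of how such per-call CPU overhead can be micro-timed (the helper name and iteration counts below are my own, not from the repro):

```python
import time

def cpu_overhead_us(fn, iters=1000, warmup=100):
    """Average host-side latency of fn() in microseconds."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e6

# Each extra layer of indirection adds host-side latency, which is the
# kind of per-call overhead the profiles above attribute to funcol.
def base():
    return sum(range(10))

def wrapped():
    # simulate one extra wrapping layer around the same work
    return base()

print(cpu_overhead_us(base) > 0)
print(cpu_overhead_us(wrapped) > 0)
```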
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.15.120.bsk.2-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-40GB
GPU 1: NVIDIA A800-SXM4-40GB
GPU 2: NVIDIA A800-SXM4-40GB
GPU 3: NVIDIA A800-SXM4-40GB
GPU 4: NVIDIA A800-SXM4-40GB
GPU 5: NVIDIA A800-SXM4-40GB
GPU 6: NVIDIA A800-SXM4-40GB
GPU 7: NVIDIA A800-SXM4-40GB
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 120
On-line CPU(s) list: 0-119
Thread(s) per core: 2
Core(s) per socket: 30
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.635
BogoMIPS: 4589.27
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.8 MiB
L1i cache: 1.9 MiB
L2 cache: 75 MiB
L3 cache: 108 MiB
NUMA node0 CPU(s): 0-59
NUMA node1 CPU(s): 60-119
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] Could not collect
cc @msaroufim @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | module: performance,oncall: distributed,triaged | low | Critical |
2,610,081,870 | pytorch | [Inductor] Regression in test_comprehensive_nn_functional_max_pool2d_cuda from triton | ### 🐛 Describe the bug
```
root@3b81e04520e3:/workspace# python /opt/pytorch/pytorch/test/inductor/test_torchinductor_opinfo.py -v -k test_comprehensive_nn_functional_max_pool2d_cuda test_comprehensive_nn_functional_max_pool2d_cuda_float16 (__main__.TestInductorOpInfoCUDA.test_comprehensive_nn_functional_max_pool2d_cuda_float16) ... ERROR
test_comprehensive_nn_functional_max_pool2d_cuda_float32 (__main__.TestInductorOpInfoCUDA.test_comprehensive_nn_functional_max_pool2d_cuda_float32) ... ERROR
test_comprehensive_nn_functional_max_pool2d_cuda_float64 (__main__.TestInductorOpInfoCUDA.test_comprehensive_nn_functional_max_pool2d_cuda_float64) ... ERROR
...
Mismatched elements: 4 / 36 (11.1%)
Greatest absolute difference: 0.372314453125 at index (0, 0, 2, 5) (up to 1e-05 allowed)
Greatest relative difference: 1.0 at index (0, 0, 0, 5) (up to 0.001 allowed)
...
Mismatched elements: 4 / 36 (11.1%)
Greatest absolute difference: 0.37210461497306824 at index (0, 0, 2, 5) (up to 1.5e-05 allowed)
Greatest relative difference: 1.0 at index (0, 0, 0, 5) (up to 1.3e-05 allowed)
...
Mismatched elements: 4 / 36 (11.1%)
Greatest absolute difference: 0.37626347335627985 at index (0, 1, 0, 5) (up to 1e-07 allowed)
Greatest relative difference: 1.0 at index (0, 0, 1, 5) (up to 1e-07 allowed)
```
Regression to numerical mismatches in test_comprehensive_nn_functional_max_pool2d_cuda from https://github.com/pytorch/pytorch/issues/131072 with PyTorch built with the current pinned triton commit [cf34004b8a67d290a962da166f5aa2fc66751326](https://github.com/triton-lang/triton/commits/cf34004b8a67d290a962da166f5aa2fc66751326) from Sep 24. Manually rebuilding triton wheel with triton commit [72734f086b3a70a0399b7e9d21b83d5d8dc7e1d5](https://github.com/triton-lang/triton/commits/72734f086b3a70a0399b7e9d21b83d5d8dc7e1d5) from Jul 30 as mentioned in https://github.com/pytorch/pytorch/issues/131072#issuecomment-2261665136 fixes the issue.
cc @ezyang @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov @eqy @nWEIdia
### Versions
torch: https://github.com/pytorch/pytorch/commit/df5bbc09d191fff3bdb592c184176e84669a7157
triton: https://github.com/triton-lang/triton/commits/cf34004b8a67d290a962da166f5aa2fc66751326 | triaged,oncall: pt2,upstream triton | low | Critical |
2,610,083,358 | next.js | Playwright could not find tests with Nextjs 15 experimental testmode | ### Link to the code that reproduces this issue
https://github.com/KagamiChan/next15-playwright-test-not-found
### To Reproduce
1. run `npm run build`
2. run `npx playwright test`
3. playwright reports "No tests found"
```
next15-playwright-test-not-found> npx playwright test
Error: No tests found
To open last HTML report run:
npx playwright show-report
```
### Current vs. Expected behavior
playwright should detect the test cases and run it.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 64618
Available CPU cores: 32
Binaries:
Node: 21.7.1
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: 15.0.1
react: 19.0.0-rc-69d4b800-20241021
react-dom: 19.0.0-rc-69d4b800-20241021
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Testing
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
I also tried with pnpm and the result is the same, so it might not be an issue of package manager
The setup is following the instructions here: https://github.com/vercel/next.js/blob/canary/packages/next/src/experimental/testmode/playwright/README.md
This used to work with v14 | bug,Testing | low | Critical |
2,610,086,610 | ui | [bug]: Data Table Sorting Highlight | ### Describe the bug
The code highlight in the data table sorting docs does not extend to line 29, even though it should. This caused me issues when following the (excellent) data table guide.
### Affected component/components
Data Table
### How to reproduce
1. Go to [data table sorting](https://ui.shadcn.com/docs/components/data-table#sorting)
2. Navigate to the 29th line of the first step.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Mac, Arc
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,610,086,811 | deno | request: option to specify target system in `deno install` | Split from https://github.com/denoland/deno/issues/26180#issuecomment-2433670599.
Currently, `deno install` only downloads packages for the current system. This is good, as we would be wasting space (and bandwidth) downloading resources for systems that aren't in use. For instance, this way `esbuild` only pulls the binary for the system you're running, instead of 25 binaries to cover every possible system.
If you're cross-compiling for `deno compile`, however, then it would be nice to be able to `deno install` the packages for the target system.
To support that, we could add a `--target` option to `deno install`, mirroring `deno compile`. This would allow you to set the target system to cache packages for.
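As a sketch of how this might look on the command line (note the `--target` flag on `deno install` below is hypothetical — only `deno compile` accepts it today):

```
# hypothetical: cache npm packages for the cross-compilation target
deno install --target x86_64-unknown-linux-gnu
deno compile --target x86_64-unknown-linux-gnu main.ts
```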
The only caveat I can think of is that it might interact weirdly with postinstall scripts. | feat,install | low | Minor |
2,610,101,375 | node | console.group and console.groupCollapsed should write to stderr | ### Version
all
### Platform
```text
all
```
### Subsystem
console
### What steps will reproduce the bug?
Given the program `console-group.js`
```js
console.group('group');
console.groupCollapsed('group collapsed');
```
run
```sh
node console-group.js 2>/dev/null
```
### How often does it reproduce? Is there a required condition?
consistently, all versions
### What is the expected behavior? Why is that the expected behavior?
The above command should produce no output. This prevents a diagnostic from inadvertently interleaving text in parsable stdout.
One could argue the same should apply to `console.log` and `console.info` and that all machine readable program output should be written to `process.stdout` explicitly, but that ship has sailed.
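As a language-agnostic illustration of the requested contract (Python is used here purely for demonstration; this is not Node's actual `console` implementation): diagnostics go to stderr so stdout stays machine-parsable.

```python
import io
import sys
from contextlib import redirect_stdout, redirect_stderr

def group(label):
    # Group labels are diagnostics, so write them to stderr, not stdout.
    print(label, file=sys.stderr)

out, err = io.StringIO(), io.StringIO()
with redirect_stdout(out), redirect_stderr(err):
    group("group")
    group("group collapsed")

print(out.getvalue() == "")  # stdout stays empty and parsable
```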
### What do you see instead?
Send group labels to stderr.
### Additional information
_No response_ | wontfix,console | low | Critical |
2,610,101,573 | vscode | Add line number interval setting | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
As described in #37120, I would like to request to add a line number interval setting.
User alexdima suggested that there was too little interest in the feature by looking at a number of vote on another feature which is the base feature ( that sticks the line number interval at 10: #36981). I believe this most probably understates the number of silent users that might find this hard-coded 10 unadjusted to their workspace preferences, but have better things to do than report the feature to change it. | feature-request | low | Major |
2,610,112,409 | flutter | [web] Remove Clipboard API fallback | The web engine has some logic to be able to copy-paste content from a Flutter app, which is using a now outdated/deprecated API ([`document.execCommand`](https://developer.mozilla.org/en-US/docs/Web/API/Document/execCommand)) as a [fallback](https://github.com/flutter/engine/blob/0b56cb8de79e28c4bb58e98129011ca7e684e12e/lib/web_ui/lib/src/engine/clipboard.dart#L225-L233) for browsers that didn't implement the standard API.
We only use the following methods from the standard Clipboard API:
* [writeText](https://developer.mozilla.org/en-US/docs/Web/API/Clipboard/writeText), to put content into the users' clipboard (widely available, since Safari 13.4 (~March 2020))
* [readText](https://developer.mozilla.org/en-US/docs/Web/API/Clipboard/readText), to read content from the users' clipboard (widely available, since Firefox 125 (~April 2024))
It seems that *soon* we'll be able to remove the `execCommand` fallback, and all the scaffolding to make it work.
See `ExecCommand*` strategy classes here:
* https://github.com/flutter/engine/blob/0b56cb8de79e28c4bb58e98129011ca7e684e12e/lib/web_ui/lib/src/engine/clipboard.dart
(PS: We should also remove the `execCommand` definition from the engine's `dom.dart`!)
---
Closes: https://github.com/flutter/flutter/issues/48581 | engine,platform-web,P2,c: tech-debt,team-web,triaged-web | low | Minor |
2,610,113,089 | langchain | DallEAPIWrapper not support base_url setting which supported by OpenAI sdk. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
OpenAI sdk support base_url:
```
client = OpenAI(base_url=URL)
response = client.images.generate(
```
but DallEAPIWrapper not support base_url.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
OpenAI sdk support base_url:
```
client = OpenAI(base_url=URL)
response = client.images.generate(
```
but DallEAPIWrapper not support base_url.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:35:10 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6031
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.129
> langchain_anthropic: 0.2.3
> langchain_huggingface: 0.1.0
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.3
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.39
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> anthropic: 0.36.2
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.27.2
> huggingface-hub: 0.25.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> langgraph-sdk: 0.1.33
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.52.1
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> sentence-transformers: 3.2.1
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> tiktoken: 0.8.0
> tokenizers: 0.20.1
> transformers: 4.45.2
> typing-extensions: 4.12.2 | stale | low | Critical |
2,610,133,522 | excalidraw | Setting the default save path in the settings | Setting the default save path in the settings.
Or selecting the path when saving and remembering it for future saves.
The current version saves only to the default Download folder mixed with other files. | enhancement | low | Minor |
2,610,182,352 | rust | Higher-ranked trait bounds in function signature incorrectly require bounds on the impl | ### Description:
When using higher-ranked trait bounds (HRTBs) in a method signature, the compiler throws an error suggesting that the bounds should be moved to the impl block, even though the function-specific bound should be sufficient.
### Expected behavior:
The function-specific bound should be enough, and the implementation should compile without moving the bound to the impl block.
### Actual behavior:
The compiler throws an error, requiring the bound to be placed on the impl block instead of just the method, even though the method-specific bound should work.
### Example code
```rust
impl<T: Base> Base for Wrapper<T> {
    type E = T::E;
}

impl<T: Trait<U>, U: Base> Trait<U> for Wrapper<T> {
    fn requires_ref_add_and_assign(&self, u: &U)
    where
        for<'a> U::E: AddAssign<<&'a T::E as Add<&'a U::E>>::Output>,
        for<'a> &'a T::E: Add<&'a U::E>,
    {
    }
}

struct Wrapper<T: Base>(T);

trait Base {
    type E;
}

trait Trait<U: Base>: Base {
    fn requires_ref_add_and_assign(&self, u: &U)
    where
        for<'a> U::E: AddAssign<<&'a Self::E as Add<&'a U::E>>::Output>,
        for<'a> &'a Self::E: Add<&'a U::E>;
}

use std::ops::{Add, AddAssign};
```
### Compiler output
```
error[E0277]: cannot add `&'a <U as Base>::E` to `&'a <T as Base>::E`
--> src/main.rs:12:2
|
12 | fn requires_add(&self,u:&U) where for<'a>U::E:AddAssign<<&'a T::E as Add<&'a U::E>>::Output>,for<'a>&'a T::E:Add<&'a U::E>{
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no implementation for `&'a <T as Base>::E + &'a <U as Base>::E`
|
= help: the trait `for<'a> Add<&'a <U as Base>::E>` is not implemented for `&'a <T as Base>::E`
help: consider introducing a `where` clause, but there might be an alternative better way to express this requirement
|
11 | impl <T:Trait<U>,U:Base>Trait<U>for Wrapper<T> where for<'a> &'a <T as Base>::E: Add<&'a <U as Base>::E>{
| +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
```
### Workaround:
Moving the bounds to the impl block works, but function-specific bounds should work without requiring this. Since I can't do that in my actual use case, I've made alternative operation traits.
### Meta
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06) (Arch Linux rust 1:1.80.1-1)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.8
```
| A-trait-system,A-associated-items,C-bug,T-types,fixed-by-next-solver,A-higher-ranked | low | Critical |
2,610,189,617 | tauri | [feat] IOS/Android SafeArea control | ### Describe the problem
Hello everyone, I've encountered a problem while developing an iOS app: an annoying white title bar appears at the top of the screen, and on both the left and right sides in landscape mode.

Most likely, the issue is related to this: https://developer.apple.com/documentation/swiftui/view/ignoressafearea(_:edges:)
If that's the case, it might be useful to implement a feature in Tauri that allows control over this parameter.
### Describe the solution you'd like
SwiftUI offers a modifier to disable it; it would be nice to have something similar in Tauri.
```swift
struct ContentView: View {
var body: some View {
NavigationView {
ZStack {
LinearGradient(
colors: [.red, .yellow],
startPoint: .topLeading,
endPoint: .bottomTrailing
)
.ignoresSafeArea()
.navigationTitle("Hello World")
}
}
}
}
```
<div align="center">
| Before `.ignoresSafeArea()` | After `.ignoresSafeArea()` |
|-----------------------------|----------------------------|
| <img width="182" alt="Before" src="https://github.com/user-attachments/assets/9939dd43-9f8b-4b91-ac8c-83531930adb0"> | <img width="177" alt="After" src="https://github.com/user-attachments/assets/3fd00c60-e124-4e5e-b9fb-cee51d5ee475"> |
</div>
| type: feature request | low | Major |
2,610,225,575 | go | runtime:cpu2: TestGdbAutotmpTypes failures | ```
#!watchflakes
default <- pkg == "runtime:cpu2" && test == "TestGdbAutotmpTypes"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8733263226355225873)):
=== RUN TestGdbAutotmpTypes
=== PAUSE TestGdbAutotmpTypes
=== CONT TestGdbAutotmpTypes
runtime-gdb_test.go:79: gdb version 15.0
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
2,610,259,901 | pytorch | Support nn.Module arguments for the function to torch.compiler.allow_in_graph | ### 🚀 The feature, motivation and pitch
It is very common that a nn.Module is passed as an argument to a function, such as the forward hooks registered to a module.
There are cases where the code of such a function is not traceable by Dynamo, but the graph can still be captured correctly by the AOT dispatcher, so we would like to decorate the function with torch.compiler.allow_in_graph.
But this is not allowed by the current design of allow_in_graph, because it requires that the inputs to fn be Proxy-able types in the FX graph. Valid types include: Tensor/int/bool/float/None/List[Tensor?]/List[int?]/List[float?]/Tuple[Tensor?, …]/Tuple[int?, …]/Tuple[float?, …]/torch.dtype/torch.device
nn.Module is not one of them, which makes it impossible for such a function to work as expected with allow_in_graph.
Decorating such a function with allow_in_graph causes more graph breaks than avoiding it.
Below is an example showing such a case:
```
import torch
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x.mul(5.0)
def my_function(module, x):
x = x.mul(3.0)
print("Simulating something dynamo is not able to trace into.")
return module(x)
torch.compiler.allow_in_graph(my_function)
def fn(module, x):
x = torch.add(x, 1.0)
x = my_function(module, x)
x = torch.add(x, 2.0)
return x
fn = torch.compile(fn)
module = MyModule()
input = torch.ones(2, requires_grad = True)
output = fn(module, input)
output.sum().backward()
```
If allow_in_graph were enhanced to accept nn.Module as an argument type for fn, it would greatly widen the use cases supported by allow_in_graph.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Minor |
2,610,270,948 | kubernetes | How to balance distribution of existing pods without restarting them | I now have 10 pods, but I thought I didn't plan them well at the beginning, which resulted in five of them being scheduled to one node. These pods are running wss services, so they do not want to be restarted. So is there any way to distribute these pods evenly without restarting the operation?
| sig/scheduling,kind/support,lifecycle/stale,needs-triage | low | Major |
2,610,273,420 | vscode | Allow relative paths in `typescript.tsserver.nodePath` | - VS Code Version: `1.94.2`
- OS Version: MacOS
Steps to Reproduce:
1. Update workspace settings JSON `typescript.tsserver.nodePath` to point at a relative path to the workspace root (e.g. `./my-node-script.sh`)
2. Notice that the script isn't used for invoking tsserver (typescript plugin falls back to using VSCode's inbuilt node install)
The problem occurs because `asAbsoluteWorkspacePath()` expects that the path to the node path is prefixed with the workspace name.
I'd expect this property to behave the same as `typescript.tsdk` which **doesn't** require the workspace name to prefixed to the relative path passed to it.
https://github.com/microsoft/vscode/blob/fe997185b5e6db94693ed6ef5456cfa4e8211edf/extensions/typescript-language-features/src/utils/relativePathResolver.ts#L9-L20
| feature-request,typescript | medium | Major |
2,610,298,125 | TypeScript | TypeScript skips type-checking despite deleted file in referenced project | ### Acknowledgement
- [x] I acknowledge that issues using this template may be closed without further explanation at the maintainer's discretion.
### Comment
We have encountered an unusual case in our project references setup. Below is a description of the current configuration. I believe there might be a gap in our setup or configuration that is causing this issue.
# Setup
- We have two projects: 'bar' and 'foo'
- 'foo' references 'bar'
- 'foo/src/index2.ts' consumes a function from "bar/clean" (The entry point is created using TypeScript paths at the repo root tsconfig)
- 'tsconfig.project-references.json' at the repo root is our solution file, containing links to the projects
- Base tsconfig
```json
{
  "compilerOptions": {
    "baseUrl": "./",
    "outDir": "out-tsc/frontend/base",
    "incremental": true,
    "composite": true,
    "target": "ES2020",
    "lib": ["ES2022", "DOM"],
    "jsx": "react",
    "jsxFactory": "React.createElement",
    "jsxFragmentFactory": "React.Fragment",
    "types": [],
    "module": "ESNext",
    "moduleResolution": "Node10",
    "resolveJsonModule": true,
    "importHelpers": true,
    "isolatedModules": false,
    "allowSyntheticDefaultImports": true,
    "allowImportingTsExtensions": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "noFallthroughCasesInSwitch": true,
    "skipLibCheck": true,
    "experimentalDecorators": false,
    "verbatimModuleSyntax": true,
    "emitDeclarationOnly": true,
    "paths": {
      "bar/clean": ["packages/bar/src/clean.ts"]
    }
  }
}
```
# Steps to reproduce the issue
1. Run TypeScript build with project references. This should create a 'tsDist' folder at the repo root with emitted declarations:
`yarn tsc -b tsconfig.project-references.json -v`
2. Delete the file 'packages/bar/src/clean.ts'. This file is consumed by 'packages/foo/src/index2.ts'. Note that we have paths defined for 'bar/clean' in 'tsconfig.json':
`rm packages/bar/src/clean.ts`
3. Run TypeScript build with project references again. This should update the 'tsDist' folder at the repo root with emitted declarations. However, it doesn't throw any error, even though 'packages/foo/src/index2.ts' is consuming a non-existent file:
`yarn tsc -b tsconfig.project-references.json -v`
Repository for reproduction - https://github.com/sudesh-atlassian/project-references-debug-1.git
Tested with typescript version - v5.4.2, v5.5.2 and v5.6.2
# Queries
- Is this expected behaviour — that `tsc -b` does not re-typecheck when a cache exists and a file has been deleted?
- Is there a known way to mitigate this issue
| Bug | low | Critical |
2,610,321,258 | ui | [bug]: Cannot install sidebar | ### Describe the bug

### Affected component/components
Sidebar
### How to reproduce
npx shadcn@latest add sidebar
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,610,364,072 | ui | [bug]: Custom utils path inside `components.json` is being ignored | ### Describe the bug
Here's my `components.json` file:
```json
{
  "$schema": "https://ui.shadcn.com/schema.json",
  "style": "new-york",
  "rsc": true,
  "tsx": true,
  "tailwind": {
    "config": "tailwind.config.ts",
    "css": "src/styles/globals.css",
    "baseColor": "gray",
    "cssVariables": true,
    "prefix": ""
  },
  "aliases": {
    "components": "~/shared/components",
    "utils": "~/shared/lib",
    "ui": "~/shared/ui-kit",
    "lib": "~/shared/lib",
    "hooks": "~/shared/hooks"
  }
}
```
However, when adding a component, `cn` is always imported from `~/lib/utils`, even though I explicitly configured it as `~/shared/lib`.
### Affected component/components
Every
### How to reproduce
1. Modify your `components.json` to match mine.
2. Try to add a component, for example `npx shadcn@latest add sidebar`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Node@20.18.0, React@19.0.0-rc-69d4b800-20241021, NextJS@15.0.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,610,389,525 | three.js | BatchedMesh: Reorganization discussion | ### Description
Over the last few releases BatchedMesh has had a number of features added to support optimization, resizing, instancing, etc which has made the class very powerful and flexible. But there have been concerns about complexity and surface API, as well as requests for more features including point & line rendering (#29018), and multi-material support (#27930), etc. I'm happy to rework some of the API of the class but I'll request some more details on what exactly the concerns are and what the more tangible impact / problem is so we can discuss solutions.
cc @mrdoob @Mugen87
### Solution(s)
- In the short term for this release we can comment the recently-added `optimize`, `setGeometrySize`, `setGeometryCount` functions to avoid future breaking changes if these are going to be moved.
- Introduce some kind of subclassing or flags for more complex rendering or other render types (like lines, points).
- One solution I would like to discuss is moving geometry management to a `BatchedBufferGeometry` class, so that the logic complexity for the addition and removal of sub-geometries would be encapsulated there. The `BatchedMesh` class could continue to pass the function calls through to the batched geometry. Ultimately the goal is to separate some of the logic out from an otherwise monolithic file. This could also allow more easily sharing geometry between multiple BatchedMeshes in the case that different shaders are needed (#29018).
A class could be structured like so:
```js
class BatchedBufferGeometry {
constructor( indexCount, vertexCount );
addGeometry( ... );
setGeometryAt( ... );
deleteGeometry( ... );
getBoundingBoxAt( ... );
getBoundingSphereAt( ... );
getGeometryRangeAt( ... );
setGeometrySize( ... );
optimize();
}
```
BatchedMesh would otherwise still be responsible for managing instances.
### Alternatives
Leave it as-is - other suggestions.
### Additional context
_No response_ | Suggestion | low | Major |
2,610,414,660 | flutter | [in_app_purchase] in android causing crashes in some devices | ### Steps to reproduce
` await _inAppPurchase.queryProductDetails(productIds);`
calling query product details causing error
### Expected results
it shouldn't cause any crashes
### Actual results
It causes the app to crash.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>




</details>
### Logs
<details open><summary>Logs</summary>
```
Exception java.lang.OutOfMemoryError:
at com.google.android.gms.internal.play_billing.zzgr.zzy (zzgr.java)
at com.google.android.gms.internal.play_billing.zzdd.zzj (zzdd.java)
at com.google.android.gms.internal.play_billing.zzcz.<init> (zzcz.java)
at com.google.android.gms.internal.play_billing.zzgn.<init> (zzgn.java)
at com.google.android.gms.internal.play_billing.zzgr.zzy (zzgr.java)
at com.google.android.gms.internal.play_billing.zzdd.zzh (zzdd.java)
at com.google.android.gms.internal.play_billing.zzgr.zzz (zzgr.java)
at com.android.billingclient.api.zzbx.zzb (zzbx.java)
at com.android.billingclient.api.BillingClientImpl.queryProductDetailsAsync (BillingClientImpl.java)
at io.flutter.plugins.inapppurchase.MethodCallHandlerImpl.queryProductDetailsAsync (MethodCallHandlerImpl.java)
at io.flutter.plugin.common.BasicMessageChannel$IncomingMessageHandler.onMessage (BasicMessageChannel.java)
at io.flutter.embedding.engine.dart.DartMessenger.invokeHandler (DartMessenger.java)
at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0 (DartMessenger.java)
at android.os.Handler.handleCallback (Handler.java:942)
at android.os.Handler.dispatchMessage (Handler.java:99)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:8010)
at java.lang.reflect.Method.invoke
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:566)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:957)
```
### io.flutter.plugin.common.StandardMessageCodec.writeValue
```
Exception java.lang.OutOfMemoryError:
at java.lang.StringFactory.newStringFromBytes
at java.lang.StringLatin1.newString (StringLatin1.java:738)
at java.lang.StringBuilder.toString (StringBuilder.java:474)
at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0 (DartMessenger.java)
at android.os.Handler.handleCallback (Handler.java:942)
at android.os.Handler.dispatchMessage (Handler.java:99)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:8010)
at java.lang.reflect.Method.invoke
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:566)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:957)
```
### io.flutter.plugin.common.StandardMessageCodec.writeValue
```
Exception java.lang.OutOfMemoryError:
at libcore.util.CharsetUtils.toUtf8Bytes
at java.lang.String.getBytes (String.java:1207)
at io.flutter.plugin.common.StandardMessageCodec.writeValue (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.writeValue (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.writeValue (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.writeValue (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.writeValue (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.writeValue (Messages.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.writeValue (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.writeValue (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.writeValue (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.encodeMessage (StandardMessageCodec.java)
at io.flutter.plugin.common.BasicMessageChannel$IncomingMessageHandler$1.reply (BasicMessageChannel.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApi$7.success (Messages.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApi$7.success (Messages.java)
at io.flutter.plugins.inapppurchase.MethodCallHandlerImpl.lambda$queryProductDetailsAsync$4 (MethodCallHandlerImpl.java)
at com.android.billingclient.api.BillingClientImpl.queryProductDetailsAsync (BillingClientImpl.java)
at io.flutter.plugins.inapppurchase.MethodCallHandlerImpl.queryProductDetailsAsync (MethodCallHandlerImpl.java)
at io.flutter.plugin.common.BasicMessageChannel$IncomingMessageHandler.onMessage (BasicMessageChannel.java)
at io.flutter.embedding.engine.dart.DartMessenger.invokeHandler (DartMessenger.java)
at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0 (DartMessenger.java)
at android.os.Handler.handleCallback (Handler.java:942)
at android.os.Handler.dispatchMessage (Handler.java:99)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:8010)
at java.lang.reflect.Method.invoke
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:566)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:957)
```
```
Exception java.lang.OutOfMemoryError:
at io.flutter.plugin.common.StandardMessageCodec.readValueOfType (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.readValueOfType (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.readValue (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.readValueOfType (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.readValue (StandardMessageCodec.java)
at io.flutter.plugin.common.StandardMessageCodec.readValueOfType (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.readValueOfType (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.readValue (StandardMessageCodec.java)
at io.flutter.plugin.common.StandardMessageCodec.readValueOfType (StandardMessageCodec.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApiCodec.readValueOfType (Messages.java)
at io.flutter.plugin.common.StandardMessageCodec.readValue (StandardMessageCodec.java)
at io.flutter.plugin.common.StandardMessageCodec.decodeMessage (StandardMessageCodec.java)
at io.flutter.plugin.common.BasicMessageChannel$IncomingMessageHandler.onMessage (BasicMessageChannel.java)
at io.flutter.embedding.engine.dart.DartMessenger.invokeHandler (DartMessenger.java)
at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0 (DartMessenger.java)
at android.os.Handler.handleCallback (Handler.java:942)
at android.os.Handler.dispatchMessage (Handler.java:99)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:8010)
at java.lang.reflect.Method.invoke
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:566)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:957)
```
```
Exception java.lang.OutOfMemoryError:
at java.io.ByteArrayOutputStream.<init> (ByteArrayOutputStream.java:79)
at java.io.ByteArrayOutputStream.<init> (ByteArrayOutputStream.java:64)
at io.flutter.plugin.common.StandardMessageCodec$ExposedByteArrayOutputStream.<init> (StandardMessageCodec.java)
at io.flutter.plugin.common.StandardMessageCodec.encodeMessage (StandardMessageCodec.java)
at io.flutter.plugin.common.BasicMessageChannel$IncomingMessageHandler$1.reply (BasicMessageChannel.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApi$7.success (Messages.java)
at io.flutter.plugins.inapppurchase.Messages$InAppPurchaseApi$7.success (Messages.java)
at io.flutter.plugins.inapppurchase.MethodCallHandlerImpl.lambda$queryProductDetailsAsync$4 (MethodCallHandlerImpl.java)
at com.android.billingclient.api.BillingClientImpl.queryProductDetailsAsync (BillingClientImpl.java)
at io.flutter.plugins.inapppurchase.MethodCallHandlerImpl.queryProductDetailsAsync (MethodCallHandlerImpl.java)
at io.flutter.plugin.common.BasicMessageChannel$IncomingMessageHandler.onMessage (BasicMessageChannel.java)
at io.flutter.embedding.engine.dart.DartMessenger.invokeHandler (DartMessenger.java)
at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0 (DartMessenger.java)
at android.os.Handler.handleCallback (Handler.java:942)
at android.os.Handler.dispatchMessage (Handler.java:99)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:8010)
at java.lang.reflect.Method.invoke
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:566)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:957)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale en-PK)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc4)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.94.2)
[✓] Connected device (3 available)
[✓] Network resources
• No issues found!
```
</details>
| c: crash,platform-android,p: in_app_purchase,package,perf: memory,P2,team-android,triaged-android | low | Critical |
2,610,454,192 | PowerToys | Allow Awake to be turned on/off with a keyboard shortcut | ### Description of the new feature / enhancement
Title says it all.
### Scenario when this would be used?
So many options in PowerToys can be turned on/off with a keyboard shortcut. Why not Awake? I use it often, and it would make it a lot more practical.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Awake | low | Minor |
2,610,481,704 | godot | Triple quoted strings cannot be compared to typical strings with the Expression class | ### Tested versions
- Reproducible in v4.3.stable.artix_linux
### System information
Godot v4.3.stable unknown - Artix Linux
### Issue description
In the editor, if you compare a double quoted string to a triple double quoted string, they are equivalent.
If you use the Expression class to compare them, they are not equivalent.
I'm guessing it shouldn't matter whether it's a double, single, triple double or triple single quoted string. They should all be the same if the contents mean the same thing.
### Steps to reproduce
```gdscript
func evaluate():
	if "x" == """x""":
		print("sane")
	else:
		print("insane")
	var xp = Expression.new()
	var err = xp.parse('"x" == """x"""', [])
	if err != OK:
		push_error("failed to parse")
	var res = xp.execute([])
	if xp.has_execute_failed():
		push_error(xp.get_error_text())
	print('Does expression.execute() think they are the same?')
	print(res)
```
Prints:
```
sane
Does expression.execute() think they are the same?
false
```
Should print:
```
sane
Does expression.execute() think they are the same?
true
```
For any combination of quote types.
### Minimal reproduction project (MRP)
[godot-quote-bug.zip](https://github.com/user-attachments/files/17501407/godot-quote-bug.zip)
| bug,topic:core | low | Critical |
2,610,543,731 | PowerToys | Color of title bar of thumbnail of cropped window is white instead of system theme color and thumbnail marked area is off. | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Crop and Lock
### Steps to reproduce
1. Keep the desired window (the one to be thumbnailed) active.
2. Hold win + ctrl + shift + T to launch Crop and Lock
3. Mark out the area of the window you want a thumbnail of.
4. Thumbnail window is created with a white title bar
### ✔️ Expected Behavior
Title bar of thumbnail window should be in system theme color. So if theme color is dark and all windows have dark title bar, the thumbnail window should also have the dark theme (not referring to the accent color of windows).
Noticed the same thing happening with the Reparent function as well. It would be helpful to have the system colors show up on its title bar too.
Marked out area of the thumbnail should be accurate as to the area marked out by the red border.
### ❌ Actual Behavior
Title bar color is white.
Saved area is off a bit, leaving a vertical space to the left and crops off a bit of the right side of the marked out area.
### Other Software
Windows 11 explorer window was being used, in dark mode, to make a thumbnail of.
Attaching a screenshot for your reference.

Thank you so much for making these handy tools !! They are so helpful :)
Drake.
| Issue-Bug,Needs-Triage | low | Minor |
2,610,551,408 | pytorch | compile + allgather with group will fail for stack-style allgather | ### 🐛 Describe the bug
this simple code:
```python
import torch
import torch.distributed as dist
dist.init_process_group(backend="nccl")
group = None
def all_gather(input_: torch.Tensor, dim: int = -1, use_group=True) -> torch.Tensor:
if use_group:
world_size = dist.get_world_size(group)
else:
world_size = dist.get_world_size()
# Bypass the function if we are using only 1 GPU.
if world_size == 1:
return input_
assert -input_.dim() <= dim < input_.dim(), (
f"Invalid dim ({dim}) for input tensor with shape {input_.size()}")
if dim < 0:
# Convert negative dim to positive.
dim += input_.dim()
input_size = input_.size()
# Allocate output tensor.
output_tensor = torch.empty((world_size, ) + input_size,
dtype=input_.dtype,
device=input_.device)
# All-gather.
if use_group:
torch.distributed.all_gather_into_tensor(output_tensor,
input_, group=group)
else:
torch.distributed.all_gather_into_tensor(output_tensor,
input_)
# Reshape
output_tensor = output_tensor.movedim(0, dim)
output_tensor = output_tensor.reshape(input_size[:dim] +
(world_size *
input_size[dim], ) +
input_size[dim + 1:])
return output_tensor
def f(x):
x = x + 1
return all_gather(x)
torch.cuda.set_device(dist.get_rank())
x = torch.randn(10, 20, dtype=torch.bfloat16).cuda()
y = f(x)
opt_f = torch.compile(f)
torch._dynamo.mark_dynamic(x, 0)
opt_y = opt_f(x)
```
Run it with `torchrun --nproc-per-node=2 test.py`, and it will fail with:
```text
torch._dynamo.exc.TorchRuntimeError: Failed running call_method copy_(*(FakeTensor(..., device='cuda:0', size=(2, s0, 20), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(2*s0, 20), dtype=torch.bfloat16)), **{}):
[rank0]: expand: attempting to expand a dimension of length 2*s0!
```
If I change to `use_group=False`, it works without any problem.
I think the problem is
https://github.com/pytorch/pytorch/blob/96b30dcb25c80513769dae2a8688aec080b00117/torch/distributed/_functional_collectives.py#L1005-L1020
it will only use the concat version of allgather (input size `[a]`, output size `[a * world_size]`), while what I want to use is the stack version of allgather (input size `[a]`, output size `[world_size, a]`).
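Shape-wise the two variants differ only by a reshape, which is what the concat-based workaround relies on. A minimal NumPy stand-in (no process group involved; the sizes here are made up for illustration) sketches this:

```python
import numpy as np

world_size, a, b = 2, 10, 20

# Stand-ins for the per-rank input tensors that all_gather would collect.
rank_inputs = [np.full((a, b), rank, dtype=np.float32) for rank in range(world_size)]

# Concat-style all_gather output: shape [world_size * a, b].
concat_out = np.concatenate(rank_inputs, axis=0)

# Stack-style all_gather output: shape [world_size, a, b].
stack_out = np.stack(rank_inputs, axis=0)

# Gathering into the flat concat buffer and then reshaping to
# (world_size,) + input_size recovers the stacked layout without a copy.
assert concat_out.shape == (world_size * a, b)
assert np.array_equal(concat_out.reshape(world_size, a, b), stack_out)
```

This only demonstrates the layout equivalence; in the real code the concat buffer would be filled by `all_gather_into_tensor` and then viewed as the stacked shape.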
### Versions
I have tested it in pytorch 2.4, 2.5 and `2.6.0.dev20241022+cu124` . they all have this problem.
I have a workaround by changing to the concat version of allgather, but this is definitely a bug we should fix.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @chauhang @penguinwu @zou3519 | oncall: distributed,triaged,oncall: pt2,vllm-compile,pt2d-triage-nov2024 | low | Critical |
2,610,585,403 | react-native | TextInput onContentSizeChange triggers twice inside a Modal | ### Description
When using a TextInput inside a Modal, the onContentSizeChange callback is triggered twice instead of once. Outside the Modal the event triggers correctly (only once).
### Steps to reproduce
1. Install and launch the application
2. Observe that onContentSizeChange triggers once (as expected)
3. Click on the "Show modal" button
4. Notice that onContentSizeChange for the TextInput inside the Modal triggers twice (unexpected behavior)
### React Native Version
0.76.0
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: Windows 10 10.0.19045
CPU: (8) x64 Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
Memory: 9.86 GB / 31.84 GB
Binaries:
Node:
version: 20.12.1
path: C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm:
version: 9.8.1
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK:
Android NDK: 22.1.7171670
Windows SDK: Not Found
IDEs:
Android Studio: AI-231.9392.1.2311.11330709
Visual Studio: Not Found
Languages:
Java: 17.0.8
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.0.0-alpha.2
wanted: 15.0.0-alpha.2
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.0
wanted: 0.76.0
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
(NOBRIDGE) LOG 39.607845306396484
(NOBRIDGE) LOG 20.39215660095215
(NOBRIDGE) LOG 39.607845306396484
```
### Reproducer
https://github.com/ilaloov/textinput-issue
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,Component: TextInput,Component: Modal,Newer Patch Available,Type: New Architecture,0.76 | low | Major |
2,610,606,194 | PowerToys | Maximize window button on title bar for Reparent function of Crop and Lock | ### Description of the new feature / enhancement
To be able to maximize the cropped window to make edits, and resize the window again to its cropped size as before.
Currently only the minimize and close options are available.
### Scenario when this would be used?
Many times, the cropped window needs to be edited in for a short period of time and then returned to its cropped state.
Currently, as there is no maximize window option, the only solution is to close the window and re-crop it, which is unproductive and breaks the flow state and momentum of deep work.
### Supporting information
Using Win 11 24H2 version and Power Toys v 0.85.1 in administrator mode. | Needs-Triage | low | Minor |
2,610,658,144 | transformers | safetensor/mmap memory leak when per-layer weights are converted do other dtypes | ### System Info
While working on [GPTQModel](https://github.com/modelcloud/gptqmodel), which does gptq quantization of hf models by loading each layer onto gpu, quantizing it, and then moving the layer back to cpu for vram reduction, we noticed a huge cpu memory leak, equal in size to the layer weights at the original dtype, when converting the layer to other dtypes. The layer stays in cpu memory, leaks, and we are unable to free it. The memory stays until the program ends. The leak happens whether we do the dtype conversion on cpu or to gpu.
Is this an internal memory leak, or are we doing something wrong or have the wrong expectation of how transformers/torch handles cpu tensor memory?
Reproducing code on cpu only (to-gpu has the same bug). No gpu is necessary; just load the model as `bfloat16`, do dtype transitions, and observe the memory leak.
### Who can help?
@ArthurZucker @SunMarc @MekkCyber
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Env:
```
AMD Zen3
Ubuntu: 22.04
Name: torch
Version: 2.5.0
Name: accelerate
Version: 1.0.1
Name: transformers
Version: 4.45.2
```
```python
import gc
import torch
from memory_profiler import memory_usage
from transformers import AutoModelForCausalLM
def mem(msg: str):
gc.collect()
m = memory_usage()[0]
mm = f"{msg}. memory_usage: {m} MiB"
print(mm)
MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"
mem("load model before")
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=torch.bfloat16,
device_map="cpu",
)
mem("load model after")
print("model", model)
for i in range(0, 32):
    mem("to float32 before")  # ref point: each layer is ~400MB in bfloat16
    model.model.layers[i].to(torch.float32)
    mem("to float32 after")  # <--- +1200MB ram == 400MB leaked (1200MB - 800MB for float32)
```
Run the above code and watch the cpu memory usage grow linearly by 1200MB after each loop iteration instead of the expected 800MB (a 400MB leak per layer, equal to the size of the layer in `bfloat16` before conversion). `gc.collect()` does not help.
### Expected behavior
Constant memory equal to model weights/dtype combo.
UPDATE: Looks like the leak is isolated to models/layers loaded as `torch.bfloat16`. No memory leak is observed if the model/layer is first loaded as `torch.float16` or `torch.float32` and then converted to other dtypes. | Core: Modeling,Quantization,bug,contributions-welcome | medium | Critical |
2,610,689,836 | tensorflow | How to map GatherElements with TFLite operator | **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 24.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): the latest version
**Standalone code to reproduce the issue**
[WebNN GatherElements](https://source.chromium.org/chromium/chromium/src/+/main:services/webnn/public/mojom/webnn_graph.mojom;l=606?q=webnn_graph.&ss=chromium%2Fchromium%2Fsrc) operation gathers elements from the axis dimension of the input tensor, indexed by the indices tensor, following the equation below:
```
output[dIndex0, ..., dIndexN] = input[dIndex0, ..., indices[dIndex0, ..., dIndexN], ..., dIndexN]
^ This is dAxis, indicated by `axis` parameter.
```
For example:
```
an input = [[ 0, 1, 2],
[10, 11, 12],
[20, 21, 22]] with shape (3, 3),
an indices = [[1, 0],
[2, 1],
[0, 2]] with shape (3, 2),
and axis = 1,
the output should be [[ 1, 0],
[12, 11],
[20, 22]] with shape (3, 2).
```
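As a sanity check (this is only an illustration of the expected semantics, not a TFLite mapping), NumPy's `take_along_axis` has exactly these gather-elements semantics, so the worked example above can be reproduced with it:

```python
import numpy as np

# GatherElements semantics for axis=1: output[i, j] = input[i, indices[i, j]].
inp = np.array([[0, 1, 2],
                [10, 11, 12],
                [20, 21, 22]])
indices = np.array([[1, 0],
                    [2, 1],
                    [0, 2]])
out = np.take_along_axis(inp, indices, axis=1)
print(out.tolist())
# [[1, 0], [12, 11], [20, 22]]
```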
WebNN has supported gather and gatherND like below table:
| WebNN operation | TFLite operator| Status|
| ------------ | ---------------| ---------------|
| gather | [tfl.gather](https://www.tensorflow.org/mlir/tfl_ops#tflgather_tflgatherop) | Done |
| gatherND| [tfl.gather_nd](https://www.tensorflow.org/mlir/tfl_ops#tflgather_nd_tflgatherndop) | Done |
| gatherElements| | |
but there is no TFLite operator to map gatherElements to. Can [BuiltinOperator_STABLEHLO_GATHER](https://source.chromium.org/chromium/chromium/src/+/main:third_party/tflite/src/tensorflow/compiler/mlir/lite/schema/schema_generated.h;l=1221?q=BuiltinOperator_STABLEHLO_GATHER&ss=chromium%2Fchromium%2Fsrc) implement the `gatherElements` operation?
**Any other info / logs**
WebNN gatherElements is similar to [ONNX GatherElements](https://onnx.ai/onnx/operators/onnx__GatherElements.html).
| stat:awaiting tensorflower,comp:lite,type:others,2.17 | low | Minor |
2,610,715,641 | pytorch | Batching rule not defined for `aten::_make_dual`. | ### 🐛 Describe the bug
I am trying to call `torch.vmap` on `torch.jacfwd`. This works fine normally but raises the following error when called under `torch.inference_mode()`.
```
File [...]/torch/autograd/forward_ad.py:129, in make_dual(tensor, tangent, level)
124 if not (tangent.is_floating_point() or tangent.is_complex()):
125 raise ValueError(
126 f"Expected tangent to be floating point or complex, but got: {tangent.dtype}"
127 )
--> 129 return torch._VF._make_dual(tensor, tangent, level=level)
RuntimeError: Batching rule not implemented for aten::_make_dual; the fallback path doesn't work on out= or view ops.
```
### Versions
Versions:
```
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.14 (main, Aug 9 2024, 22:29:10) [GCC 11.4.0] (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
...
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.28.0
[pip3] optree==0.13.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] torch==2.5.0
[pip3] torch_scatter==2.1.2.dev4
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.6.1
[pip3] torchaudio==2.5.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
```
cc @zou3519 @Chillee @samdow @kshitij12345 | triaged,actionable,module: vmap,inference mode,module: functorch | low | Critical |
2,610,725,264 | tensorflow | How to map ScatterElements with TFLite operator | **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 24.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): the latest version
**Standalone code to reproduce the issue**
[WebNN scatterElements](https://source.chromium.org/chromium/chromium/src/+/main:services/webnn/public/mojom/webnn_graph.mojom;l=1028?q=webnn_graph.&ss=chromium%2Fchromium%2Fsrc) operation first copies the values of the `input` tensor to the `output` tensor, and then overwrites the values of the `output` tensor with the values specified by the `updates` tensor at the index positions specified by the `indices` tensor along the `axis` dimension.
For example: Scatter elements along axis 0
```
input = [[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0]]
indices = [[1, 0, 2],
[0, 2, 1]]
updates = [[1.0, 1.1, 1.2],
[2.0, 2.1, 2.2]]
output = [[2.0, 1.1, 0.0]
[1.0, 0.0, 2.2]
[0.0, 2.1, 1.2]]
```
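As a sanity check (illustration of the expected semantics only, not a TFLite mapping), NumPy's `put_along_axis` applied to a copy of the input has the same scatter-elements behavior and reproduces the example above:

```python
import numpy as np

# ScatterElements semantics for axis=0, applied over a copy of input:
# output[indices[i, j], j] = updates[i, j]
inp = np.zeros((3, 3))
indices = np.array([[1, 0, 2],
                    [0, 2, 1]])
updates = np.array([[1.0, 1.1, 1.2],
                    [2.0, 2.1, 2.2]])
out = inp.copy()  # mirrors "first copies the values of input to output"
np.put_along_axis(out, indices, updates, axis=0)
print(out.tolist())
# [[2.0, 1.1, 0.0], [1.0, 0.0, 2.2], [0.0, 2.1, 1.2]]
```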
there is no TFLite operator to map scatterElements to. Can [BuiltinOperator_STABLEHLO_SCATTER](https://source.chromium.org/chromium/chromium/src/+/main:third_party/tflite/src/tensorflow/compiler/mlir/lite/schema/schema_generated.h;l=1210;bpv=0;bpt=1) implement the `scatterElements` operation?
**Any other info / logs**
WebNN scatterElements is similar to [ONNX ScatterElements](https://onnx.ai/onnx/operators/onnx__ScatterElements.html).
| stat:awaiting tensorflower,type:feature,comp:lite | low | Minor |
2,610,782,019 | flutter | iOS Autofill "hide my email" does not work | ### Steps to reproduce
1. Use provided sample code (or build a simple app with a TextField)
2. Run app
3. Tap on text field
4. Tap on "Hide My Email"
5. Create a new "Hide My Email" address (or use the existing)
### Expected results
I expect the TextField to be filled with the created / selected email address.
### Actual results
The TextField stays empty.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
title: 'Test',
home: DetailPage(),
);
}
}
class DetailPage extends StatefulWidget {
const DetailPage({super.key});
@override
State<DetailPage> createState() => _DetailPageState();
}
class _DetailPageState extends State<DetailPage> {
final TextEditingController _emailController = TextEditingController();
@override
void dispose() {
_emailController.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: SafeArea(
child: Form(
child: CupertinoFormSection.insetGrouped(
header: const Text('EMAIL'),
children: [
CupertinoTextFormFieldRow(
controller: _emailController,
prefix: Icon(CupertinoIcons.mail,
color: CupertinoColors.label.resolveFrom(context)),
keyboardType: TextInputType.emailAddress,
autofillHints: const [AutofillHints.email],
)
],
),
),
),
);
}
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 15.0.1 24A348 darwin-arm64, locale de-DE)
• Flutter version 3.24.0 on channel stable at /Users/dirkmika/projects/SDKs/Flutter/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (3 months ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/dirkmika/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✗] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Android Studio (version 2023.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
[✓] VS Code (version 1.94.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (3 available)
• Mein 13 Pro (mobile) • 00008110-000A34300C62801E • ios • iOS 18.0.1 22A3370
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| a: text input,platform-ios,P3,team-text-input,triaged-text-input | low | Major |
2,610,787,630 | go | govulncheck-action: suggestion allow specifying cache-dependency-path | Hi!
The `govulncheck-action` GitHub Action uses `setup-go` as one of its steps, but it's not possible to set the location of the cache using `cache-dependency-path`, so monorepo setups can't use the cache.
Is it possible to add this as an input variable for `govulncheck-action`? | NeedsInvestigation,vulncheck or vulndb | low | Minor |
2,610,835,249 | godot | `Tree` does not update its view after `SetSelected` call | ### Tested versions
- Reproducible in Godot v4.3.stable.mono.official
### System information
Windows 10 - Godot v4.3.stable.mono.official [77dcf97d8]
### Issue description
When calling `SetSelected` on a `Tree`, its view does not update.
### Steps to reproduce
Run the scene and press space to set Item1 or Item2 selected.
Here is the scene hierarchy and the script that is added to the Node2D:
- Scene
- Node2D
```
using Godot;
public partial class TreeTest : Node2D
{
private Tree tree;
private TreeItem item1;
private TreeItem item2;
public override void _Ready()
{
tree = new Tree();
tree.Size = new Vector2(200, 140);
var root = tree.CreateItem();
root.SetText(0, "Root");
item1 = tree.CreateItem(root);
item2 = tree.CreateItem(root);
item1.SetText(0, "Item1");
item2.SetText(0, "Item2");
AddChild(tree);
}
private bool first = false;
public override void _Input(InputEvent evt)
{
if (evt is InputEventKey keyEvent && keyEvent.Pressed)
{
if (keyEvent.Keycode == Key.Space)
{
tree.SetSelected(first ? item1 : item2, 0);
first = !first;
}
}
}
}
```
By pressing space, we would expect Item1 and Item2 to be highlighted one after the other.
However, the items do not highlight until the mouse wheel is scrolled over the tree.
A quick hack to fix this is to disable and enable the tree visibility after setting the selected item.
```
...
if (keyEvent.Keycode == Key.Space)
{
tree.SetSelected(first ? item1 : item2, 0);
first = !first;
//hack to fix issue
tree.Visible = false;
tree.Visible = true;
}
...
```
### Minimal reproduction project (MRP)
[treebug.zip](https://github.com/user-attachments/files/17503762/treebug.zip)
| bug,topic:gui | low | Critical |
2,610,840,447 | react | [Compiler Bug]: `useLayoutEffect` without dependency array should be allowed by either `react-hooks/exhaustive-deps` or `react-compiler/react-compiler` | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
N/A
### Repro steps
Use `useLayoutEffect` without dependency array, for example:
```tsx
export default function Breadcrumb({ children, to }) {
const itemRef = useRef<HTMLLIElement>(null);
const [isLast, setIsLast] = useState<boolean>();
useLayoutEffect(() => {
setIsLast(itemRef.current?.matches(':last-child') ?? false);
});
return (
<li ref={itemRef}>
<a to={to} aria-current={isLast ? 'page' : undefined}>
{children}
</a>
</li>
);
}
```
This will generate a `react-hooks/exhaustive-deps` warning for using `useLayoutEffect` without dependencies, but the dependency array is optional when you want the effect to run on every render, which I do in this case. Up till now I had just ignored this warning using `// eslint-disable-next-line react-hooks/exhaustive-deps`, but now this will then generate a `react-compiler/react-compiler` warning:
> React Compiler has skipped optimizing this component because one or more React ESLint rules were disabled. React Compiler only works when your components follow all the rules of React, disabling them may result in unexpected or incorrect behavior
So... either you should fix the `react-hooks/exhaustive-deps` rule to allow effects without dependencies (`useEffect` seem to allow this, but not `useLayoutEffect` for some reason), or the compiler should be smarter about what disabled ESLint rules to take into account...
### How often does this bug happen?
Every time
### What version of React are you using?
react@18.3.1
### What version of React Compiler are you using?
eslint-plugin-react-compiler@19.0.0-beta-8a03594-20241020 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,610,890,440 | kubernetes | Why remove client tag in apiserver metrics? | ### What would you like to be added?
`apiserver_request_count{client="Go-http-client/2.0",code="200",contentType="application/json",resource="nodes",scope="cluster",subresource="",verb="LIST"} 3486`
### Why is this needed?
We need this tag to analyze request details for different components. | sig/api-machinery,kind/feature,triage/accepted | low | Minor |
2,610,950,919 | ui | [bug]: Sidebar Group Label is overflowing on Sidebar Trigger when menu is closed with collapsible="icon" | ### Describe the bug

As can be seen in the screen capture, with `collapsible` set to `icon` and the menu closed, hovering over the SidebarTrigger puts the cursor focus on the SidebarGroupLabel instead of the trigger.
### Affected component/components
Sidebar
### How to reproduce
_app/sidebar.tsx_
```tsx
<Sidebar collapsible="icon">
<SidebarHeader></SidebarHeader>
<SidebarGroup>
<SidebarGroupLabel>Application</SidebarGroupLabel>
<SidebarGroupContent>
<SidebarMenu>
{items.map((item) => (
<SidebarMenuItem key={item.title}>
<SidebarMenuButton asChild>
<Link href={'/dashboard' + item.url}>
<item.icon />
<span>{item.title}</span>
</Link>
</SidebarMenuButton>
</SidebarMenuItem>
))}
</SidebarMenu>
</SidebarGroupContent>
</SidebarGroup>
<SidebarFooter />
</Sidebar>
```
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/nzcszc
### Logs
_No response_
### System Info
```bash
No further relevant info; it can be fixed by adding overflow-hidden to the group component
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,610,970,603 | PowerToys | Multi-Monitor Brightness Control | ### Description of the new feature / enhancement
PowerToys already offers some great things, but one thing that seems to be lacking is a plugin or tray flyout to control the brightness of multiple monitors.
### Scenario when this would be used?
If someone uses multiple monitors, as a lot of people do, this will be really handy.
### Supporting information
Maybe something like this awesome project: https://github.com/xanderfrangos/twinkle-tray
@xanderfrangos | Needs-Triage | low | Minor |
2,611,024,266 | godot | Export Android: Godot fails to build an APK file but then it builds successfully. | ### Tested versions
Godot 4.3 official release.
### System information
Windows 11, Godot 4.3.
### Issue description
Here is my project:
[capybara_math_test.zip](https://github.com/user-attachments/files/17504599/capybara_math_test.zip)
I've recorded a video about the bug (export to android):
[BugVideo.zip](https://github.com/user-attachments/files/17504470/BugVideo.zip)
Problem Description:
I've found that if I build an APK file with UseGradle=off, the build fails and the APK file is corrupted/damaged. Then if I set UseGradle=on and build again, the APK file is still corrupted. Then if I set UseGradle=off again and build once more, the APK file builds successfully! A very strange bug.
My setup:
Java SDK 23.0.1
Android Studio 2024
### Steps to reproduce
1 Download Java SDK 23 https://download.oracle.com/java/23/latest/jdk-23_windows-x64_bin.zip
2 Install Android Studio 2024 https://developer.android.com/studio
3 Install Android Packages. Tools ->SDKManager.

4 Open my Godot project
[capybara_math_test.zip](https://github.com/user-attachments/files/17504599/capybara_math_test.zip)
5 Fix paths of AndroidSDK and JavaSDK

6 Build a project with Gradle=off

You will see this notification. The build fails and the APK file will be corrupted/broken.

7 Install Build Template:

8 Try to build again with Gradle=on

The build fails. You will see this notification.

9 Try to build again with Gradle=off.

The build will succeed and there will not be any warnings!
### Minimal reproduction project (MRP)
Here is my project:
[capybara_math_test.zip](https://github.com/user-attachments/files/17504599/capybara_math_test.zip) | platform:android,topic:buildsystem,needs testing | low | Critical |
2,611,057,223 | vscode | Shortcuts for auto indentation not working |
Type: <b>Bug</b>
I want to use the command `Reindent Selected Lines` or `Reindent Lines` with a shortcut. I tried setting keybindings to each of them (`Ctrl+Shift+Tab` and `Alt+I`, respectively), but when I press them in notebooks or scripts (using Python), nothing happens and the indentation is not corrected.
I have the _Black Formatter_ extension for Python. I know there is the `Format Selection` command but that changes the code too much for my liking. I only want to have things properly aligned/indented.
How can I make this shortcut work in notebook cells and scripts?
------------------------------------------------------------------------
**Troubleshooting information:**
Here is how the shortcuts look:

Here is the code I am trying the shortcuts on:
```
for n in range(10):
for r, l in zip([results1, results2],
['test', 'test2']):
a = [[1, 2, 3],
[4, 5, 6]]
print(n, l, r)
```
Here is the output from "Keyboard Shortcuts Troubleshooting" when pressing `Ctrl+Shift+Tab`:
```
2024-10-23 16:55:13.548 [info] [KeybindingService]: + Ignoring single modifier ctrl due to it being pressed together with other keys.
2024-10-23 16:55:36.838 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-10-23 16:55:36.840 [info] [KeybindingService]: \ Keyboard event cannot be dispatched
2024-10-23 16:55:36.841 [info] [KeybindingService]: / Received keydown event - modifiers: [ctrl], code: ControlLeft, keyCode: 17, key: Control
2024-10-23 16:55:36.841 [info] [KeybindingService]: | Converted keydown event - modifiers: [ctrl], code: ControlLeft, keyCode: 5 ('Ctrl')
2024-10-23 16:55:36.842 [info] [KeybindingService]: \ Keyboard event cannot be dispatched in keydown phase.
2024-10-23 16:55:36.857 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-10-23 16:55:36.858 [info] [KeybindingService]: \ Keyboard event cannot be dispatched
2024-10-23 16:55:36.859 [info] [KeybindingService]: / Received keydown event - modifiers: [ctrl,shift], code: ShiftLeft, keyCode: 16, key: Shift
2024-10-23 16:55:36.860 [info] [KeybindingService]: | Converted keydown event - modifiers: [ctrl,shift], code: ShiftLeft, keyCode: 4 ('Shift')
2024-10-23 16:55:36.860 [info] [KeybindingService]: \ Keyboard event cannot be dispatched in keydown phase.
2024-10-23 16:55:36.916 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-10-23 16:55:36.917 [info] [KeybindingService]: | Resolving ctrl+shift+Tab
2024-10-23 16:55:36.918 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.reindentselectedlines, when: no when condition, source: user.
2024-10-23 16:55:36.919 [info] [KeybindingService]: / Received keydown event - modifiers: [ctrl,shift], code: Tab, keyCode: 9, key: Tab
2024-10-23 16:55:36.919 [info] [KeybindingService]: | Converted keydown event - modifiers: [ctrl,shift], code: Tab, keyCode: 2 ('Tab')
2024-10-23 16:55:36.920 [info] [KeybindingService]: | Resolving ctrl+shift+Tab
2024-10-23 16:55:36.920 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.reindentselectedlines, when: no when condition, source: user.
2024-10-23 16:55:36.921 [info] [KeybindingService]: + Invoking command editor.action.reindentselectedlines.
2024-10-23 16:55:37.442 [info] [KeybindingService]: + Ignoring single modifier shift due to it being pressed together with other keys.
```
Here is the output from "Keyboard Shortcuts Troubleshooting" when pressing `Alt+I`:
```
2024-10-23 16:55:59.186 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-10-23 16:55:59.201 [info] [KeybindingService]: \ Keyboard event cannot be dispatched
2024-10-23 16:55:59.201 [info] [KeybindingService]: / Received keydown event - modifiers: [alt], code: AltLeft, keyCode: 18, key: Alt
2024-10-23 16:55:59.202 [info] [KeybindingService]: | Converted keydown event - modifiers: [alt], code: AltLeft, keyCode: 6 ('Alt')
2024-10-23 16:55:59.202 [info] [KeybindingService]: \ Keyboard event cannot be dispatched in keydown phase.
2024-10-23 16:55:59.346 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-10-23 16:55:59.346 [info] [KeybindingService]: | Resolving alt+I
2024-10-23 16:55:59.347 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.reindentlines, when: no when condition, source: user.
2024-10-23 16:55:59.347 [info] [KeybindingService]: / Received keydown event - modifiers: [alt], code: KeyI, keyCode: 73, key: i
2024-10-23 16:55:59.348 [info] [KeybindingService]: | Converted keydown event - modifiers: [alt], code: KeyI, keyCode: 39 ('I')
2024-10-23 16:55:59.348 [info] [KeybindingService]: | Resolving alt+I
2024-10-23 16:55:59.348 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.reindentlines, when: no when condition, source: user.
2024-10-23 16:55:59.349 [info] [KeybindingService]: + Invoking command editor.action.reindentlines.
2024-10-23 16:55:59.690 [info] [KeybindingService]: + Ignoring single modifier alt due to it being pressed together with other keys.
```
Here is the file I get from "Developer: Inspect Key Mappings":
```
Layout info:
{
"name": "00000816",
"id": "",
"text": "Portuguese"
}
Default Resolved Keybindings (unique only):
(...)
ctrl+shift+Tab => ctrl+shift+Tab
(...)
ctrl+shift+tab => ctrl+shift+Tab
alt+i => alt+I
```
-----------------------------------------------------------------------------------------------
**VS Code information:**
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Remote OS version: Linux x64 5.15.0-122-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-1260P (16 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.71GB (5.40GB free)|
|Process Argv|--crash-reporter-id 725ad4c0-ed45-4564-8574-7f22af4753f0|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: hpc|
|OS|Linux x64 5.15.0-122-generic|
|CPUs|Intel(R) Xeon(R) Gold 6234 CPU @ 3.30GHz (8 x 0)|
|Memory (System)|15.61GB (12.03GB free)|
|VM|100%|
</details><details><summary>Extensions (32)</summary>
Extension|Author (truncated)|Version
---|---|---
python-snippets|cst|0.1.2
remotehub|Git|0.64.0
vsc-python-indent|Kev|1.18.0
jupyter-keymap|ms-|1.1.2
remote-ssh|ms-|0.115.0
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.4
azure-repos|ms-|0.40.0
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.42.0
jinja|who|0.0.8
vscode-django|bat|1.15.0
vscode-markdownlint|Dav|0.56.0
python-environment-manager|don|1.2.4
python-extension-pack|don|1.7.0
copilot|Git|1.242.0
copilot-chat|Git|0.21.2
vsc-python-indent|Kev|1.18.0
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.12.0
python|ms-|2024.16.1
vscode-pylance|ms-|2024.10.1
jupyter|ms-|2024.9.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
autodocstring|njp|0.6.1
indent-rainbow|ode|8.3.1
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
jinja|who|0.0.8
</details>
<!-- generated by issue reporter --> | bug,editor-autoindent | low | Critical |
2,611,064,966 | rust | `unused_imports` on `pub use macros::*` should explain that `[macro_export]` macros don't need to be exported. | ### Code
```rust
mod macros {
#[macro_export]
macro_rules! some_macro {
() => {{println!("Hello, World")}};
}
}
pub use macros::*;
```
### Current output
```
warning: unused import: `macros::*`
--> src/lib.rs:8:9
|
8 | pub use macros::*;
| ^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
warning: `tester` (lib) generated 1 warning (run `cargo fix --lib -p tester` to apply 1 suggestion)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.02s
```
### Desired output
```
warning: unused import: `macros::*`
--> src/lib.rs:8:9
|
8 | pub use macros::*;
| ^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
= note: this import only imports macros exported with `#[macro_export]`, therefore the `pub use` does nothing
warning: `tester` (lib) generated 1 warning (run `cargo fix --lib -p tester` to apply 1 suggestion)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.02s
```
### Rationale and extra context
I ran into this myself, because I'm so used to writing `pub use ...` for functions/structs to organise my libraries nicely.
So I think this can also help others who don't realise that `#[macro_export]` makes the `pub use` unnecessary.
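To make the note concrete, here is a minimal self-contained sketch (the same macro as above, expanding to a string instead of printing so the point is checkable): `#[macro_export]` lifts the macro to the crate root, so it is reachable as `crate::some_macro!` without any `pub use`.

```rust
mod macros {
    #[macro_export]
    macro_rules! some_macro {
        () => {
            "Hello, World"
        };
    }
}

fn main() {
    // Reachable via the crate root, even though `macros` re-exports nothing.
    assert_eq!(crate::some_macro!(), "Hello, World");
    println!("{}", crate::some_macro!());
}
```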
### Other cases
_No response_
### Rust Version
rustc 1.84.0-nightly (4f2f477fd 2024-10-23)
binary: rustc
commit-hash: 4f2f477fded0a47b21ed3f6aeddeafa5db8bf518
commit-date: 2024-10-23
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
### Anything else?
_No response_ | A-lints,A-diagnostics,A-macros,T-compiler,D-terse,L-unused_imports | low | Critical |
2,611,073,308 | godot | Forward+/Mobile: Missing sky cube reflections in shaders with specific `Environment` settings | ### Tested versions
v4.3.stable.official.77dcf97d8
### System information
OpenGL API 4.2 (Core Profile) Mesa 23.2.1-1ubuntu3.1~22.04.2 - Compatibility - Using Device: Intel - Mesa Intel(R) HD Graphics 4000 (IVB GT2)
Tested also on Windows 10 with nVidia GTX 960.
### Issue description
This bug is similar to issue https://github.com/godotengine/godot/issues/53817, but that case it refers to the situation when `Reflected Light` does not work without `Sky`.
Here the situation is different, `Sky` is generated and works, but in special cases of `Environment` settings, reflections in the materials stop displaying the sky. They are not black (they look different than in `Disabled` mode). A reflection of "something" appears on the objects, but it is not the sky.
In addition, the same settings work in `Compatibility` mode.
### when it doesn't work
- `Background` == `Clear Color`
- `Ambient Light` == `Background`
This is how it looks with `Reflected Light` == `Sky`

For comparison `Reflected Light` == `Background` (this is because there is no `Sky`, same effect with `Disabled`)

Similar effect with `Background` == `Custom Color`

### when it does work
- `Background` == `Clear Color`
- `Ambient Light` == `Disabled` :warning:
- `Reflected Light` == `Sky`

With `Ambient Light` == `Color`

With `Background` == `Sky` and `Ambient Light` == `Background`

With `Background` == `Clear Color` and `Ambient Light` == `Sky`

### Steps to reproduce
Open the MRP and fiddle with the `Environment` settings in `test_scene_01.tscn`.
A good first test is to change `Ambient Light` between `Background` and `Disabled`.
### Minimal reproduction project (MRP)
[sky-issues-mrp.zip](https://github.com/user-attachments/files/17505092/sky-issues-mrp.zip)
| bug,topic:rendering,topic:3d | low | Critical |
2,611,094,180 | ui | [feat]: Select fields update | ### Feature description
Select field should be able to accept number values too not only string
### Affected component/components
Select
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,611,105,581 | ui | [bug]: Theme Provider creates hydration error in Next.js 15.0.1 | ### Describe the bug
Implementing dark mode, and putting `ThemeProvider` into the layout is making hydration error in newest version of Next.js (15.0.1).
<img width="954" alt="Screenshot 2024-10-24 at 12 05 15" src="https://github.com/user-attachments/assets/52e33a0d-b8c3-4cff-a1a3-d5c69dce4f03">
### Affected component/components
ThemeProvider
### How to reproduce
1. Do npm install next-themes
2. Create `ThemeProvider` component.
3. Wrap children in layout file
### Codesandbox/StackBlitz link
https://ui.shadcn.com/docs/dark-mode/next
### Logs
```bash
Hydration failed because the server rendered HTML didn't match the client. As a result this tree will be regenerated on the client. This can happen if a SSR-ed Client Component used
-className="dark"
-style={{color-scheme:"dark"}}
```
### System Info
```bash
Next.js 15.0.1
MacOS, Google Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | high | Critical |
2,611,124,749 | langchain | Various warnings due to Pydantic protected namespaces, such as UserWarning: Field "model_name" in JinaEmbeddings has conflict with protected namespace "model_". | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import JinaEmbeddings, LlamaCppEmbeddings, TensorflowHubEmbeddings, GooglePalmEmbeddings
```
### Error Message and Stack Trace (if applicable)
```
/app_venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:132: UserWarning: Field "model_name" in JinaEmbeddings has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/app_venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:132: UserWarning: Field "model_path" in LlamaCppEmbeddings has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/app_venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:132: UserWarning: Field "model_url" in TensorflowHubEmbeddings has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/app_venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:132: UserWarning: Field "model_name" in GooglePalmEmbeddings has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
```
### Description
I am trying to use the LangChain library to develop an AI application; however, when importing certain modules I get warnings about Pydantic protected namespaces.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:52 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6020
> Python Version: 3.10.14 (main, May 3 2024, 16:41:18) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_google_vertexai: 2.0.5
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> google-cloud-aiplatform: 1.70.0
> google-cloud-storage: 2.18.2
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.21
> tenacity: 8.5.0
> typing-extensions: 4.12.2 | stale | low | Critical |
2,611,197,144 | next.js | When using router.replace() from next/navigation for routing, the modal opened with the next.js slot does not close. | ### Link to the code that reproduces this issue
https://github.com/2dubbing/nextgram
### To Reproduce
1. npm install
2. npm run dev
3. Open web browser http://localhost:3000
4. Click the square UI with the number on it on the screen.
5. Click the X button in the modal window.
### Current vs. Expected behavior
Current: If you route with router.replace() while the modal window is open, the modal window will not close.
Expected behavior: Routing with router.replace() should close any open modal windows.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Mon Aug 12 20:51:54 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.18.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.6.0
Relevant Packages:
next: 15.0.0-canary.61
react: 19.0.0-rc.0
react-dom: 19.0.0-rc.0
typescript: 5.5.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
We want to use router.replace for the following reason: we think it's better for usability to use router.replace rather than router.back when the data change succeeds on the Change Form Data page.
Thank you. | create-next-app,bug | low | Minor |
2,611,214,250 | tauri | [feat] Allow setting window appId dynamically to prevent Windows from combining taskbar buttons | ### Describe the problem
Currently, if you have a multi-windowed app, you can set the icon of each window to a unique icon using `window.setIcon`.
However, Windows will by default combine the taskbar buttons even if they have different icons.
Electron provides an API to get around this. You can do something like:
```
window.setAppDetails({ appId: 'uniqueAppId' });
```
As far as I can tell, there is no equivalent API in Tauri.
### Describe the solution you'd like
I would like an API that lets me control how Windows groups the taskbar buttons of my app's windows.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,611,250,869 | go | proposal: x/crypto/ssh: read single packets from SSH channels | ### Proposal Details
I propose a new `ssh.Channel` function to read a single SSH packet.
This is required to interoperate with some channel types in use by OpenSSH.
For example, to interoperate with the `tun@openssh.com` channel type defined in section 2.3 of the [openssh protocol standard](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL?annotate=HEAD):
> 283: Once established the client and server may exchange packet or frames
284: over the tunnel channel by ***encapsulating them in SSH protocol strings***
285: and sending them as channel data. ***This ensures that packet boundaries
286: are kept intact.*** Specifically, packets are transmitted using normal
287: SSH_MSG_CHANNEL_DATA packets:
It is required that you are able to read each individual SSH packet, as it defines the length of each network packet.
Currently the `Read(...)` function defined by the `ssh.Channel` interface reads all buffered SSH packets.
This requires the implementer of `tun@openssh.com` and other similar channel types to attempt to determine SSH packet size (and thus network packet size) from the data that is returned from `Read(...)`.
This is error prone and in some cases not feasible.
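To make the boundary problem concrete, here is a toy, self-contained sketch (not real `x/crypto/ssh` code): a drain that coalesces every queued SSH_MSG_CHANNEL_DATA payload into one read, the way `Read(...)` effectively behaves, erasing the frame boundaries the tunnel protocol needs.

```go
package main

import "fmt"

// drain models how ssh.Channel.Read hands back all buffered payloads
// at once: the per-packet boundaries are lost in the concatenation.
func drain(packets [][]byte) []byte {
	var out []byte
	for _, p := range packets {
		out = append(out, p...)
	}
	return out
}

func main() {
	// Two queued tunnel frames (contents are arbitrary placeholders).
	frames := [][]byte{{0x45, 0x00}, {0x45, 0x00}}
	got := drain(frames)
	// One 4-byte read comes back; the caller can no longer tell where
	// the first frame ended, which breaks tun@openssh.com framing.
	fmt.Println(len(got)) // 4
}
```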
The following are two snippets to show how this may be implemented.
Example addition to `ssh.buffer`:
```go
func (b *buffer) ReadSingle() ([]byte, error) {
	b.Cond.L.Lock()
	defer b.Cond.L.Unlock()
	if b.closed {
		return nil, io.EOF
	}
	if len(b.head.buf) == 0 && b.head == b.tail {
		// If we have no messages right now, just wait until we do.
		b.Cond.Wait()
		if b.closed {
			return nil, io.EOF
		}
	}
	// Return only the head element, preserving the boundary of the
	// SSH packet it was queued from.
	result := make([]byte, len(b.head.buf))
	n := copy(result, b.head.buf)
	b.head.buf = b.head.buf[n:]
	if b.head != b.tail {
		b.head = b.head.next
	}
	return result, nil
}
```
Example addition to `ssh.channel` (and thus the `ssh.Channel` interface):
```go
func (c *channel) ReadSSHPacket() ([]byte, error) {
	buff, err := c.pending.ReadSingle()
	if err != nil {
		return nil, err
	}
	if len(buff) > 0 {
		err = c.adjustWindow(uint32(len(buff)))
		if err == io.EOF {
			err = nil
		}
	}
	return buff, err
}
```
Additionally, here is a working example of how this must currently be done, which uses incredibly brittle reflection.
https://github.com/NHAS/reverse_ssh/blob/f5d2a6cd8562e5f5ff33551aa37285651b90a309/internal/client/handlers/tun.go#L356
| Proposal | low | Critical |
2,611,315,637 | node | Error occurred when version 23.0.0 was compiled on aarch64. | ### Version
23.0.0
### Platform
```text
Linux localhost.localdomain 4.18.0-193.el8.aarch64 #1 SMP Fri May 8 11:05:12 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
CentOS Linux 8 (Core)
```
### Subsystem
_No response_
### What steps will reproduce the bug?
1、'/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/python-3.8.8-m3k3pvne56yo7wyd7a3bwf7mugupszik/bin/python3.8' 'configure.py' '--prefix=/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/node-js-23.0.0-cltmbvkoowr6g5oevhfooh3voetclvdt' '--without-npm' '--shared-openssl' '--shared-openssl-includes=/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/openssl-1.1.1j-6ukggdjrujsa67n7fgoxui2lywmdbb7s/include' '--shared-openssl-libpath=/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/openssl-1.1.1j-6ukggdjrujsa67n7fgoxui2lywmdbb7s/lib' '--shared-zlib' '--shared-zlib-includes=/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/zlib-1.2.11-wclep75baky5gi4gu5erof55jpbtgbgy/include' '--shared-zlib-libpath=/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/zlib-1.2.11-wclep75baky5gi4gu5erof55jpbtgbgy/lib'
2、make -j16
### How often does it reproduce? Is there a required condition?
Reproduces every time; no special condition is required.
### What is the expected behavior? Why is that the expected behavior?
I expect the compilation to succeed.
### What do you see instead?
```
/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/gcc-10.2.0-3dpm635vxyxp5mixbtpecgggf6uvo4ed/bin/g++ -o /home/stage/root/spack-stage-node-js-23.0.0-cltmbvkoowr6g5oevhfooh3vo
etclvdt/spack-src/out/Release/obj.target/v8_base_without_compiler/deps/v8/src/heap/cppgc/heap-state.o ../deps/v8/src/heap/cppgc/heap-state.cc '-D_GLIBCXX_USE_CXX11_ABI=1' '-DNODE_O
PENSSL_CONF_NAME=nodejs_conf' '-DICU_NO_USER_DATA_OVERRIDE' '-DV8_GYP_BUILD' '-DV8_TYPED_ARRAY_MAX_SIZE_IN_HEAP=64' '-D__STDC_FORMAT_MACROS' '-DV8_TARGET_ARCH_ARM64' '-DV8_HAVE_TAR
GET_OS' '-DV8_TARGET_OS_LINUX' '-DV8_EMBEDDER_STRING="-node.10"' '-DENABLE_DISASSEMBLER' '-DV8_PROMISE_INTERNAL_FIELD_COUNT=1' '-DV8_ENABLE_PRIVATE_MAPPING_FORK_OPTIMIZATION' '-DOB
JECT_PRINT' '-DV8_INTL_SUPPORT' '-DV8_ATOMIC_OBJECT_FIELD_WRITES' '-DV8_ENABLE_LAZY_SOURCE_POSITIONS' '-DV8_USE_SIPHASH' '-DV8_SHARED_RO_HEAP' '-DNDEBUG' '-DV8_WIN64_UNWINDING_INFO
' '-DV8_ENABLE_REGEXP_INTERPRETER_THREADED_DISPATCH' '-DV8_USE_ZLIB' '-DV8_ENABLE_SPARKPLUG' '-DV8_ENABLE_MAGLEV' '-DV8_ENABLE_TURBOFAN' '-DV8_ENABLE_WEBASSEMBLY' '-DV8_ENABLE_JAVA
SCRIPT_PROMISE_HOOKS' '-DV8_ENABLE_CONTINUATION_PRESERVED_EMBEDDER_DATA' '-DV8_ALLOCATION_FOLDING' '-DV8_ALLOCATION_SITE_TRACKING' '-DV8_ADVANCED_BIGINT_ALGORITHMS' '-DICU_UTIL_DAT
A_IMPL=ICU_UTIL_DATA_STATIC' '-DUCONFIG_NO_SERVICE=1' '-DU_ENABLE_DYLOAD=0' '-DU_STATIC_IMPLEMENTATION=1' '-DU_HAVE_STD_STRING=1' '-DUCONFIG_NO_BREAK_ITERATION=0' -I/home/spack/opt
/spack/linux-centos8-aarch64/gcc-8.5.0/zlib-1.2.11-wclep75baky5gi4gu5erof55jpbtgbgy/include -I/home/spack/opt/spack/linux-centos8-aarch64/gcc-8.5.0/openssl-1.1.1j-6ukggdjrujsa67n7f
goxui2lywmdbb7s/include -I../deps/v8 -I../deps/v8/include -I/home/stage/root/spack-stage-node-js-23.0.0-cltmbvkoowr6g5oevhfooh3voetclvdt/spack-src/out/Release/obj/gen/inspector-gen
erated-output-root -I../deps/v8/third_party/inspector_protocol -I/home/stage/root/spack-stage-node-js-23.0.0-cltmbvkoowr6g5oevhfooh3voetclvdt/spack-src/out/Release/obj/gen -I/home/
stage/root/spack-stage-node-js-23.0.0-cltmbvkoowr6g5oevhfooh3voetclvdt/spack-src/out/Release/obj/gen/generate-bytecode-output-root -I../deps/icu-small/source/i18n -I../deps/icu-sma
ll/source/common -I../deps/v8/third_party/zlib -I../deps/v8/third_party/zlib/google -I../deps/v8/third_party/abseil-cpp -I../deps/v8/third_party/fp16/src/include -pthread -Wno-unu
sed-parameter -Wno-strict-overflow -Wno-return-type -Wno-int-in-bool-context -Wno-deprecated -Wno-stringop-overflow -Wno-stringop-overread -Wno-restrict -Wno-array-bounds -Wno-nonn
ull -Wno-dangling-pointer -flax-vector-conversions -O3 -fno-omit-frame-pointer -fdata-sections -ffunction-sections -O3 -fno-rtti -fno-exceptions -fno-strict-aliasing -std=gnu++20 -
Wno-invalid-offsetof -MMD -MF /home/stage/root/spack-stage-node-js-23.0.0-cltmbvkoowr6g5oevhfooh3voetclvdt/spack-src/out/Release/.deps//home/stage/root/spack-stage-node-js-23.0.0-c
ltmbvkoowr6g5oevhfooh3voetclvdt/spack-src/out/Release/obj.target/v8_base_without_compiler/deps/v8/src/heap/cppgc/heap-state.o.d.raw -c
52 /tmp/ccxqNBaQ.s: Assembler messages:
>> 53 /tmp/ccxqNBaQ.s:40: Error: unknown architectural extension `memtag'
>> 54 /tmp/ccxqNBaQ.s:40: Error: unknown or missing system register name at operand 2 -- `mrs x0,tco'
>> 55 /tmp/ccxqNBaQ.s:53: Error: unknown architectural extension `memtag'
>> 56 /tmp/ccxqNBaQ.s:53: Error: unknown or missing system register name at operand 1 -- `msr tco,#1'
>> 57 /tmp/ccxqNBaQ.s:93: Error: unknown architectural extension `memtag'
>> 58 /tmp/ccxqNBaQ.s:93: Error: unknown or missing system register name at operand 2 -- `mrs x1,tco'
>> 59 /tmp/ccxqNBaQ.s:101: Error: unknown architectural extension `memtag'
>> 60 /tmp/ccxqNBaQ.s:101: Error: unknown or missing system register name at operand 1 -- `msr tco,#0'
>> 61 make[1]: *** [tools/v8_gypfiles/v8_base_without_compiler.target.mk:1155: /home/stage/root/spack-stage-node-js-23.0.0-cltmbvkoowr6g5oevhfooh3voetclvdt/spack-src/out/Release/obj.targ
et/v8_base_without_compiler/deps/v8/src/heap/base/memory-tagging.o] Error 1
62 make[1]: *** Waiting for unfinished jobs....
63 rm 9a73fd021cd34c952055ffcdff05214794126c99.intermediate d22b0cd6bd107a6086506c4c05dedffdbf3df298.intermediate b1dd6c5273c50333ed03e848efeb16de24103c46.intermediate
64 make: *** [Makefile:137: node] Error 2
```
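For what it's worth, the ``unknown architectural extension `memtag` `` errors above usually indicate that the GNU assembler being invoked is older than the compiler expects: MTE (`memtag`) support landed in binutils around 2.32, which a stock CentOS 8 toolchain may predate. This is an assumption, not a confirmed diagnosis, but it is quick to check:

```shell
# Check whether the assembler g++ invokes understands MTE ("memtag").
# (Assumption: the failure is an old system binutils, not a GCC bug.)
as --version | head -n1
echo '.arch armv8.5-a+memtag' | as -o /dev/null -   # old gas: "unknown architectural extension"
```

If the second command errors, rebuilding the Spack gcc with its `+binutils` variant (so the compiler bundles a matching assembler) is one possible fix; treat that as a suggestion rather than a verified recipe.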
### Additional information
_No response_ | build | low | Critical |
2,611,333,293 | svelte | Svelte 5: create_in_transition from svelte/internal no longer usable | ### Describe the problem
I have encountered an issue when migrating from Svelte 4 to Svelte 5 where my application fails due to the removal of the svelte/internal module. Specifically, I was using create_in_transition from svelte/internal to implement custom page transitions in a mobile app.
### Error Message:
Error: Your application, or one of its dependencies, imported from 'svelte/internal', which was a private module used by Svelte 4 components that no longer exists in Svelte 5. It is not intended to be public API. If you're a library author and you used 'svelte/internal' deliberately, please raise an issue on https://github.com/sveltejs/svelte/issues detailing your use case.
### Code Snippet:
Here is the code I used in Svelte 4:
```js
import { create_in_transition } from "svelte/internal";
import { fly } from "svelte/transition";
let intro = create_in_transition(document.querySelector("#page-" + page), fly, {
x: previousPage < page ? -100 : 100,
duration: 250,
});
intro.start();
```
### Use Case:
I am developing a mobile app where all pages are pre-rendered and fully loaded in the DOM. However, I still want smooth transitions between pages when the user navigates. In Svelte 4, I used the create_in_transition function from svelte/internal to manage these transitions based on the current and previous page.
### Why Standard Transitions Won't Work:
I couldn't use Svelte’s standard transitions (e.g., the transition directive with if blocks) because standard transitions only trigger when elements are added or removed from the DOM. Using them with if implies that the pages would only be loaded when they are displayed, which isn't suitable for my use case. In my app, all pages are already in the DOM, and I want to transition between them without reloading or re-rendering content.
### Describe the proposed solution
Could you please provide guidance on how to handle this use case in Svelte 5, or consider making create_in_transition or an equivalent API available? I understand that svelte/internal was not intended as a public API, but this functionality is crucial for my mobile app’s user experience.
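Until an equivalent public API exists, one possible stopgap (an untested sketch, not a Svelte API; `page`, `previousPage`, and the `#page-` ids come from the snippet above) is to drive the same fly-style animation with the Web Animations API, which also works on elements that are already mounted in the DOM:

```js
// Hypothetical replacement for create_in_transition(node, fly, opts):
// animate an already-mounted element without adding/removing it from the DOM.
function flyIn(node, { x = 100, duration = 250 } = {}) {
  const animation = node.animate(
    [
      { transform: `translateX(${x}px)`, opacity: 0 },
      { transform: "translateX(0)", opacity: 1 },
    ],
    { duration, easing: "cubic-bezier(0.33, 1, 0.68, 1)", fill: "both" },
  );
  animation.pause(); // mirror create_in_transition's explicit .start()
  return { start: () => animation.play(), cancel: () => animation.cancel() };
}

const intro = flyIn(document.querySelector("#page-" + page), {
  x: previousPage < page ? -100 : 100,
  duration: 250,
});
intro.start();
```

This loses Svelte's easing helpers and intro/outro coordination, so it is only a bridge until an official replacement is available.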
Thank you for your attention!
### Importance
would make my life easier | transition/animation | low | Critical |
2,611,334,114 | pytorch | same data all reduce on H20, but results are different | ### 🐛 Describe the bug
Running all_reduce on the same data on H20 with TP=8, the results differ between iterations.
The problem can be reproduced in the image `nvcr.io/nvidia/pytorch:24.09-py3`.
demo code:
``` python
# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 test_all_reduce.py
import logging
import torch
import torch.distributed
from torch.distributed import ReduceOp
def print_rank_0(msg, *args, **kwargs):
rank = torch.distributed.get_rank()
if rank == 0:
logging.info(msg, *args, **kwargs)
def dist_allreduce(idx):
print_rank_0("all_reduce:")
torch.distributed.barrier()
rank = torch.distributed.get_rank()
tensor = torch.load(f"device_{rank}", map_location=f"cuda:{rank}")
torch.distributed.all_reduce(tensor)
if rank == 0:
logging.info(f"all_reduce result {idx} ============== ")
logging.info(f"{tensor.flatten()[:30]}")
torch.distributed.barrier()
def main():
torch.distributed.init_process_group("nccl")
rank = torch.distributed.get_rank()
local_rank = rank % torch.cuda.device_count()
torch.set_default_device(f"cuda:{local_rank}")
for idx in range(10):
dist_allreduce(idx)
if __name__ == "__main__":
logging.basicConfig(format=logging.BASIC_FORMAT, level=logging.INFO)
main()
```
result:

We can see that some results differ; maybe there is a bug in NCCL.
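One thing to rule out before blaming NCCL: a floating-point all_reduce is order-sensitive, and NCCL may legally pick different algorithms (and therefore different reduction orders) between calls, which changes low-order bits. A tiny illustration of the underlying effect, in plain Python with no NCCL involved:

```python
# Floating-point addition is not associative: summing the same values in a
# different order can give a different result. NCCL reductions are subject
# to the same effect whenever the algorithm (and reduction order) changes.
vals = [1e16, -1e16, 1.0]

print(sum(vals))            # forward order:  (1e16 + -1e16) + 1.0 -> 1.0
print(sum(reversed(vals)))  # reverse order:  (1.0 + -1e16) + 1e16 -> 0.0
```

If bitwise reproducibility matters, pinning the algorithm with environment variables such as `NCCL_ALGO=Ring` and `NCCL_PROTO=Simple` (and confirming the selection with `NCCL_DEBUG=INFO`) may stabilize the results between iterations; whether that fully explains the diffs above is an assumption worth testing.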
### Versions
PyTorch version: 2.5.0a0+b465a5843b.nv24.09
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8469C
BIOS Model name: Intel(R) Xeon(R) Platinum 8469C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cudnn-frontend==1.6.0
[pip3] nvidia-nccl-cu12==2.22.3
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.2
[pip3] optree==0.12.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] torch==2.5.0a0+b465a5843b.nv24.9
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchvision==0.20.0a0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,611,361,062 | flutter | [Android][video_player] black borders around video | ### Steps to reproduce
Just run the example app.
I know this issue might be a duplicate of #72508, but I don't understand how to fix it.
It happens only on Android, and only on some devices, for example the Xiaomi Mi 13 and POCO F5.
### Expected results
no black borders
### Actual results
black borders, which look weird
### Code sample
<details open><summary>Code sample</summary>
[example app](https://github.com/JILTB/video_test_issue)
</details>
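For reference, the usual workaround for letterboxing like this (a hedged sketch; `_controller` is assumed to be an initialized `VideoPlayerController`) is to size the player to the video's own aspect ratio instead of letting the surface stretch:

```dart
// Wrap the player so its box matches the video's aspect ratio; any remaining
// bars then come from the source file itself, not from the widget layout.
Widget buildPlayer() {
  if (!_controller.value.isInitialized) {
    return const SizedBox.shrink();
  }
  return Center(
    child: AspectRatio(
      aspectRatio: _controller.value.aspectRatio,
      child: VideoPlayer(_controller),
    ),
  );
}
```

If the example app already does this and the bars still appear only on certain devices, that points at the decoder surface rather than the layout, which matches the device-specific behavior described above.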
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/34c6e862-dd86-4091-ab68-1ed325f617bc




</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale
en-BG)
• Flutter version 3.24.3 on channel stable at /Users/ivantonev/dev/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (6 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/ivantonev/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = /Users/ivantonev/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] VS Code (version 1.94.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (5 available)
• Ivan’s Iphone (mobile) • 87c7899fc15348534b00a14a5f70a08f0a40805d • ios • iOS 16.7.10 20H350
• Tseh85 (mobile) • 3E67F860-1434-4009-91A8-B03764D84898 • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.60
! Error: Apple Watch — Ivan needs to connect to determine its availability. Check the connection between the device and its
companion iPhone, and the connection between the iPhone and Xcode. Both devices may also need to be restarted and unlocked.
(code 1)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-android,p: video_player,package,has reproducible steps,P2,team-android,triaged-android,found in release: 3.24,found in release: 3.27 | low | Critical |
2,611,397,481 | rust | ICE: `Building async destructor constructor shim is not yet implemented for type: Coroutine` | <!--
[31mICE[0m: Rustc ./a.rs '-Zvalidate-mir --edition=2018 -Zinline-mir=yes -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: compiler/rustc_mir_transform/src/shim/async_destructor_ctor.rs:182:17: Building async destructor constructor shim is not yet implemented for type: Coroutine(DefId(0:16 ~ a[c88c]::test_async_drop::{closure#0}), [i32, (), std::future::ResumeTy, (), (), CoroutineWitness(DefId(0:16 ~ a[c88c]::test_async_drop::{closure#0}), [i32]), (i32,)])', 'error: internal compiler error: compiler/rustc_mir_transform/src/shim/async_destructor_ctor.rs:182:17: Building async destructor constructor shim is not yet implemented for type: Coroutine(DefId(0:16 ~ a[c88c]::test_async_drop::{closure#0}), [i32, (), std::future::ResumeTy, (), (), CoroutineWitness(DefId(0:16 ~ a[c88c]::test_async_drop::{closure#0}), [i32]), (i32,)])'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Zvalidate-mir --edition=2018 -Zinline-mir=yes
use core::future::{async_drop_in_place, Future};
use core::mem::{self};
use core::pin::pin;
use core::task::{Context, Waker};
async fn test_async_drop<T>(x: T) {
let mut x = mem::MaybeUninit::new(x);
pin!(unsafe { async_drop_in_place(x.as_mut_ptr()) });
}
fn main() {
let waker = Waker::noop();
let mut cx = Context::from_waker(&waker);
let fut = pin!(async {
test_async_drop(test_async_drop(0)).await;
});
fut.poll(&mut cx);
}
````
original:
````rust
use core::mem::{self};
use core::pin::{pin};
use core::task::{Context, Waker};
use core::future::{async_drop_in_place, Future};
async fn test_async_drop<T>(x: T) {
let mut x = mem::MaybeUninit::new(x);
pin!(unsafe { async_drop_in_place(x.as_mut_ptr()) });
}
fn main() {
let waker = Waker::noop();
let mut cx = Context::from_waker(&waker);
let fut = pin!(async {
test_async_drop(test_async_drop(0)).await;
});
fut.poll(&mut cx);
}
````
Version information
````
rustc 1.84.0-nightly (8aca4bab0 2024-10-24)
binary: rustc
commit-hash: 8aca4bab080b2c81065645fc070acca7a060f8a3
commit-date: 2024-10-24
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/8aca4bab080b2c81065645fc070acca7a060f8a3/compiler/rustc_mir_transform/src/shim/async_destructor_ctor.rs#L176-L188
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zvalidate-mir --edition=2018 -Zinline-mir=yes`
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0658]: use of unstable library feature 'async_drop'
--> /tmp/icemaker_global_tempdir.DLGUrsC6bQJS/rustc_testrunner_tmpdir_reporting.XPxMHGh28ax0/mvce.rs:1:20
|
1 | use core::future::{async_drop_in_place, Future};
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #126482 <https://github.com/rust-lang/rust/issues/126482> for more information
= help: add `#![feature(async_drop)]` to the crate attributes to enable
= note: this compiler was built on 2024-10-24; consider upgrading it if it is out of date
error[E0658]: use of unstable library feature 'async_drop'
--> /tmp/icemaker_global_tempdir.DLGUrsC6bQJS/rustc_testrunner_tmpdir_reporting.XPxMHGh28ax0/mvce.rs:8:19
|
8 | pin!(unsafe { async_drop_in_place(x.as_mut_ptr()) });
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #126482 <https://github.com/rust-lang/rust/issues/126482> for more information
= help: add `#![feature(async_drop)]` to the crate attributes to enable
= note: this compiler was built on 2024-10-24; consider upgrading it if it is out of date
error[E0658]: use of unstable library feature 'noop_waker'
--> /tmp/icemaker_global_tempdir.DLGUrsC6bQJS/rustc_testrunner_tmpdir_reporting.XPxMHGh28ax0/mvce.rs:12:17
|
12 | let waker = Waker::noop();
| ^^^^^^^^^^^
|
= note: see issue #98286 <https://github.com/rust-lang/rust/issues/98286> for more information
= help: add `#![feature(noop_waker)]` to the crate attributes to enable
= note: this compiler was built on 2024-10-24; consider upgrading it if it is out of date
error: internal compiler error: compiler/rustc_mir_transform/src/shim/async_destructor_ctor.rs:182:17: Building async destructor constructor shim is not yet implemented for type: Coroutine(DefId(0:15 ~ mvce[b792]::test_async_drop::{closure#0}), [i32, (), std::future::ResumeTy, (), (), CoroutineWitness(DefId(0:15 ~ mvce[b792]::test_async_drop::{closure#0}), [i32]), (i32,)])
thread 'rustc' panicked at compiler/rustc_mir_transform/src/shim/async_destructor_ctor.rs:182:17:
Box<dyn Any>
stack backtrace:
0: 0x70ab7668515a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h67a760f142b10089
1: 0x70ab76e041ca - core::fmt::write::h761a6181051e2338
2: 0x70ab780aaf11 - std::io::Write::write_fmt::h51dcb9980640e286
3: 0x70ab76684fb2 - std::sys::backtrace::BacktraceLock::print::h699259bafadc78e2
4: 0x70ab76687496 - std::panicking::default_hook::{{closure}}::h26f90f180373fe14
5: 0x70ab766872e0 - std::panicking::default_hook::hc56aa4946c4cfd81
6: 0x70ab75711a5f - std[154c2b8f5633419d]::panicking::update_hook::<alloc[b3c0c984dc0f1f14]::boxed::Box<rustc_driver_impl[1713c7a502794333]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x70ab76687ba8 - std::panicking::rust_panic_with_hook::hc7e3b32b38224be2
8: 0x70ab7574b7c1 - std[154c2b8f5633419d]::panicking::begin_panic::<rustc_errors[68da6a21905bba74]::ExplicitBug>::{closure#0}
9: 0x70ab7573e766 - std[154c2b8f5633419d]::sys::backtrace::__rust_end_short_backtrace::<std[154c2b8f5633419d]::panicking::begin_panic<rustc_errors[68da6a21905bba74]::ExplicitBug>::{closure#0}, !>
10: 0x70ab75739d69 - std[154c2b8f5633419d]::panicking::begin_panic::<rustc_errors[68da6a21905bba74]::ExplicitBug>
11: 0x70ab75755331 - <rustc_errors[68da6a21905bba74]::diagnostic::BugAbort as rustc_errors[68da6a21905bba74]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x70ab75dc1ae4 - rustc_middle[7ecdc0b6d8402894]::util::bug::opt_span_bug_fmt::<rustc_span[9923c3b3311dc018]::span_encoding::Span>::{closure#0}
13: 0x70ab75da7f8a - rustc_middle[7ecdc0b6d8402894]::ty::context::tls::with_opt::<rustc_middle[7ecdc0b6d8402894]::util::bug::opt_span_bug_fmt<rustc_span[9923c3b3311dc018]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x70ab75da7e1b - rustc_middle[7ecdc0b6d8402894]::ty::context::tls::with_context_opt::<rustc_middle[7ecdc0b6d8402894]::ty::context::tls::with_opt<rustc_middle[7ecdc0b6d8402894]::util::bug::opt_span_bug_fmt<rustc_span[9923c3b3311dc018]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x70ab73d738c0 - rustc_middle[7ecdc0b6d8402894]::util::bug::bug_fmt
16: 0x70ab77834bd5 - rustc_mir_transform[757de5d5b0d4bfad]::shim::make_shim
17: 0x70ab778317af - rustc_query_impl[5d234f7b7f31cffe]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_shims::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 8usize]>>
18: 0x70ab7783176f - <rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_shims::dynamic_query::{closure#2} as core[f763a1f5684efb66]::ops::function::FnOnce<(rustc_middle[7ecdc0b6d8402894]::ty::context::TyCtxt, rustc_middle[7ecdc0b6d8402894]::ty::instance::InstanceKind)>>::call_once
19: 0x70ab7722dc74 - rustc_query_system[f9c49b141f9e0de3]::query::plumbing::try_execute_query::<rustc_query_impl[5d234f7b7f31cffe]::DynamicConfig<rustc_query_system[f9c49b141f9e0de3]::query::caches::DefaultCache<rustc_middle[7ecdc0b6d8402894]::ty::instance::InstanceKind, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[5d234f7b7f31cffe]::plumbing::QueryCtxt, false>
20: 0x70ab7722d9cb - rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_shims::get_query_non_incr::__rust_end_short_backtrace
21: 0x70ab740dfb45 - <rustc_middle[7ecdc0b6d8402894]::ty::context::TyCtxt>::instance_mir
22: 0x70ab7781295c - rustc_mir_transform[757de5d5b0d4bfad]::inline::cycle::mir_inliner_callees
23: 0x70ab77812828 - rustc_query_impl[5d234f7b7f31cffe]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_inliner_callees::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 16usize]>>
24: 0x70ab778127e7 - <rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_inliner_callees::dynamic_query::{closure#2} as core[f763a1f5684efb66]::ops::function::FnOnce<(rustc_middle[7ecdc0b6d8402894]::ty::context::TyCtxt, rustc_middle[7ecdc0b6d8402894]::ty::instance::InstanceKind)>>::call_once
25: 0x70ab7722ceb4 - rustc_query_system[f9c49b141f9e0de3]::query::plumbing::try_execute_query::<rustc_query_impl[5d234f7b7f31cffe]::DynamicConfig<rustc_query_system[f9c49b141f9e0de3]::query::caches::DefaultCache<rustc_middle[7ecdc0b6d8402894]::ty::instance::InstanceKind, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[5d234f7b7f31cffe]::plumbing::QueryCtxt, false>
26: 0x70ab7722cbff - rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_inliner_callees::get_query_non_incr::__rust_end_short_backtrace
27: 0x70ab77c7ae93 - rustc_mir_transform[757de5d5b0d4bfad]::inline::cycle::mir_callgraph_reachable::process
28: 0x70ab77c7886a - rustc_mir_transform[757de5d5b0d4bfad]::inline::cycle::mir_callgraph_reachable::process
29: 0x70ab77c7886a - rustc_mir_transform[757de5d5b0d4bfad]::inline::cycle::mir_callgraph_reachable::process
30: 0x70ab77c7886a - rustc_mir_transform[757de5d5b0d4bfad]::inline::cycle::mir_callgraph_reachable::process
31: 0x70ab77c76788 - rustc_mir_transform[757de5d5b0d4bfad]::inline::cycle::mir_callgraph_reachable
32: 0x70ab77c76629 - rustc_query_impl[5d234f7b7f31cffe]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_callgraph_reachable::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 1usize]>>
33: 0x70ab77c765eb - <rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_callgraph_reachable::dynamic_query::{closure#2} as core[f763a1f5684efb66]::ops::function::FnOnce<(rustc_middle[7ecdc0b6d8402894]::ty::context::TyCtxt, (rustc_middle[7ecdc0b6d8402894]::ty::instance::Instance, rustc_span[9923c3b3311dc018]::def_id::LocalDefId))>>::call_once
34: 0x70ab77c75fa6 - rustc_query_system[f9c49b141f9e0de3]::query::plumbing::try_execute_query::<rustc_query_impl[5d234f7b7f31cffe]::DynamicConfig<rustc_query_system[f9c49b141f9e0de3]::query::caches::DefaultCache<(rustc_middle[7ecdc0b6d8402894]::ty::instance::Instance, rustc_span[9923c3b3311dc018]::def_id::LocalDefId), rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[5d234f7b7f31cffe]::plumbing::QueryCtxt, false>
35: 0x70ab77c75cf4 - rustc_query_impl[5d234f7b7f31cffe]::query_impl::mir_callgraph_reachable::get_query_non_incr::__rust_end_short_backtrace
36: 0x70ab777ee11d - <rustc_mir_transform[757de5d5b0d4bfad]::inline::Inliner>::try_inlining
37: 0x70ab7780a42b - <rustc_mir_transform[757de5d5b0d4bfad]::inline::Inliner>::process_blocks
38: 0x70ab77809937 - <rustc_mir_transform[757de5d5b0d4bfad]::inline::Inline as rustc_mir_transform[757de5d5b0d4bfad]::pass_manager::MirPass>::run_pass
39: 0x70ab76e0c1cd - rustc_mir_transform[757de5d5b0d4bfad]::pass_manager::run_passes_inner
40: 0x70ab7780fdae - rustc_mir_transform[757de5d5b0d4bfad]::optimized_mir
41: 0x70ab7780e65b - rustc_query_impl[5d234f7b7f31cffe]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5d234f7b7f31cffe]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 8usize]>>
42: 0x70ab76e271ee - rustc_query_system[f9c49b141f9e0de3]::query::plumbing::try_execute_query::<rustc_query_impl[5d234f7b7f31cffe]::DynamicConfig<rustc_query_system[f9c49b141f9e0de3]::query::caches::DefIdCache<rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[5d234f7b7f31cffe]::plumbing::QueryCtxt, false>
43: 0x70ab76e26773 - rustc_query_impl[5d234f7b7f31cffe]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
44: 0x70ab740dfbdf - <rustc_middle[7ecdc0b6d8402894]::ty::context::TyCtxt>::instance_mir
45: 0x70ab7723328e - rustc_interface[8a226d11e1d62432]::passes::run_required_analyses
46: 0x70ab77989a9e - rustc_interface[8a226d11e1d62432]::passes::analysis
47: 0x70ab77989a71 - rustc_query_impl[5d234f7b7f31cffe]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5d234f7b7f31cffe]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 1usize]>>
48: 0x70ab77d2cbae - rustc_query_system[f9c49b141f9e0de3]::query::plumbing::try_execute_query::<rustc_query_impl[5d234f7b7f31cffe]::DynamicConfig<rustc_query_system[f9c49b141f9e0de3]::query::caches::SingleCache<rustc_middle[7ecdc0b6d8402894]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[5d234f7b7f31cffe]::plumbing::QueryCtxt, false>
49: 0x70ab77d2c88f - rustc_query_impl[5d234f7b7f31cffe]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
50: 0x70ab77bd1eac - rustc_interface[8a226d11e1d62432]::interface::run_compiler::<core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>, rustc_driver_impl[1713c7a502794333]::run_compiler::{closure#0}>::{closure#1}
51: 0x70ab77c47cd4 - std[154c2b8f5633419d]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8a226d11e1d62432]::util::run_in_thread_with_globals<rustc_interface[8a226d11e1d62432]::util::run_in_thread_pool_with_globals<rustc_interface[8a226d11e1d62432]::interface::run_compiler<core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>, rustc_driver_impl[1713c7a502794333]::run_compiler::{closure#0}>::{closure#1}, core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>>::{closure#0}, core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>>
52: 0x70ab77c480e8 - <<std[154c2b8f5633419d]::thread::Builder>::spawn_unchecked_<rustc_interface[8a226d11e1d62432]::util::run_in_thread_with_globals<rustc_interface[8a226d11e1d62432]::util::run_in_thread_pool_with_globals<rustc_interface[8a226d11e1d62432]::interface::run_compiler<core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>, rustc_driver_impl[1713c7a502794333]::run_compiler::{closure#0}>::{closure#1}, core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>>::{closure#0}, core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[f763a1f5684efb66]::result::Result<(), rustc_span[9923c3b3311dc018]::ErrorGuaranteed>>::{closure#1} as core[f763a1f5684efb66]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
53: 0x70ab77c48bab - std::sys::pal::unix::thread::Thread::new::thread_start::hef945dd3992d59fc
54: 0x70ab71ea339d - <unknown>
55: 0x70ab71f2849c - <unknown>
56: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (8aca4bab0 2024-10-24) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z validate-mir -Z inline-mir=yes -Z dump-mir-dir=dir
query stack during panic:
#0 [mir_shims] generating MIR shim for `core::future::async_drop::async_drop_in_place_raw`
#1 [mir_inliner_callees] computing all local function calls in `core::future::async_drop::async_drop_in_place_raw`
end of query stack
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0658`.
```
</p>
</details>
<!--
query stack:
#0 [mir_shims] generating MIR shim for `core::future::async_drop::async_drop_in_place_raw`
#1 [mir_inliner_callees] computing all local function calls in `core::future::async_drop::async_drop_in_place_raw`
-->
| I-ICE,A-destructors,T-compiler,C-bug,A-coroutines,A-async-await,-Zvalidate-mir,S-bug-has-test | low | Critical |
2,611,413,895 | kubernetes | Kubectl exec disconnects automatically after 5m post upgrading the k8s cluster to 1.30 | ### What happened?
We have several automated scripts that run kubectl commands to exec into pods and execute some custom scripts. We observed that on all clusters running version 1.30.x the session automatically gets disconnected without any error message, which was not the case on versions lower than 1.30.
### What did you expect to happen?
Session should not terminate until we disconnect or exit from the pod
### How can we reproduce it (as minimally and precisely as possible)?
Just run a script which does kubectl exec into a pod and you will see that it gets disconnected within 5 minutes.
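A minimal version of such a script (a sketch; `POD` is a placeholder for any running pod in the cluster) that measures how long an idle exec session survives:

```shell
# Hold an idle interactive exec open and report how long it lasted.
# On 1.30.x clusters the session drops at roughly 300s; on <=1.29 it stays open.
POD=my-pod   # placeholder: substitute any running pod name
start=$(date +%s)
kubectl exec -it "$POD" -- sh -c 'sleep 600'
echo "exec session lasted $(( $(date +%s) - start ))s"
```

If the reported duration is consistently about 300 seconds, that supports the idle-timeout theory rather than a flaky network path.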
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.3
```
</details>
### Cloud provider
<details>
Azure
</details>
### OS version
<details>
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,kind/support,sig/cli,needs-triage | medium | Critical |
2,611,417,889 | node | EALREADY when reconnecting socket after destroying immediately after connecting | ### Version
v22.10.0
### Platform
```text
Darwin xxx.local 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:21 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8103 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
```js
import { Socket } from 'node:net';
const socket = new Socket();
socket.on('error', (err) => {
console.log(err);
});
socket.on('connect', () => {
console.log('connected');
});
socket.connect({ host: 'google.com', port: 80 });
socket.once('close', () => {
console.log('closed');
socket.connect({ host: 'google.com', port: 80 });
});
socket.destroy();
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
Seeing
```
closed
connected
```
in the console.
### What do you see instead?
```
closed
Error: connect EALREADY 142.250.179.174:2404 - Local (192.168.178.178:61323)
at internalConnect (node:net:1097:16)
at defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18)
at GetAddrInfoReqWrap.emitLookup [as callback] (node:net:1496:9)
at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:132:8) {
errno: -37,
code: 'EALREADY',
syscall: 'connect',
address: '142.250.179.174',
port: 2404
}
```
### Additional information
Calling `socket.destroySoon();` or `socket.end();` does work as intended, but IMO `socket.destroy();` should also allow a reconnect immediately afterwards. Note that calling `socket.destroy();` on a socket whose connection has already been established does allow an immediate reconnect, as expected. | net | low | Critical |
2,611,439,562 | vscode | Add flag to disable welcome page on first start | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Right now the welcome page is always shown on first start, as can be seen here:
https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/welcomeGettingStarted/browser/startupPage.ts#L108
https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/welcomeGettingStarted/browser/startupPage.ts#L121
Would it be possible to add a flag to disable this? | feature-request,workbench-welcome | low | Minor |
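Editor's note — for reference, a `settings.json` fragment with the existing user-level setting that suppresses the welcome page on subsequent starts (the request above concerns the very first start, which the linked `startupPage.ts` code appears to special-case before this setting takes effect):

```json
{
  "workbench.startupEditor": "none"
}
```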