| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,619,130,501 | pytorch | `tensor` not a `FakeTensor` under `FakeTensorMode` and `device('meta')` | ### 🐛 Describe the bug
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
with FakeTensorMode(), torch.device("meta"):
a = torch.tensor(3.0) # not a FakeTensor
b = torch.full(size=tuple(), fill_value=3.0) # workaround
print(a) # tensor(..., device='meta', size=())
print(b) # FakeTensor(..., device='meta', size=())
```
You would expect both to become `FakeTensor`s.
This issue results in a downstream assertion error (https://github.com/pytorch/pytorch/blob/01b055abe3669f036a60da6158ab778b0700baaf/torch/_subclasses/fake_tensor.py#L2230-L2233) in scripts that rely on this technique.
### Versions
`torch==2.6.0.dev20241018+cu118`
cc @ezyang @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225 | triaged,actionable,oncall: pt2,module: fakeTensor,module: pt2-dispatcher | low | Critical |
2,619,140,311 | rust | Use `rustc` attrs in `dangling_pointers_from_temporaries` lint | > > If you want to do it in this PR (otherwise in a follow-up), we could add an `rustc` attribute to those `as_ptr`/`as_mut_ptr` methods.
>
> You can follow https://github.com/rust-lang/rust/pull/132146/commits/afcb09b8c765bf2c5b1362b433725f57af51aaad and https://github.com/rust-lang/rust/pull/132146/commits/2a930d3818d1f4dcaf7a06ed7fc12c102268755d (from a PR of mine) with `check_applied_to_fn_or_method` instead, and something similar to this to check the attribute https://github.com/rust-lang/rust/blob/e454c45f1397e382cfb59f0b8970445c8efa875f/compiler/rustc_lint/src/ptr_nulls.rs#L49-L50
_Originally posted by @Urgau in https://github.com/rust-lang/rust/pull/128985#discussion_r1818068264_
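For context, the `dangling_pointers_from_temporaries` lint fires on patterns like the following (an illustrative sketch of what the lint catches, not the lint's implementation; the attribute work discussed here is about how the lint identifies `as_ptr`-like methods):

```rust
// Taking a raw pointer from a temporary that is dropped at the end of the
// statement leaves the pointer dangling.
fn main() {
    let p: *const u8 = String::from("temporary").as_ptr();
    // The `String` temporary is dropped here, so `p` is already dangling.
    // Reading through `p` would be undefined behavior; we only inspect the
    // pointer value itself, which is safe.
    assert!(!p.is_null());
}
```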
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"gavincrawford"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | E-easy,C-cleanup,A-lints,E-mentor,T-compiler | medium | Major |
2,619,146,007 | flutter | Update minimum recommended CocoaPods version to support Xcode 16 synchronized groups | https://github.com/CocoaPods/Xcodeproj/pull/985 fixed critical CocoaPods issue https://github.com/CocoaPods/CocoaPods/issues/12456 which impacted adding new iOS and macOS targets (like test targets and extensions) and add-to-app customers.
The fix is in CocoaPods [1.16.2](https://github.com/CocoaPods/CocoaPods/releases/tag/1.16.2).
- [x] Upload new 1.16.2 CocoaPods cipd package. https://github.com/flutter/cocoon/pull/4010
- [ ] Update CocoaPods cipd version in packages CI
- [x] Update CocoaPods cipd version in flutter CI https://github.com/flutter/flutter/pull/136929
- [x] Update minimum recommended version. https://github.com/flutter/flutter/pull/158206 https://github.com/flutter/flutter/blob/ab256e5caff0f3ba5c8d0b3375697344e7a1b79f/packages/flutter_tools/lib/src/macos/cocoapods.dart#L77-L78
- [x] Update version on website https://github.com/flutter/website/pull/11346
- [ ] Cherry-pick to beta. See https://github.com/flutter/flutter/pull/157022.
See related https://github.com/flutter/flutter/issues/133584. | platform-ios,tool,platform-mac,P1,fyi-infra,team-ios,triaged-ios | medium | Minor |
2,619,250,990 | ui | [bug]: unable to resolve dependency tree for Command component in NextJS | ### Describe the bug
When using `npx shadcn@latest add command` in a Next.js 15 project, the dependency tree cannot be resolved:
```
Command failed with exit code 1: npm install cmdk@1.0.0 @radix-ui/react-dialog
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: not-another-dashboard@0.1.0
npm ERR! Found: react@19.0.0-rc-69d4b800-20241021
npm ERR! node_modules/react
npm ERR!   react@"19.0.0-rc-69d4b800-20241021" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^18.0.0" from cmdk@1.0.0
npm ERR! node_modules/cmdk
npm ERR!   cmdk@"1.0.0" from the root project
```
### Affected component/components
Command
### How to reproduce
1. Install Next.js 15.
2. Install shadcn.
3. Try to add the Command component.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
npx shadcn@latest add command
✔ Checking registry.
⠦ Installing dependencies.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: npm install cmdk@1.0.0 @radix-ui/react-dialog
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: not-another-dashboard@0.1.0
npm ERR! Found: react@19.0.0-rc-69d4b800-20241021
npm ERR! node_modules/react
npm ERR! react@"19.0.0-rc-69d4b800-20241021" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^18.0.0" from cmdk@1.0.0
npm ERR! node_modules/cmdk
npm ERR! cmdk@"1.0.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
```
### System Info
```bash
macOS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,619,278,881 | excalidraw | feature request: quick color selection using number keys (1-5) while using the drawing tool | i would like to request a feature that allows users to quickly select colors using the number keys (1-5) while using the drawing tool in excalidraw.
**current workflow issue**:
- the current flow of changing tools is messy. for instance, when i enter color select mode using 's', but then try to select a color via the 'asdfg' hotkeys, it changes tools after 2-3 clicks on the color. this disrupts my drawing flow and makes it frustrating to switch colors efficiently.
**use case**:
- implementing a quick color selection method would enhance my workflow, especially in zen mode where i prefer a more focused experience.
**proposed solution**:
- allow users to assign specific colors to the number keys (1-5) for quick access while drawing, without changing the drawing tool.
- this way, i can easily change colors without interrupting my drawing flow.
thank you for considering this request!
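A tiny sketch of the proposed mapping (hypothetical, and written in Rust only to pin down the behavior; excalidraw itself is TypeScript): the number keys 1-5 index into a user-assigned palette without touching the active tool.

```rust
// Map a pressed number key (1-5) to a color in a user-assigned palette.
// Returns None for any other key, leaving the current tool and color alone.
fn color_for_key(key: char, palette: [&str; 5]) -> Option<&str> {
    key.to_digit(10)
        .filter(|d| (1..=5).contains(d))
        .map(|d| palette[(d - 1) as usize])
}

fn main() {
    let palette = ["#1e1e1e", "#e03131", "#2f9e44", "#1971c2", "#f08c00"];
    assert_eq!(color_for_key('1', palette), Some("#1e1e1e"));
    assert_eq!(color_for_key('5', palette), Some("#f08c00"));
    assert_eq!(color_for_key('9', palette), None); // out of range: no change
}
```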
| enhancement,shortcuts | low | Major |
2,619,306,606 | excalidraw | feature request: voice annotations in excalidraw | i would like to propose a feature that allows users to add voice annotations directly to their drawings in excalidraw.
#### current limitation:
currently, users must rely on text notes to convey ideas and context within their drawings. this can be time-consuming and may not effectively capture the nuances of spoken explanations, especially when trying to convey complex concepts or emotional nuances.
#### use case:
the ability to record and attach voice annotations would greatly enhance the usability of excalidraw for educators, students, and professionals. for instance, a teacher could record a brief explanation of a diagram while creating it, providing students with immediate context. similarly, a team could record discussions about specific elements of a project as they design collaboratively, ensuring that everyone is on the same page.
#### proposed solution:
- integrate a voice recording feature that allows users to record audio directly within the excalidraw interface.
- users should be able to attach these audio notes to specific elements or areas of their drawings.
- provide playback controls for users to listen to their recordings while reviewing or presenting their drawings.
#### why this matters:
implementing voice annotations would significantly improve the user experience by combining visual and auditory elements in a single workspace. this is particularly beneficial for:
- **enhanced learning**: voice annotations can provide additional context that text alone cannot convey, aiding comprehension and retention for students.
- **dynamic presentations**: presenters can use voice annotations to explain their diagrams or ideas more effectively, creating a richer experience for their audience.
- **accessibility**: voice notes can help those who may struggle with writing or typing to express their ideas more freely.
by adding this feature, excalidraw can become an even more powerful tool for interactive learning and collaboration, making it appealing to a broader audience.
thank you for considering this request! i believe that voice annotations would greatly enhance the functionality of excalidraw and improve the overall user experience. | enhancement | low | Minor |
2,619,306,947 | rust | Document what a "dangling pointer" is | ## Location
There are several places this applies, e.g.:
- [`std::ptr::dangling`](https://doc.rust-lang.org/nightly/std/ptr/fn.dangling.html)
- [`std::ptr::from_ref`](https://doc.rust-lang.org/nightly/std/ptr/fn.from_ref.html) *(in the examples)*
- `dangling_pointers_from_temporaries` lint
## Summary
We have several places in the standard library, and as part of lints, where we use the term "dangling pointer" without defining it anywhere.
We should have a place where we define what that is, ideally in the standard library, in its own section, maybe under `std::ptr`, so it can be referenced elsewhere.
It should probably have a definition, followed by some bad examples (and maybe how to fix them).
[Wikipedia has an article](https://en.wikipedia.org/wiki/Dangling_pointer) on the subject; *I don't know whether it could serve as a base for our documentation, but it may be worth checking.*
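For instance, a minimal bad example of the kind the new documentation could include (a sketch, not proposed wording):

```rust
fn main() {
    // A pointer dangles when its pointee's lifetime has ended.
    let dangling: *const i32 = {
        let local = 7;
        &local as *const i32
    }; // `local` is dropped here; `dangling` no longer points to live memory.

    // Creating the dangling pointer is safe; dereferencing it would be
    // undefined behavior. Inspecting the pointer value itself is fine.
    assert!(!dangling.is_null());

    // Fix: keep the pointee alive for as long as the pointer is used.
    let kept = 7;
    let valid: *const i32 = &kept;
    assert_eq!(unsafe { *valid }, 7);
}
```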
## Steps
- [ ] Add std documentation
- [ ] Update existing std docs
- [ ] Update lints diagnostics
| E-hard,E-help-wanted,A-docs,T-libs,T-opsem | medium | Major |
2,619,341,257 | flutter | [flutter_svg] Bring up web tests | Currently `flutter_svg` unit tests don't compile on web due to using non-web-only golden methods in a small number of tests. For now I've opted out the entire package from web Dart unit tests via `dart_test.yaml`, because refactoring is out of scope for the initial import, but we should make it so that the tests compile on web, and only the golden tests don't *run* on web, and remove `dart_test.yaml`, so we are testing all the platforms the package supports.
`rfw` had the same problem, and solved it [like this](https://github.com/flutter/packages/blob/85c4934bda545beff36133dc63e47de5b5c5c56b/packages/rfw/test/material_widgets_test.dart#L12-L13). | a: tests,team,package,P2,team-engine,triaged-engine,p: flutter_svg | low | Minor |
2,619,351,842 | godot | Sequential Input.parse_input_event() calls ignore all previous ones when checking Input.is_action_pressed() | ### Tested versions
4.3 stable-official
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6590) - Intel(R) Core(TM) i7-14700K (28 Threads)
### Issue description
When simulating input by generating `InputEventAction` in code, if two or more sequential actions are sent within the same function, only the last one is reported as pressed by `Input.is_action_pressed()`.
### Steps to reproduce
```gdscript
var ev_left = InputEventAction.new()
ev_left.action = "push_left"
var ev_right = InputEventAction.new()
ev_right.action = "push_right"
ev_left.pressed = true
ev_right.pressed = true
Input.parse_input_event(ev_left)
Input.parse_input_event(ev_right)
print("push_left %s, push_right %s" % [ Input.is_action_pressed("push_left"), Input.is_action_pressed("push_right")] )
# Will print: push_left false, push_right true
```
Expected: both `is_action_pressed()` calls return true.
Simply run the included project and you will see two Label sprites setting visibility based on the code above.
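A language-agnostic sketch of the symptom (this models the observed behavior only; it is not Godot's actual input implementation): if each parsed event effectively replaces the pressed-state snapshot instead of accumulating into it, only the last action reads as pressed.

```rust
use std::collections::HashSet;

#[derive(Default)]
struct InputModel {
    pressed: HashSet<&'static str>,
}

impl InputModel {
    // Models the observed behavior: each new event replaces the snapshot.
    fn parse_replacing(&mut self, action: &'static str) {
        self.pressed.clear();
        self.pressed.insert(action);
    }

    // Models the expected behavior: same-frame events accumulate.
    fn parse_accumulating(&mut self, action: &'static str) {
        self.pressed.insert(action);
    }

    fn is_action_pressed(&self, action: &str) -> bool {
        self.pressed.contains(action)
    }
}

fn main() {
    let mut observed = InputModel::default();
    observed.parse_replacing("push_left");
    observed.parse_replacing("push_right");
    assert!(!observed.is_action_pressed("push_left")); // mirrors the report
    assert!(observed.is_action_pressed("push_right"));

    let mut expected = InputModel::default();
    expected.parse_accumulating("push_left");
    expected.parse_accumulating("push_right");
    assert!(expected.is_action_pressed("push_left"));
    assert!(expected.is_action_pressed("push_right"));
}
```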
### Minimal reproduction project (MRP)
[parse_input_event_issue.zip](https://github.com/user-attachments/files/17547826/parse_input_event_issue.zip)
| bug,needs testing,topic:input | low | Minor |
2,619,361,280 | godot | Converting (one particular project) from 4.2 to 4.3 hangs for a long time (possibly on string_name.cpp, possibly related to global shader params) | ### Tested versions
- Observed in v4.3.stable.official [77dcf97d8]
### System information
Windows 10 - v4.3.stable - Vulkan Forward+
### Issue description
The issue happened when converting a fairly large project from 4.2 (stable) to 4.3 (stable). I successfully converted other projects; only this one project caused the issue.
When opening the project, Godot shows the importing progress bars as it (re)imports resources. Eventually it freezes, always at the same point. Windows shows it as not responding.
I removed the file at the point where it was hanging (in case the file itself was the issue), and it made no difference: it would instead hang on another file at that point in the progress, so the dialog was probably showing the name of the last imported file, not the file causing the issue.
When I close the _console_ window (launching Godot via the Godot_..._console.exe), the console quickly outputs the content below before closing (so quickly that I had to record the screen, pause it, and re-type it; I double-checked for typos, but mistakes are still possible).
```
ERROR: BUG: Unreferenced static string to 0: CVTT Compress
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: luminance_buffers
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: quarter_texture
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: sky_buffers
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: half_texture
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: rb_ssil
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: reflection
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: rb_ssao
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: final
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: back_color
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: back_depth
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: normal_roughness
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: VRS
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: Fog
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: sdfgi
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: _compression
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: _skip_save
at: unref (core/string/string_name.cpp:127)
ERROR: BUG: Unreferenced static string to 0: @export_range
at: unref (core/string/string_name.cpp:129)
ERROR: BUG: Unreferenced static string to 0: @export_multiline
at: unref (core/string/string_name.cpp:129)
ERROR: BUG: Unreferenced static string to 0: @export_enum
at: unref (core/string/string_name.cpp:129)
ERROR: BUG: Unreferenced static string to 0: @export
at: unref (core/string/string_name.cpp:129)
ERROR: BUG: Unreferenced static string to 0: @tool
at: unref (core/string/string_name.cpp:129)
ERROR: BUG: Unreferenced static string to 0: @warning_ignore
at: unref (core/string/string_name.cpp:129)
ERROR: BUG: Unreferenced static string to 0: @icon
at: unref (core/string/string_name.cpp:129)
ERROR: Pages in use exist at exit in PagedAllocator: N7Variant5Pools11BucketLargeE
at: ~PagedAllocator (./core/templates/paged_allocator.h:170)
```
I left it hanging for a good 20 minutes; it then output the following instead, and opened successfully:
```
WARNING: OBJ: Ambient light for material 'Material' is ignored in PBR
at: _parse_material_library (editor/import/3d/resource_importer_obj.cpp:65)
WARNING: Ignoring face with non-finite normal in LOD generation.
at: generate_lods (scene/resources/3d/importer_mesh.cpp:521)
ERROR: Condition "!unique_ids.has(p_id)" is true. Returning: String()
at: get_id_path (core/io/resource_uid.cpp:132)
ERROR: Condition "!unique_ids.has(p_id)" is true. Returning: String()
at: get_id_path (core/io/resource_uid.cpp:132)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_ascent (modules/text_server_adv/text_server_adv.cpp:2608)
ERROR: Parameter "fd" is null.
at: _font_get_descent (modules/text_server_adv/text_server_adv.cpp:2640)
ERROR: Parameter "fd" is null.
at: _font_get_supported_chars (modules/text_server_adv/text_server_adv.cpp:3500)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
ERROR: Parameter "fd" is null.
at: _font_get_supported_chars (modules/text_server_adv/text_server_adv.cpp:3500)
ERROR: Parameter "_get_font_data(p_fonts[i])" is null.
at: _shaped_text_add_string (modules/text_server_adv/text_server_adv.cpp:4307)
res://objects/stephanie/wth/stephanie_wth_anime_hair_color_mul.png: Texture detected as used in 3D. Enabling mipmap generation and setting the texture compression mode to VRAM Compressed (S3TC/ETC/BPTC).
res://objects/stephanie/wth/stephanie_wth_stephanie1_body_rough.png: Texture detected as used in 3D. Enabling mipmap generation and setting the texture compression mode to VRAM Compressed (S3TC/ETC/BPTC).
```
This was only shown the time it successfully finished importing.
Since I see the error messages (the `ERROR: BUG: Unreferenced static string` ones at `core/string/string_name.cpp`) making reference to shader keywords and exported properties, it might be relevant to mention I do have global shader parameters set for that project, and it's the only project for which I have those.
Now the project opens normally, but it _still_ prints the same error messages (the "ERROR BUG" at `string_name.cpp`) every time I close the editor for that project.
### Steps to reproduce
(I don't know how to reproduce. It happens for one project only.)
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing | low | Critical |
2,619,417,182 | go | cmd/go: document interaction of go.work with GODEBUG | We recently encountered this confusing scenario when working in the x/tools repository
- In order to use a go.work file to work on gopls (which has go 1.23.1 in its go.mod file), one has to have at least go1.23.1 in the go.work file.
- x/tools assumes that its default GODEBUG behavior implies gotypesalias=0, and has tests such as `./internal/aliases.TestNewAlias` that assume this.
Therefore, there's no way to have a go.work file for which both x/tools and x/tools/gopls tests pass. This is a problem in x/tools, because these tests also fail when run from a different main module that uses go1.23 (see #70082).
Nevertheless, [go.dev/blog/compat](http://go.dev/blog/compat) says the following:
> A program’s GODEBUG settings are configured to match the Go version listed in the main package’s go.mod file. If your program’s go.mod file says go 1.20 and you update to a Go 1.21 toolchain, any GODEBUG-controlled behaviors changed in Go 1.21 will retain their old Go 1.20 behavior until you change the go.mod to say go 1.21.
We should update that documentation to explain how GODEBUG interacts with go.work files.
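A minimal sketch of the conflicting setup (module paths and the older version are illustrative). The blog post describes the main module's go.mod as the source of GODEBUG defaults, but in workspace mode the go.work `go` line appears to take that role, which is the undocumented interaction at issue:

```
// go.work -- the workspace `go` line must be >= every used module's
// `go` line, so gopls (go 1.23.1) forces it up to at least 1.23.1:
go 1.23.1

use (
    ./tools       // go.mod here declares an older go version; its tests
                  // assume the GODEBUG defaults of that older version
    ./tools/gopls // go.mod here declares go 1.23.1
)
```

With this layout, x/tools tests run under the go 1.23.1 GODEBUG defaults (e.g. gotypesalias=1) rather than the defaults its own go.mod would imply.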
CC @timothy-king @matloob @samthanawalla | Documentation,NeedsInvestigation | low | Critical |
2,619,422,089 | rust | `assembler label '' cannot be undefined` on Windows ARM |
Whenever I try compiling [turborepo](https://github.com/vercel/turborepo) on Windows ARM, I get the following error when rustc tries to build tokio:
```rust
error: assembler label '' can not be undefined
```
I'd expect this code to compile, since it works fine on other platforms (macOS/Linux/Windows x86).
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (0d634185d 2024-08-29)
binary: rustc
commit-hash: 0d634185dfddefe09047881175f35c65d68dcff1
commit-date: 2024-08-29
host: aarch64-pc-windows-msvc
release: 1.82.0-nightly
LLVM version: 19.1.0
```
<details><summary>Backtrace</summary>
<p>
I don't seem to get a backtrace on Windows, even when setting via `set RUST_BACKTRACE=1`
Here's all I get:
```
error: assembler label '' can not be undefined
Compiling hyper v1.4.1
Compiling console v0.15.7
Compiling proc-macro-crate v3.1.0
Compiling async-graphql-parser v7.0.7
Compiling crossterm v0.27.0
Compiling turborepo-vercel-api v0.1.0 (C:\Users\nicholas\turbo\crates\turborepo-vercel-api)
Compiling turborepo-ci v0.1.0 (C:\Users\nicholas\turbo\crates\turborepo-ci)
Compiling itertools v0.12.0
Compiling stability v0.1.1
Compiling lru v0.12.2
Compiling memoffset v0.7.1
Compiling encoding_rs v0.8.32
error: could not compile `tokio` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
```
</p>
</details>
| O-windows,A-inline-assembly,T-compiler,C-bug,E-needs-mcve,O-AArch64 | low | Critical |
2,619,451,152 | PowerToys | Relative shortcut paths with New+ | ### Description of the new feature / enhancement
Could New+ add the ability to create a shortcut with a complete path, based on a relative-path shortcut in the New+ template? Currently, the trick for adding a relative-path shortcut (described in another Explorer thread) is not recognized by applications when prompting for a directory (you cannot see the relative path in "Save as"...).
### Scenario when this would be used?
With relative-path shortcuts, we could duplicate the shortcut per directory template.
### Supporting information

| Needs-Triage | low | Minor |
2,619,529,082 | ollama | Feature request: Add CLI argument to specify a system prompt | I'd like to be able to set the system prompt from the call to `ollama` in my shell, rather than in the conversation. For example:
```sh
ollama run llama3.1 --system="Your nickname is 'Grass' now"
```
...or...
```sh
ollama run llama3.1 -s "system" "Your nickname is 'Grass' now"
```
With this ability, I could set up aliases in my shell profile so that I can run system-prompt-customized versions of a model with a single command.
```sh
alias grass="ollama run llama3.1 --system=\"Your nickname is 'Grass' now\""
```
It'd be even better if I could also specify a path to a text file with a system prompt (e.g. `ollama run llama3.1 --system-file="~/system_prompts/grass.txt"`), but that wouldn't be necessary.
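To make the flag handling concrete, resolving the system prompt could look roughly like this (a hypothetical sketch in Rust for illustration only; it is not ollama's actual CLI code, and the function name and flag spellings are assumptions):

```rust
use std::fs;

// Resolve a system prompt from --system=VALUE, --system VALUE, -s VALUE,
// or --system-file=PATH (file contents win only if the file is readable).
fn system_prompt(args: &[String]) -> Option<String> {
    let mut it = args.iter();
    while let Some(a) = it.next() {
        if let Some(v) = a.strip_prefix("--system=") {
            return Some(v.to_string());
        }
        if let Some(p) = a.strip_prefix("--system-file=") {
            return fs::read_to_string(p).ok();
        }
        if a.as_str() == "--system" || a.as_str() == "-s" {
            return it.next().cloned();
        }
    }
    None
}

fn main() {
    let args: Vec<String> = vec![
        "run".into(),
        "llama3.1".into(),
        "--system=Your nickname is 'Grass' now".into(),
    ];
    assert_eq!(
        system_prompt(&args).as_deref(),
        Some("Your nickname is 'Grass' now")
    );
}
```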
This is unique from #807, whose fix only works in the conversation. | feature request | low | Minor |
2,619,537,447 | terminal | ENHANCED_KEY flag missing when releasing RightAlt | ### Windows Terminal version
current main
### Windows build number
10.0.19045.4894
### Other Software
_No response_
### Steps to reproduce
1. Run the following c++ code:
```c++
#include <iostream>
#include <windows.h>

int main()
{
    INPUT_RECORD rec;
    DWORD count;
    while (true)
    {
        ::ReadConsoleInputW(::GetStdHandle(STD_INPUT_HANDLE), &rec, 1, &count);
        if (rec.EventType == KEY_EVENT)
        {
            std::cout << "type: KEY_EVENT" << std::hex
                      << ", down: " << rec.Event.KeyEvent.bKeyDown
                      << ", ENHANCED_KEY: " << !!(rec.Event.KeyEvent.dwControlKeyState & ENHANCED_KEY)
                      << ", ctrl: 0x" << rec.Event.KeyEvent.dwControlKeyState
                      << ", vcod: 0x" << rec.Event.KeyEvent.wVirtualKeyCode
                      << ", scod: 0x" << rec.Event.KeyEvent.wVirtualScanCode
                      << ", wchr: 0x" << rec.Event.KeyEvent.uChar.UnicodeChar << '\n';
        }
    }
}
```
2. Press LeftCtrl key, press RightCtrl key, check ENHANCED_KEY flag.
3. Press LeftAlt key, press RightAlt key, check ENHANCED_KEY flag.
### Expected Behavior
The ENHANCED_KEY flag is present when RightAlt is pressed and released.
The current conhost.exe is not affected by this issue.
### Actual Behavior
The ENHANCED_KEY flag is absent when RightAlt is released, but is present when RightAlt is pressed. | Help Wanted,Area-Input,Issue-Bug,Product-Terminal | low | Minor |
2,619,540,286 | rust | Tracking issue for ergonomic reference counting | This is a tracking issue for ergonomic reference counting, including:
- https://github.com/rust-lang/rfcs/pull/3680
- https://github.com/rust-lang/rust-project-goals/issues/107
...and other work.
The feature gate for this issue is `#![feature(ergonomic_clones)]`.
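For context, the boilerplate the RFC aims to reduce looks like this today (a sketch of current stable Rust; the gated `.use` / `use ||` syntax itself is not shown here):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(String::from("config"));

    // Today, every move-closure that needs the value requires a named,
    // explicit clone beforehand; ergonomic reference counting aims to
    // make this lighter for cheaply-cloneable types like Arc.
    let for_thread = Arc::clone(&shared);
    let handle = thread::spawn(move || for_thread.len());

    assert_eq!(handle.join().unwrap(), 6);
    assert_eq!(shared.len(), 6); // the original is still usable here
}
```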
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [ ] Approve as lang experiment.
- [ ] Accept an RFC.
- [ ] Implement in nightly.
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
* Name of `UseCloned` trait which signals `x.use` to `clone`
* Precise set of `UseCloned` impls -- in particular any blanket impls we ought to be concerned about?
### Related
- https://github.com/rust-lang/rust-project-goals/issues/107
cc @spastorino @jkelleyrtp @rust-lang/lang
| T-lang,C-tracking-issue,F-ergonomic_clones | low | Critical |
2,619,576,305 | rust | Mixing `'static` and `'this` with iterators or vectors doesn't correctly shorten lifetime to the workable shorter version. | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
This is either a borrow checker bug, a library bug, or a diagnostics bug. I don't know which one.
I tried this code:
```rust
use std::collections::HashMap;
use std::collections::BTreeSet;

struct DataRoot {
    entries: Vec<Entry>
}

impl DataRoot {
    pub fn columns<'this>(&'this self) -> impl Iterator<Item = &'this str> {
        let mut dynamic_columns = BTreeSet::new();
        for entry in &self.entries {
            dynamic_columns.extend(entry.dynamic_columns());
        }
        // This combines 'this and 'static, and I want the result to be 'this
        // (which is *shorter*)
        // This doesn't work for some reason, why? And what can be done about it?
        Entry::leading_columns()
            .chain(dynamic_columns.into_iter())
            .chain(Entry::trailing_columns())
    }

    pub fn columns2<'this>(&'this self) -> impl Iterator<Item = &'this str> {
        let mut dynamic_columns = BTreeSet::new();
        for entry in &self.entries {
            dynamic_columns.extend(entry.dynamic_columns());
        }
        // This doesn't work either!
        let mut values = vec![];
        values.extend(Entry::leading_columns());
        values.extend(dynamic_columns.into_iter());
        values.extend(Entry::trailing_columns());
        values.into_iter()
    }

    pub fn columns3(&self) -> impl Iterator<Item = String> {
        let mut dynamic_columns = BTreeSet::new();
        for entry in &self.entries {
            dynamic_columns.extend(entry.dynamic_columns());
        }
        // Okay, it makes no sense that this doesn't work.
        // * I turned it into String, so it is owned
        // * I collected into a vec and then returned that (which normally works)
        let v: Vec<_> = Entry::leading_columns()
            .chain(dynamic_columns.into_iter())
            .chain(Entry::trailing_columns())
            .map(|v| v.to_string())
            .collect();
        v.into_iter()
    }
}

struct Entry {
    // Various fixed fields here...
    // Other fields that are specific to this case, capture them dynamically
    // #[serde(flatten)]
    other: HashMap<String, String>
}

impl Entry {
    fn leading_columns() -> impl Iterator<Item = &'static str> {
        ["Crate name", "URL", "Maintained", "License", "Std"].into_iter()
    }

    fn dynamic_columns(&self) -> impl Iterator<Item = &str> {
        self.other.keys().map(|s| s.as_str())
    }

    fn trailing_columns() -> impl Iterator<Item = &'static str> {
        ["Notes"].into_iter()
    }
}
```
* [Playground link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9a835ca6bf717b5b893c00987c1aa968)
* [URLO discussion](https://users.rust-lang.org/t/mixing-static-and-this-in-iterator-chain/120370)
I expected to see this happen: One of the following:
* All three cases works, and `'static` is shortened to `'this`
* The compiler errors nudge you towards a solution to the issue rather than suggesting incorrect things like adding `+ 'this`
Ideally (if this can't be fixed so that it works) the compiler error should nudge you in the direction that [jendrikw](https://users.rust-lang.org/u/jendrikw) suggested on URLO:
```rust
Entry::leading_columns().map(|s: &'static str| -> &'this str {s})
.chain(dynamic_columns.into_iter())
.chain(Entry::trailing_columns().map(|s: &'static str| -> &'this str {s}))
```
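For reference, a self-contained sketch of that suggested workaround (struct bodies trimmed for brevity, and `dynamic_columns` spelled out) — the annotated identity closures explicitly shorten `'static` to `'this` before `chain` has to unify the item types:

```rust
use std::collections::{BTreeSet, HashMap};

struct Entry {
    other: HashMap<String, String>,
}

impl Entry {
    fn leading_columns() -> impl Iterator<Item = &'static str> {
        ["Crate name", "URL"].into_iter()
    }
    fn dynamic_columns(&self) -> impl Iterator<Item = &str> {
        self.other.keys().map(|s| s.as_str())
    }
    fn trailing_columns() -> impl Iterator<Item = &'static str> {
        ["Notes"].into_iter()
    }
}

struct DataRoot {
    entries: Vec<Entry>,
}

impl DataRoot {
    pub fn columns<'this>(&'this self) -> impl Iterator<Item = &'this str> {
        let mut dynamic_columns = BTreeSet::new();
        for entry in &self.entries {
            dynamic_columns.extend(entry.dynamic_columns());
        }
        // The annotated closures coerce each &'static str down to &'this str.
        Entry::leading_columns()
            .map(|s: &'static str| -> &'this str { s })
            .chain(dynamic_columns.into_iter())
            .chain(Entry::trailing_columns().map(|s: &'static str| -> &'this str { s }))
    }
}

fn main() {
    let root = DataRoot { entries: vec![] };
    let cols: Vec<&str> = root.columns().collect();
    assert_eq!(cols, ["Crate name", "URL", "Notes"]);
    println!("{cols:?}");
}
```

If this compiles as reported on URLO, it is the pattern the diagnostics could point at instead of the incorrect `+ 'this` suggestion.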
Instead, this happened:
```text
Compiling playground v0.0.1 (/playground)
error: lifetime may not live long enough
--> src/lib.rs:19:9
|
9 | pub fn columns<'this>(&'this self) -> impl Iterator<Item = &'this str> {
| ----- lifetime `'this` defined here
...
19 | / Entry::leading_columns()
20 | | .chain(dynamic_columns.into_iter())
21 | | .chain(Entry::trailing_columns())
| |_____________________________________________^ returning this value requires that `'this` must outlive `'static`
|
help: to declare that `impl Iterator<Item = &'this str>` captures data from argument `self`, you can add an explicit `'this` lifetime bound
|
9 | pub fn columns<'this>(&'this self) -> impl Iterator<Item = &'this str> + 'this {
| +++++++
error[E0521]: borrowed data escapes outside of method
--> src/lib.rs:27:22
|
24 | pub fn columns2<'this>(&'this self) -> impl Iterator<Item = &'this str> {
| ----- ----------- `self` is a reference that is only valid in the method body
| |
| lifetime `'this` defined here
...
27 | for entry in &self.entries {
| ^^^^^^^^^^^^^
| |
| `self` escapes the method body here
| argument requires that `'this` must outlive `'static`
error[E0521]: borrowed data escapes outside of method
--> src/lib.rs:43:22
|
40 | pub fn columns3(&self) -> impl Iterator<Item = String> {
| -----
| |
| `self` is a reference that is only valid in the method body
| let's call the lifetime of this reference `'1`
...
43 | for entry in &self.entries {
| ^^^^^^^^^^^^^
| |
| `self` escapes the method body here
| argument requires that `'1` must outlive `'static`
For more information about this error, try `rustc --explain E0521`.
error: could not compile `playground` (lib) due to 3 previous errors
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
No crash, so not applicable?
</p>
</details>
| A-lifetimes,T-compiler,C-bug,T-types | low | Critical |
2,619,595,467 | tauri | [bug] assertion failed: `!bitmap.is_null()` | ### Describe the bug
```
thread 'main' panicked at C:\Users\Rose\.cargo\registry\src\index.crates.io-6f17d22bba15001f\softbuffer-0.4.6\src\backends\win32.rs:99:9:
assertion failed: !bitmap.is_null()
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[1028/172140.698:ERROR:window_impl.cc(121)] Failed to unregister class Chrome_WidgetWin_0. Error = 1412
```
Crash occurred during `pnpm tauri dev`. Happy to provide any other information needed.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22621 x86_64 (X64)
✔ WebView2: 130.0.2849.52
✔ MSVC:
- Visual Studio Build Tools 2019
- Visual Studio Build Tools 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 22.9.0
- pnpm: 9.12.2
- yarn: 1.22.19
- npm: 10.8.3
- bun: 1.1.31
- deno: deno 2.0.2
[-] Packages
- tauri 🦀: 2.0.6
- tauri-build 🦀: 2.0.2
- wry 🦀: 0.46.3
- tao 🦀: 0.30.3
- @tauri-apps/api : 2.0.3
- @tauri-apps/cli : 2.0.5
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/document/00192d1c97c09-000-T0000000000000-00
- framework: Svelte
- bundler: Vite
```
### Stack trace
```text
The process was not running with `RUST_BACKTRACE=1` when the crash occurred.
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,619,597,158 | flutter | [iOS][a11y] Widgets inside OverlayPortal don't display iOS number labels for VoiceControl | ### Steps to reproduce
1. Turn Voice Control on
2. Show numbers
3. Create a widget with actionable widgets, such as buttons inside an OverlayPortal
4. Trigger the OverlayPortal
5. Observe that the buttons inside the OverlayPortal widget don't show numbers
b/368207734
### Expected results
Buttons should display numbers, like other buttons inside the main Widget tree
### Actual results
No iOS number labels are shown inside the OverlayPortal
### Code sample
```dart
class OverlayPortalExampleApp extends StatelessWidget {
const OverlayPortalExampleApp({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('OverlayPortal Example')),
body: const Center(child: ClickableTooltipWidget()),
);
}
}
class ClickableTooltipWidget extends StatefulWidget {
const ClickableTooltipWidget({super.key});
@override
State<StatefulWidget> createState() => ClickableTooltipWidgetState();
}
class ClickableTooltipWidgetState extends State<ClickableTooltipWidget> {
final OverlayPortalController _tooltipController = OverlayPortalController();
@override
Widget build(BuildContext context) {
return TextButton(
onPressed: _tooltipController.toggle,
child: OverlayPortal(
controller: _tooltipController,
overlayChildBuilder: (BuildContext context) {
return Positioned(
right: 50,
bottom: 50,
child: Material(
color: Colors.grey,
child: Column(
children: [
TextButton(
onPressed: _tooltipController.toggle,
child: Text('Button1'),
),
TextButton(
onPressed: _tooltipController.toggle,
child: Text('Button2'),
),
TextButton(
onPressed: _tooltipController.toggle,
child: Text('Button3'),
),
],
),
),
);
},
child: const Text('Press to show/hide button'),
),
);
}
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel google3, on macOS 14.7
• Framework revision b9f62f6f9f (0 days ago), 2024-10-28T00:00:00.000
• Engine revision 2522789c41
• Dart version a75848f922
```
</details>
| platform-ios,framework,a: accessibility,has reproducible steps,P1,customer: quake (g3),found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | medium | Major |
2,619,600,914 | pytorch | LibTorch build error on Windows for CUDA version (debug/release) | ### 🐛 Describe the bug
I downloaded versions of LibTorch supporting computations on both CPU and CUDA 12.4.
I created a C++ project in Visual Studio 2022, which is built using CMake. The project includes presets for building the code in CPU (debug/release) and CUDA (debug/release) versions. Each preset set points to a different path to the directory with the unpacked binary version of LibTorch.
Building and running the sample C++ code for LibTorch in debug/release mode for CPU works as expected without any issues. However, when I choose to build the code with CUDA support (e.g., for debug), I receive error messages related to Caffe2 build failures. I see nvTools header files on my machine but no libs, DLLs, or similar (maybe it now ships as a header-only library).
I've attached the build log generated by CMake below.
```txt
1> CMake generation started for configuration: 'win-cuda-debug'.
1> Environment settings: // A few elems
...
1> DevEnvDir=C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\
1> ExtensionSdkDir=C:\Program Files (x86)\Microsoft SDKs\Windows Kits\10\ExtensionSDKs
1> is_x64_arch=true
1> VSCMD_ARG_app_plat=Desktop
1> VSCMD_ARG_HOST_ARCH=x64
1> VSCMD_ARG_no_logo=1
1> VSCMD_ARG_TGT_ARCH=x64
1> VSCMD_DEBUG=5
1> VSCMD_VER=17.11.5
1> VS_Perf_Session_GCHeapCount=2
1> OS=Windows_NT
1> NVTOOLSEXT_PATH=C:\Program Files\NVIDIA Corporation\NvToolsExt\
1> CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4
1> LibTorch=D:\_ML\libTorch\2.5.0
1> CUDA_ROOT=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4
1> CUDA_PATH_V12_5=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5
1> CUDA_PATH_V12_4=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4
1> PROCESSOR_ARCHITECTURE=AMD64
1> Command line: "C:\WINDOWS\system32\cmd.exe" /c "%SYSTEMROOT%\System32\chcp.com 65001 >NUL && "C:\PROGRAM FILES\MICROSOFT VISUAL STUDIO\2022\PROFESSIONAL\COMMON7\IDE\COMMONEXTENSIONS\MICROSOFT\CMAKE\CMake\bin\cmake.exe" -G "Ninja" -DCMAKE_TOOLCHAIN_FILE:STRING="D:\Vcpkg/scripts/buildsystems/vcpkg.cmake" -DCMAKE_C_COMPILER:STRING="cl.exe" -DCMAKE_CXX_COMPILER:STRING="cl.exe" -DVCPKG_TARGET_TRIPLET:STRING="x64-windows" -DCMAKE_BUILD_TYPE:STRING="Debug" -DTorch_DIR:STRING="D:\_ML\libTorch\2.5.0/cuda/debug/libtorch/share/cmake/Torch" -DFORCE_LINK_SYMBOL:STRING="?ignore_this_library_placeholder@@YAHXZ" -DCMAKE_INSTALL_PREFIX:PATH="D:/Projects/TorchTrade/out/install/win-cuda-debug" -DCMAKE_MAKE_PROGRAM="C:\PROGRAM FILES\MICROSOFT VISUAL STUDIO\2022\PROFESSIONAL\COMMON7\IDE\COMMONEXTENSIONS\MICROSOFT\CMAKE\Ninja\ninja.exe" "D:\Projects\TorchTrade" 2>&1"
...
1> Working directory: D:/Projects/TorchTrade/out/build/win-cuda-debug
1> [CMake] -- Running vcpkg install
1> [CMake] Detecting compiler hash for triplet x64-windows...
1> [CMake] Compiler found: C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
1> [CMake] All requested packages are currently installed.
1> [CMake] Total install time: 400 ns
1> [CMake] The package fmt provides CMake targets:
1> [CMake]
1> [CMake] find_package(fmt CONFIG REQUIRED)
1> [CMake] target_link_libraries(main PRIVATE fmt::fmt)
1> [CMake]
1> [CMake] # Or use the header-only version
1> [CMake] find_package(fmt CONFIG REQUIRED)
1> [CMake] target_link_libraries(main PRIVATE fmt::fmt-header-only)
1> [CMake]
1> [CMake] The package gtest is compatible with built-in CMake targets:
1> [CMake]
1> [CMake] enable_testing()
1> [CMake]
1> [CMake] find_package(GTest CONFIG REQUIRED)
1> [CMake] target_link_libraries(main PRIVATE GTest::gtest GTest::gtest_main GTest::gmock GTest::gmock_main)
1> [CMake]
1> [CMake] add_test(AllTestsInMain main)
1> [CMake]
1> [CMake] The package nlohmann-json provides CMake targets:
1> [CMake]
1> [CMake] find_package(nlohmann_json CONFIG REQUIRED)
1> [CMake] target_link_libraries(main PRIVATE nlohmann_json::nlohmann_json)
1> [CMake]
1> [CMake] The package nlohmann-json can be configured to not provide implicit conversions via a custom triplet file:
1> [CMake]
1> [CMake] set(nlohmann-json_IMPLICIT_CONVERSIONS OFF)
1> [CMake]
1> [CMake] For more information, see the docs here:
1> [CMake]
1> [CMake] https://json.nlohmann.me/api/macros/json_use_implicit_conversions/
1> [CMake]
1> [CMake] The package spdlog provides CMake targets:
1> [CMake]
1> [CMake] find_package(spdlog CONFIG REQUIRED)
1> [CMake] target_link_libraries(main PRIVATE spdlog::spdlog)
1> [CMake]
1> [CMake] # Or use the header-only version
1> [CMake] find_package(spdlog CONFIG REQUIRED)
1> [CMake] target_link_libraries(main PRIVATE spdlog::spdlog_header_only)
1> [CMake]
1> [CMake] -- Running vcpkg install - done
1> [CMake] -- Caffe2: CUDA detected: 12.4
1> [CMake] -- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.4/bin/nvcc.exe
1> [CMake] -- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.4
1> [CMake] -- Caffe2: Header version is: 12.4
1> [CMake] CMake Warning at D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/public/cuda.cmake:140 (message):
1> [CMake] Failed to compute shorthash for libnvrtc.so
1> [CMake] Call Stack (most recent call first):
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:86 (include)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] TorchTrade/CMakeLists.txt:5 (find_package)
1> [CMake]
1> [CMake]
1> [CMake] CMake Warning (dev) at C:/Program Files/Microsoft Visual Studio/2022/Professional/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/share/cmake-3.29/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
1> [CMake] The package name passed to `find_package_handle_standard_args` (nvtx3) does
1> [CMake] not match the name of the calling package (Caffe2). This can lead to
1> [CMake] problems in calling code that expects `find_package` result variables
1> [CMake] (e.g., `_FOUND`) to follow a certain pattern.
1> [CMake] Call Stack (most recent call first):
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/public/cuda.cmake:174 (find_package_handle_standard_args)
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:86 (include)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] TorchTrade/CMakeLists.txt:5 (find_package)
1> [CMake] This warning is for project developers. Use -Wno-dev to suppress it.
1> [CMake]
1> [CMake] -- Could NOT find nvtx3 (missing: nvtx3_dir)
1> [CMake] CMake Warning at D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/public/cuda.cmake:180 (message):
1> [CMake] Cannot find NVTX3, find old NVTX instead
1> [CMake] Call Stack (most recent call first):
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:86 (include)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] TorchTrade/CMakeLists.txt:5 (find_package)
1> [CMake]
1> [CMake]
1> [CMake] -- USE_CUDNN is set to 0. Compiling without cuDNN support
1> [CMake] -- USE_CUSPARSELT is set to 0. Compiling without cuSPARSELt support
1> [CMake] -- USE_CUDSS is set to 0. Compiling without cuDSS support
1> [CMake] -- USE_CUFILE is set to 0. Compiling without cuFile support
1> [CMake] -- Autodetected CUDA architecture(s): 7.5
1> [CMake] -- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
1> [CMake] -- Configuring done (24.7s)
1> [CMake] CMake Error at D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/public/cuda.cmake:182 (set_property):
1> [CMake] The link interface of target "torch::nvtoolsext" contains:
1> [CMake]
1> [CMake] CUDA::nvToolsExt
1> [CMake]
1> [CMake] but the target was not found. Possible reasons include:
1> [CMake]
1> [CMake] * There is a typo in the target name.
1> [CMake] * A find_package call is missing for an IMPORTED target.
1> [CMake] * An ALIAS target is missing.
1> [CMake]
1> [CMake] Call Stack (most recent call first):
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:86 (include)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] D:/_ML/libTorch/2.5.0/cuda/debug/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
1> [CMake] D:/vcpkg/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
1> [CMake] TorchTrade/CMakeLists.txt:5 (find_package)
1> [CMake] -- Generating done (0.0s)
1> [CMake] CMake Warning:
1> [CMake] Manually-specified variables were not used by the project:
1> [CMake]
1> [CMake] CMAKE_TOOLCHAIN_FILE
1> [CMake]
1> [CMake]
1> [CMake] CMake Generate step failed. Build files cannot be regenerated correctly.
1> 'C:\WINDOWS\system32\cmd.exe' '/c ... ninja.exe" "D:\Projects\TorchTrade" 2>&1"' returned with exit code: 1'.
```
### Versions
Libtorch 2.5.0, cuda 12.4 for Windows
https://download.pytorch.org/libtorch/cu124/libtorch-win-shared-with-deps-debug-2.5.0%2Bcu124.zip
I have CUDA 12.4 Toolkit installed.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jbschlosser @ptrblck @msaroufim | module: build,module: windows,module: cpp,module: cuda,triaged | low | Critical |
2,619,605,015 | next.js | Docs: Node.js version | ### What is the documentation issue?
In the Getting Started guide you suggest using Node.js `v18.18.0` and above, but for me it also doesn't work with Node.js `v19.0.0` and `v19.5.0`.
Maybe it'll be a useful catch :)
### Is there any context that might help us understand?
`MacOS Sequoia v15.0.1`
### Does the docs page already exist? Please link to it.
https://nextjs.org/docs/getting-started/installation | bug | low | Minor |
2,619,619,987 | pytorch | torch.compile + FSDP1 CPU offloading + PT lightning validation loop throws an error | ### 🐛 Describe the bug
I have a small script to reproduce how a toy model and the following three features lead to an error when combined:
1. torch.compile
2. FSDP1 with cpu offloading
3. PyTorch Lightning's validation step (I'm not sure exactly what the culprit is here; I suspect it might be related to no_grad)
e2e repro script: https://gist.github.com/vkuzo/7efbeb68a04f001c750903c63321a883 (requires torch, torchvision, PyTorch lightning)
run command:
```
CUDA_VISIBLE_DEVICES=0,1 python test_lightning.py
```
stack trace from script above: https://gist.github.com/vkuzo/43858c6248c335747747dbfeb7b39405
Note that turning off compile, turning off CPU offloading, or turning off validation each fixes the error.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241023+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_zion_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 74%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241023+cu121
[pip3] torchao==0.7.0+gitd2526126
[pip3] torchdata==0.8.0
[pip3] torchmetrics==1.5.0
[pip3] torchvision==0.20.0.dev20241023+cu121
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241023+cu121 pypi_0 pypi
[conda] torchao 0.7.0+gitd2526126 dev_0 <develop>
[conda] torchdata 0.8.0 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241023+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @penguinwu @ezyang | oncall: distributed,triaged,module: fsdp,oncall: pt2,pt2d-triage-nov2024 | low | Critical |
2,619,683,236 | react-native | TextInput value cannot be manipulated via setNativeProps after user input on iOS | ### Description
When an uncontrolled TextInput has had its value manipulated via `setNativeProps`, then receives user input, and is then manipulated again via `setNativeProps`, the displayed value does not change.
This looks to be a New Arch issue and the issue is not reproducible in 0.76.0 on the old architecture.
### Steps to reproduce
1. Install the application with `yarn ios`
2. Press "Press me!"
3. Note that the TextInput contains the text "bananas"
4. Clear the input field
5. Press "Press me!"
6. Note that the input field remains empty
### React Native Version
0.76.0
### Affected Platforms
Runtime - iOS
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 14.7
CPU: (16) arm64 Apple M3 Max
Memory: 2.53 GB / 48.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.20.4
path: /opt/homebrew/Cellar/node@18/18.20.4_1/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.7.0
path: /opt/homebrew/Cellar/node@18/18.20.4_1/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: Not Found
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0-alpha.2
wanted: 15.0.0-alpha.2
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.0
wanted: 0.76.0
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
None
```
### Reproducer
https://github.com/mhoran/new-arch-set-text-repro/
### Screenshots and Videos
_No response_ | Platform: iOS,Issue: Author Provided Repro,Component: TextInput,Type: New Architecture | low | Minor |
2,619,698,451 | flutter | Migrate some framework mac hostonly tests to be pinned to arm64 | The intel macs are increasingly experiencing hardware issues, so to de-risk framework CI, we will slowly migrate framework tests from intel mac to arm mac a few at a time. | team-infra,P2,triaged-infra | low | Major |
2,619,699,148 | go | proposal: os: add iterator variant of File.ReadDir | ### Proposal Details
Today we have:
https://pkg.go.dev/os#File.ReadDir etc
```go
func (f *File) ReadDir(n int) ([]DirEntry, error)
```
That `n` feels so antiquated now that we have iterators! I propose that we add an iterator-based variant. 😄
/cc @neild | Proposal | medium | Critical |
2,619,707,520 | neovim | Setting `winfixbuf` immediately after startup will cause window duplication | ### Problem
In my case, using `vim.schedule` fixed this
Ref: https://github.com/kawre/leetcode.nvim/issues/141
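A minimal sketch of that workaround as it would appear in the repro's `init.lua` — deferring the option set until the main loop is idle:

```lua
-- Defer setting the window-local option until after startup has settled.
vim.schedule(function()
  vim.api.nvim_set_option_value("winfixbuf", true, {})
end)
```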
### Steps to reproduce
```lua
-- Run this file as `nvim --clean -u minimal.lua`
for name, url in pairs({
-- ADD PLUGINS _NECESSARY_ TO REPRODUCE THE ISSUE, e.g:
-- 'https://github.com/author1/plugin1',
-- 'https://github.com/author2/plugin2',
}) do
local install_path = vim.fn.fnamemodify("nvim_issue/" .. name, ":p")
if vim.fn.isdirectory(install_path) == 0 then
vim.fn.system({ "git", "clone", "--depth=1", url, install_path })
end
vim.opt.runtimepath:append(install_path)
end
-- ADD INIT.LUA SETTINGS _NECESSARY_ FOR REPRODUCING THE ISSUE
vim.api.nvim_set_option_value("winfixbuf", true, {})
```
### Expected behavior
.
### Nvim version (nvim -v)
v0.10.2
### Vim (not Nvim) behaves the same?
no, 9.1.785
### Operating system/version
6.11.5-arch1-1
### Terminal name/version
kitty
### $TERM environment variable
xterm-kitty
### Installation
pacman | bug,has:workaround,startup,netrw | low | Minor |
2,619,707,866 | pytorch | torch.nn.AvgPool2d works with long on cpu but not gpu | ### 🐛 Describe the bug
Trying to average pool over integers works on CPU but fails on CUDA [colab](https://colab.research.google.com/drive/1UufNjNlX6dwPFH0D_ooBKURVPhERziHw?usp=sharing)
Minimal repro:
```python
import torch
input_tensor = torch.randint(3, 5, (20, 16, 50, 32))
output_cpu = torch.nn.AvgPool2d(3, stride=2)(input_tensor)
output_gpu = torch.nn.AvgPool2d(3, stride=2)(input_tensor.cuda())
# RuntimeError: "avg_pool2d_out_cuda_frame" not implemented for 'Long'
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 4118.642
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim @mikaylagawarecki | module: cuda,triaged,module: pooling | low | Critical |
2,619,723,569 | pytorch | `_adjust_num_blocks_and_indices` gives wrong adjusted block mask | ### 🐛 Describe the bug
Repro:
```python
import torch
from torch.nn.attention.flex_attention import (
    _adjust_num_blocks_and_indices,
    create_block_mask,
)

def mask_mod(b, h, q, kv):
    b1 = torch.where(kv >= 0, True, False) & torch.where(kv < 128, True, False)
    b2 = torch.where(kv >= 384, True, False) & torch.where(kv < 512, True, False)
    return b1 | b2

block_mask = create_block_mask(mask_mod, 1, 1, 128, 640)
adjusted_full_kv_num_blocks, adjusted_full_kv_indices = _adjust_num_blocks_and_indices(
    block_mask.full_kv_num_blocks,
    block_mask.full_kv_indices,
    new_num_rows=1,
    new_num_cols=3,
)
print(f"original full_kv_num_blocks: {block_mask.full_kv_num_blocks}")
print(f"original full_kv_indices: {block_mask.full_kv_indices}")
print(f"adjusted full_kv_num_blocks: {adjusted_full_kv_num_blocks}")
print(f"adjusted full_kv_indices: {adjusted_full_kv_indices}")
"""
original full_kv_num_blocks: tensor([[[2]]], device='cuda:0', dtype=torch.int32)
original full_kv_indices: tensor([[[[0, 3, 1, 2, 4]]]], device='cuda:0', dtype=torch.int32)
adjusted full_kv_num_blocks: tensor([[[2]]], device='cuda:0', dtype=torch.int32)
adjusted full_kv_indices: tensor([[[[0, 3, 1]]]], device='cuda:0', dtype=torch.int32)
Expected adjusted full_kv_num_blocks: tensor([[[1]]], device='cuda:0', dtype=torch.int32)
"""
```
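For reference, the adjustment the reporter expects can be sketched in pure Python (`adjust` below is a hypothetical illustration, not the actual `_adjust_num_blocks_and_indices` implementation): after truncating to `new_num_cols` columns, only the originally-active indices that still fall inside the new width should count toward `num_blocks`.

```python
def adjust(num_blocks: int, indices: list[int], new_num_cols: int) -> tuple[int, list[int]]:
    # The first `num_blocks` entries of `indices` are the active blocks.
    active = indices[:num_blocks]
    # Only active blocks whose index survives the truncation stay active.
    kept = [i for i in active if i < new_num_cols]
    # Re-pad with the remaining in-range indices so the row stays dense.
    padding = [i for i in indices if i < new_num_cols and i not in kept]
    return len(kept), (kept + padding)[:new_num_cols]

print(adjust(2, [0, 3, 1, 2, 4], 3))  # (1, [0, 1, 2])
```

On the repro's data this yields `num_blocks == 1` (only block index 0 survives the cut to 3 columns), matching the expected output in the repro, whereas the actual function still returns 2.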
### Versions
PyTorch version: 2.6.0a0+git853e0fa
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-0_fbk12_hardened_11583_g0bef9520ca2b-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.1.1
/usr/lib64/libcudnn_adv.so.9.1.1
/usr/lib64/libcudnn_cnn.so.9.1.1
/usr/lib64/libcudnn_engines_precompiled.so.9.1.1
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.1
/usr/lib64/libcudnn_graph.so.9.1.1
/usr/lib64/libcudnn_heuristic.so.9.1.1
/usr/lib64/libcudnn_ops.so.9.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 184
On-line CPU(s) list: 0-183
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 184
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 11.5 MiB (184 instances)
L1i cache: 11.5 MiB (184 instances)
L2 cache: 92 MiB (184 instances)
L3 cache: 2.9 GiB (184 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-183
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0a0+git853e0fa
[conda] blas 1.0 mkl
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0a0+git853e0fa dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,619,745,517 | svelte | onintrostart does not respect delay | ### Describe the bug
The `onintrostart` callback fires immediately, before the element has been mounted, ignoring the `delay` parameter. This happens in both Svelte 4 and 5.
### Reproduction
https://svelte.dev/playground/9e45fd9f78974bb6a4b7714b1eed5417?version=5.1.3
### Logs
_No response_
### System Info
```shell
System:
OS: macOS 14.6.1
CPU: (8) arm64 Apple M1
Memory: 107.25 MB / 8.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.10.0 - ~/Library/Caches/fnm_multishells/2228_1725926009881/bin/node
npm: 10.2.3 - ~/Library/Caches/fnm_multishells/2228_1725926009881/bin/npm
pnpm: 9.4.0 - ~/Library/pnpm/pnpm
Browsers:
Chrome: 130.0.6723.70
Safari: 17.6
```
### Severity
annoyance | bug,transition/animation | low | Critical |
2,619,762,975 | pytorch | shift operators not supporting integral types | ### 🐛 Describe the bug
`bitwise_right_shift` only works for `uint8`, not for the other unsigned integral dtypes, even though they are documented here: https://pytorch.org/docs/stable/generated/torch.bitwise_right_shift.html
```py
>>> import torch
>>> tensor = torch.randn([4, 4], dtype=torch.bfloat16)
>>> tensor_view = tensor.view(torch.uint32).view(-1)
>>> torch.bitwise_right_shift(tensor_view, 1)
RuntimeError: "rshift_cpu" not implemented for 'UInt32'
```
If I try uint16, I get a similar error:
```py
>>> import torch
>>> tensor = torch.randn([4, 4], dtype=torch.bfloat16)
>>> tensor_view = tensor.view(torch.uint16).view(-1)
>>> torch.bitwise_right_shift(tensor_view, 1)
RuntimeError: "rshift_cpu" not implemented for 'UInt16'
```
If I try on CUDA, the error changes to "rshift_cuda".
```py
>>> import torch
>>> tensor = torch.randn([4, 4], dtype=torch.bfloat16, device="cuda")
>>> tensor_view = tensor.view(torch.uint32).view(-1)
>>> torch.bitwise_right_shift(tensor_view, 1)
RuntimeError: "rshift_cuda" not implemented for 'UInt32'
```
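As an aside on what the repro is doing: `.view(torch.uint32)` reinterprets the float bits as an unsigned integer pattern, which can be illustrated in pure Python with `struct` (illustration only — `float32` stands in for bfloat16 since `struct` has no bfloat16 format code, and this is not a workaround for the missing kernel):

```python
import struct

def f32_bits(x: float) -> int:
    # Reinterpret a float32 as its 32-bit unsigned integer bit pattern,
    # analogous to tensor.view(torch.uint32) in the repro above.
    return struct.unpack("<I", struct.pack("<f", x))[0]

bits = f32_bits(3.0)
print(hex(bits))       # 0x40400000
print(hex(bits >> 1))  # 0x20200000 — plain Python ints shift at any width
```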
### Versions
```
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.118
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8462Y+
Stepping: 8
CPU MHz: 4100.000
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 128 MiB
L3 cache: 120 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] torch==2.5.0
[pip3] torchdata==0.9.0
[pip3] triton==3.1.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torchdata 0.9.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
``` | triaged,module: unsigned int | low | Critical |
2,619,779,803 | godot | Can't record microphone (Windows 10 Pro Version 22H2) | ### Tested versions
I'm having issues in every version I tested (can't remember them all)
My Project is running on Godot v4.3 ([GodotSteam 4.11](https://github.com/GodotSteam/GodotSteam/releases/tag/v4.11))
Also tried the Recording Demo, but the problem persists
### System information
Windows 10 Pro - GodotSteam 4.11 - OpenGL3 - Forward+
### Issue description
I want to implement a voice chat and therefore need to record the microphone. I can hear myself when I unmute my mute bus (I have a Record bus and a Mute bus), but when trying to record the microphone I only get an array of 0s. When monitoring the decibels, everything seems to be right.
I tried this on another system (Windows 10 Home) and it worked. Apps like Discord work without any trouble on my system. I even tried recording audio with a custom Python script and got a recording.
This is really frustrating and I would appreciate some help :-)
### Steps to reproduce
To reproduce this you'd just need an Input AudioStream with a Recording bus that uses an AudioEffectRecord, and an output (such as an AudioStreamPlayer) or any other way to inspect your `get_recording().data`
### Minimal reproduction project (MRP)
This script gets called every X frames, if a certain decibel threshold has been surpassed:
```gdscript
func gatherRecordingData() -> AudioStreamWAV:
	# 'effect' is assumed to be the AudioEffectRecord fetched from the Record bus, e.g.
	# var effect = AudioServer.get_bus_effect(AudioServer.get_bus_index("Record"), 0)
	var sample: AudioStreamWAV = AudioStreamWAV.new()
	sample.mix_rate = 22050
	print(effect.get_recording().data)
	sample.data = effect.get_recording().data
	sample.format = AudioStreamWAV.FORMAT_16_BITS
	print(sample.data) # returns just 0s
	return sample
```
 | bug,needs testing,topic:audio | low | Minor |
2,619,785,276 | pytorch | [pgnccl] unstable restarting of PGs | ### 🐛 Describe the bug
We are writing a simple UT to verify that NCCL comms can be restarted (abort first, then re-initialize), e.g., https://github.com/pytorch/pytorch/pull/139123. However, we found the test is unstable under non-blocking mode. Sometimes it errors like below, and we are curious about the root cause:
```
[I1028 13:05:11.985271293 ProcessGroupNCCL.cpp:2485] [PG ID 1 PG GUID 0(default_pg) Rank 1] NCCL_DEBUG: INFO
devgpu009:2131383:2143887 [1] NCCL INFO Using non-device net plugin version 0
devgpu009:2131383:2143887 [1] NCCL INFO Using network Socket
devgpu009:2131383:2143887 [1] misc/socket.cc:484 NCCL WARN socketStartConnect: Connect to 2803:6080:a020:37e7::1<45353> failed : Software caused connection abort
devgpu009:2131383:2143887 [1] NCCL INFO misc/socket.cc:567 -> 2
devgpu009:2131383:2143887 [1] NCCL INFO misc/socket.cc:621 -> 2
devgpu009:2131383:2143887 [1] NCCL INFO bootstrap.cc:285 -> 2
devgpu009:2131383:2143887 [1] NCCL INFO init.cc:1534 -> 2
devgpu009:2131383:2143887 [1] NCCL INFO group.cc:64 -> 2 [Async thread]
devgpu009:2131383:2143886 [0] NCCL INFO group.cc:64 -> 2 [Async thread]
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] Caught exception:
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] Traceback (most recent call last):
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] File "/home/sqzhang/.conda/envs/sqzhang_1/lib/python3.12/site-packages/torch/testing/_internal/common_distributed.py", line 692, in run_test
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] getattr(self, test_name)()
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] File "/home/sqzhang/.conda/envs/sqzhang_1/lib/python3.12/site-packages/torch/testing/_internal/common_distributed.py", line 565, in wrapper
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] fn()
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] File "/home/sqzhang/.conda/envs/sqzhang_1/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2996, in wrapper
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] method(*args, **kwargs)
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] File "/home/sqzhang/pytorch/test/distributed/test_c10d_nccl.py", line 382, in test_restart_pg
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] dist.all_reduce(t1)
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] File "/home/sqzhang/.conda/envs/sqzhang_1/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 83, in wrapper
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] return func(*args, **kwargs)
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] ^^^^^^^^^^^^^^^^^^^^^
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] File "/home/sqzhang/.conda/envs/sqzhang_1/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 2698, in all_reduce
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] work = group.allreduce([tensor], opts)
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] torch.distributed.DistBackendError: NCCL error in: /home/sqzhang/pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:36, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] Last error:
[rank1]:E1028 13:05:11.472000 2131383 site-packages/torch/testing/_internal/common_distributed.py:699] socketStartConnect: Connect to 2803:6080:a020:37e7::1<45353> failed : Software caused connection abort
```
### Versions
(sqzhang_1) [sqzhang@devgpu009.cln1 ~/pytorch (4a75dbd7)]$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: 18.1.8 (CentOS 18.1.8-3.el9)
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.12.0 | packaged by Anaconda, Inc. | (main, Oct 2 2023, 17:29:18) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.4
/usr/lib64/libcudnn_adv_infer.so.8.9.4
/usr/lib64/libcudnn_adv_train.so.8.9.4
/usr/lib64/libcudnn_cnn_infer.so.8.9.4
/usr/lib64/libcudnn_cnn_train.so.8.9.4
/usr/lib64/libcudnn_ops_infer.so.8.9.4
/usr/lib64/libcudnn_ops_train.so.8.9.4
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.13.0
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] pytorch-triton==2.2.0+e28a256d71
[pip3] torch==2.3.0a0+git5894af8
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] pytorch-triton 2.2.0+e28a256d71 pypi_0 pypi
[conda] torch 2.3.0a0+gitdeb4f2c pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: c10d | low | Critical |
2,619,790,926 | pytorch | [FlexAttention] Update the way we generate kernel options | # Summary
FlexAttention differs from other TritonTemplates currently supported in inductor (mm variants). Traditional templates are typically only enabled when compiling with max-autotune and sweep over many configurations, trying all potential options (relying on aten as a fallback). This approach allows templates to try both erroring and successful configs, with no issues as long as at least one viable configuration is found (typically at least Aten).
However, FlexAttention has encountered scenarios where the default config for certain hardware types runs out of shared memory. For example, this issue was reported in torchtune: https://github.com/pytorch/torchtune/pull/1835#discussion_r1810401819.
To address this issue, we've made the kernel options user-modifiable. However, this should ideally be a last resort rather than the only way to run FlexAttention.
Beyond the hard errors with compile + Flex, there are also potential performance cliffs with default configs. For instance, we've observed that default configs can reduce occupancy by half, leading to approximately 75% performance degradation on certain mask modes.
## Proposal
We should enable smarter kernel selection based on available resources at runtime. This involves two key steps:
1. Use default device properties to more intelligently pick the base config option without autotune.
2. Investigate what compilation metrics are returned by Triton to:
- Pick more optimal configurations
- Prune the search space in max-autotune compilation
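Step 1 could look roughly like the following sketch (the function name, tile sizes, and shared-memory budget are illustrative assumptions, not inductor APIs): shrink the block size until the Q/K/V tiles fit within the device's shared memory.

```python
def pick_base_config(shared_mem_bytes: int, head_dim: int, dtype_size: int = 2) -> dict:
    # Rough budget: three BLOCK x head_dim tiles (Q, K, V) must fit in shared memory.
    for block in (128, 64, 32, 16):
        if 3 * block * head_dim * dtype_size <= shared_mem_bytes:
            return {"BLOCK_M": block, "BLOCK_N": block}
    return {"BLOCK_M": 16, "BLOCK_N": 16}  # last-resort minimal tile

# A device with ~100 KiB of shared memory can take the large tile;
# one with 48 KiB falls back to a smaller one instead of erroring out.
print(pick_base_config(shared_mem_bytes=101376, head_dim=128))  # {'BLOCK_M': 128, 'BLOCK_N': 128}
print(pick_base_config(shared_mem_bytes=49152, head_dim=128))   # {'BLOCK_M': 64, 'BLOCK_N': 64}
```

The same resource estimate could also help with step 2 by discarding max-autotune candidate configs that would exceed the budget before compiling them.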
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: inductor,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,619,856,070 | deno | node-fetch works - Deno fetch fails to make api calls to openai | Version: Deno 2.0.3
I have tested this on both a Mac and Windows WSL
Using node-fetch always works.
When using Deno's default fetch, increasing the payload size produces an error:
```
failed with error: error sending request from x.x.x.x:53270 for https://api.openai.com/v1/chat/completions (172.66.0.243:443): client error (SendRequest): http2 error: stream error received: unspecific protocol error detected
```
If we reduce `promptLength` to 200, we get successful requests.
I haven't seen this happen with other APIs yet. It looks like OpenAI proxies through Cloudflare.
```typescript
import { faker } from '@faker-js/faker';
import { encoding_for_model } from "tiktoken";
import { env } from "node:process";
import fetch from "node-fetch";

const enc = encoding_for_model("gpt-4o-mini");
const promptLength = 2000; // A lower multiplier like 200 will usually work. But increasing the payload causes errors.
const numRequests = 100;
const prompt = `${faker.lorem.paragraph(promptLength)}\n\nCreate a react app.`;
const tokens = enc.encode(prompt);
console.log(tokens.length);

async function makeRequest(id) {
  try {
    const startTime = Date.now();
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${env["OPENAI_API_KEY"]}`
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: prompt }]
      })
    });
    if (response.status == 200) {
      const data = await response.json();
      const elapsedTime = Date.now() - startTime;
      const tokensPerSecond = (data.usage.completion_tokens / elapsedTime) * 1000;
      return { success: true, elapsedTime, tokensPerSecond };
    } else {
      return { success: false, error: await response.text() };
    }
  } catch (error) {
    return { success: false, error: error.message };
  }
}

async function main() {
  const numRequests = 100;
  console.log(`Starting performance test of ${numRequests} parallel requests...`);
  let remainingIds = [...Array(numRequests).keys()];
  const promises = [];
  for (let i = 0; i < numRequests; i++) {
    promises.push(
      new Promise(async (resolve) => {
        console.log(`request ${i} started`);
        const result = await makeRequest(i);
        remainingIds = remainingIds.filter(id => id !== i);
        if (result.success) {
          console.log(`request ${i} completed in ${result.elapsedTime}ms with ${result.tokensPerSecond.toFixed(2)} tokens/sec`);
        } else {
          console.log(`request ${i} failed with error: ${result.error}`);
          console.log(`Remaining requests: ${remainingIds.join(', ')}`);
        }
        resolve(result);
      })
    );
  }

  const results = await Promise.all(promises);
  const successfulRequests = results.filter(r => r.success);
  const failedRequests = results.filter(r => !r.success);
  const responseTimes = successfulRequests.map(r => r.elapsedTime);
  const tokenSpeeds = successfulRequests.map(r => r.tokensPerSecond);
  const avgTime = responseTimes.length ? responseTimes.reduce((a, b) => a + b, 0) / responseTimes.length : 0;
  const minTime = responseTimes.length ? Math.min(...responseTimes) : 0;
  const maxTime = responseTimes.length ? Math.max(...responseTimes) : 0;
  const avgTokensPerSecond = tokenSpeeds.length ? tokenSpeeds.reduce((a, b) => a + b, 0) / tokenSpeeds.length : 0;
  const minTokensPerSecond = tokenSpeeds.length ? Math.min(...tokenSpeeds) : 0;
  const maxTokensPerSecond = tokenSpeeds.length ? Math.max(...tokenSpeeds) : 0;

  console.log(`Performance results:
Successful requests: ${successfulRequests.length}
Failed requests: ${failedRequests.length}
Average response time: ${avgTime.toFixed(2)}ms
Minimum response time: ${minTime}ms
Maximum response time: ${maxTime}ms
Average tokens per second: ${avgTokensPerSecond.toFixed(2)}
Minimum tokens per second: ${minTokensPerSecond.toFixed(2)}
Maximum tokens per second: ${maxTokensPerSecond.toFixed(2)}
`);

  if (failedRequests.length > 0) {
    console.log('\nError messages from failed requests:');
    failedRequests.forEach((req, index) => {
      console.log(`${index + 1}. ${req.error}`);
    });
  }
}

main().catch(console.error);
``` | bug,node:http | low | Critical |
2,619,869,019 | pytorch | DISABLED test_manual_with_data_parallel_dp_type_DDP_ScheduleClass0_use_new_runtime_False (__main__.ComposabilityTest) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_manual_with_data_parallel_dp_type_DDP_ScheduleClass0_use_new_runtime_False&suite=ComposabilityTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32178589958).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 8 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_manual_with_data_parallel_dp_type_DDP_ScheduleClass0_use_new_runtime_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 563, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 795, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 849, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 300.0627839565277 seconds
```
</details>
Test file path: `distributed/_composable/test_composability/test_pp_composability.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 | oncall: distributed,module: flaky-tests,skipped | low | Critical |
2,619,894,635 | pytorch | torch.nn.InstanceNorm2d throws "mixed dtype" error with track_running_stats set to True | ### 🐛 Describe the bug
When running `torch.nn.InstanceNorm2d` with a `float64` tensor and `track_running_stats=True`, an error is thrown on CPU:
`RuntimeError: mixed dtype (CPU): all inputs must share same datatype.`
([colab](https://colab.research.google.com/drive/13_vZBI2ITcKdqIwvYHfyW4Ot0gbz8nVP?usp=sharing))
Minimal repro:
```python
import torch
input_tensor = torch.rand(5, 10, 5, dtype=torch.float64)
output = torch.nn.InstanceNorm2d(num_features=5, track_running_stats=True)(input_tensor)
# RuntimeError: mixed dtype (CPU): all inputs must share same datatype
```
Note that the error only happens on CPU; the GPU path works fine.
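A possible workaround until this is fixed (a sketch, assuming the mismatch is between the module's float32 `running_mean`/`running_var` buffers and the float64 input): cast the module itself to double so the running-stat buffers match the input dtype.

```python
import torch

input_tensor = torch.rand(5, 10, 5, dtype=torch.float64)

# .double() casts the module's parameters AND its running_mean/running_var
# buffers to float64, so the CPU batch_norm kernel no longer sees mixed dtypes.
norm = torch.nn.InstanceNorm2d(num_features=5, track_running_stats=True).double()
output = norm(input_tensor)
```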
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 4118.642
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | good first issue,module: cpu,triaged,module: norms and normalization | low | Critical |
2,619,914,833 | vscode | Tool invoked without any arguments in chat extension | * Install `data analysis copilot` & `websearch copilot` extension
* Install latest insiders with latest pre-release versions of related extensions
* Run the prompt `@data what is the current time`
* Set log level to `trace`
It seems the data analysis Python tool was invoked without any arguments:
```
{
"callId": "call_Eaf5ZjAujqGCUjOlbQWRtqSg",
"name": "dachat_data_runPython",
"input": "{}",
"parameters": "{}"
}
```
here are the full logs from the data analysis extension
<details>
<summary>Show Logs</summary>
```
2024-10-29 11:51:48.076 [info] Pyodide => Kernel ctor
2024-10-29 11:51:48.076 [info] Pyodide => Location: /Users/donjayamanne/Development/vsc/vscode-data-analysis-for-copilot/scenarios
2024-10-29 11:51:48.076 [info] Pyodide => Pyodide Url: file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.1.9/pyodide/pyodide.js
2024-10-29 11:51:48.076 [info] Pyodide => Pyodide Index: /Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.1.9/pyodide
2024-10-29 11:51:48.076 [info] Pyodide => Packages: matplotlib, pandas, file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.1.9/pyodide/seaborn-0.13.2-py3-none-any.whl
2024-10-29 11:51:49.468 [info] Pyodide => Micropip Output >> Loading micropip, packaging
2024-10-29 11:51:49.482 [info] Pyodide => Micropip Output >> Loaded micropip, packaging
2024-10-29 11:51:50.072 [info] Pyodide => Loading traitlets
2024-10-29 11:51:50.083 [info] Pyodide => Loaded traitlets
2024-10-29 11:51:50.209 [info] Pyodide => ssl already loaded from default channel
2024-10-29 11:51:50.209 [info] Pyodide => No new packages to load
2024-10-29 11:51:50.270 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:51:50.270 [info] Pyodide => No new packages to load
2024-10-29 11:51:50.331 [info] Pyodide => Pyodide Fetching file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.1.9/pyodide/pypi/all.json
2024-10-29 11:51:50.561 [info] Pyodide => traitlets already loaded from default channel
2024-10-29 11:51:50.561 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:51:50.561 [info] Pyodide => Loading Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:51:50.663 [info] Pyodide => Loaded Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:51:50.788 [info] Pyodide => six already loaded from default channel
2024-10-29 11:51:50.788 [info] Pyodide => packaging already loaded from default channel
2024-10-29 11:51:50.788 [info] Pyodide => Loading Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:51:51.127 [info] Pyodide => Loaded Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:51:51.250 [info] Pyodide => numpy already loaded from default channel
2024-10-29 11:51:51.250 [info] Pyodide => python-dateutil already loaded from default channel
2024-10-29 11:51:51.250 [info] Pyodide => pytz already loaded from default channel
2024-10-29 11:51:51.250 [info] Pyodide => Loading pandas
2024-10-29 11:51:51.489 [info] Pyodide => Loaded pandas
2024-10-29 11:55:45.575 [info] Pyodide => Kernel ctor
2024-10-29 11:55:45.575 [info] Pyodide => Location: /Users/donjayamanne/Development/vsc/vscode-data-analysis-for-copilot/scenarios
2024-10-29 11:55:45.575 [info] Pyodide => Pyodide Url: file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pyodide.js
2024-10-29 11:55:45.575 [info] Pyodide => Pyodide Index: /Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide
2024-10-29 11:55:45.575 [info] Pyodide => Packages: matplotlib, pandas, file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/seaborn-0.13.2-py3-none-any.whl
2024-10-29 11:55:47.029 [info] Pyodide => Micropip Output >> Loading micropip, packaging
2024-10-29 11:55:47.052 [info] Pyodide => Micropip Output >> Loaded micropip, packaging
2024-10-29 11:55:47.641 [info] Pyodide => Loading traitlets
2024-10-29 11:55:47.654 [info] Pyodide => Loaded traitlets
2024-10-29 11:55:47.782 [info] Pyodide => ssl already loaded from default channel
2024-10-29 11:55:47.783 [info] Pyodide => No new packages to load
2024-10-29 11:55:47.846 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:55:47.846 [info] Pyodide => No new packages to load
2024-10-29 11:55:47.909 [info] Pyodide => Pyodide Fetching file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pypi/all.json
2024-10-29 11:55:48.150 [info] Pyodide => traitlets already loaded from default channel
2024-10-29 11:55:48.150 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:55:48.150 [info] Pyodide => Loading Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:55:48.201 [info] Received tool call dachat_data_runPython
2024-10-29 11:55:48.289 [info] Pyodide => Loaded Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:55:48.420 [info] Pyodide => six already loaded from default channel
2024-10-29 11:55:48.420 [info] Pyodide => packaging already loaded from default channel
2024-10-29 11:55:48.420 [info] Pyodide => Loading Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:55:48.430 [info] Pyodide => Didn't find package pillow-10.2.0-cp312-cp312-pyodide_2024_0_wasm32.whl locally, attempting to load from https://cdn.jsdelivr.net/pyodide/v0.26.2/full/
2024-10-29 11:55:48.430 [info] Pyodide => Pyodide Fetching https://cdn.jsdelivr.net/pyodide/v0.26.2/full/pillow-10.2.0-cp312-cp312-pyodide_2024_0_wasm32.whl
2024-10-29 11:55:48.730 [info] Pyodide => Package pillow-10.2.0-cp312-cp312-pyodide_2024_0_wasm32.whl loaded from https://cdn.jsdelivr.net/pyodide/v0.26.2/full/, caching the wheel in node_modules for future use.
2024-10-29 11:55:49.394 [info] Pyodide => Loaded Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:55:49.394 [info] Pyodide => numpy already loaded from default channel
2024-10-29 11:55:49.394 [info] Pyodide => python-dateutil already loaded from default channel
2024-10-29 11:55:49.394 [info] Pyodide => pytz already loaded from default channel
2024-10-29 11:55:49.394 [info] Pyodide => Loading pandas
2024-10-29 11:55:49.394 [info] Pyodide => Loaded pandas
2024-10-29 11:55:50.597 [info] Pyodide => pandas already loaded from default channel
2024-10-29 11:55:50.597 [info] Pyodide => No new packages to load
2024-10-29 11:55:52.523 [info] Token count 1382
2024-10-29 11:56:08.178 [info] Received tool call dachat_data_runPython
2024-10-29 11:56:08.180 [info] Pyodide => pandas already loaded from default channel
2024-10-29 11:56:08.180 [info] Pyodide => No new packages to load
2024-10-29 11:56:08.211 [info] Token count 1346
2024-10-29 11:56:21.132 [info] Pyodide => Kernel ctor
2024-10-29 11:56:21.132 [info] Pyodide => Location: /Users/donjayamanne/Development/vsc/vscode-data-analysis-for-copilot/scenarios
2024-10-29 11:56:21.132 [info] Pyodide => Pyodide Url: file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pyodide.js
2024-10-29 11:56:21.132 [info] Pyodide => Pyodide Index: /Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide
2024-10-29 11:56:21.132 [info] Pyodide => Packages: matplotlib, pandas, file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/seaborn-0.13.2-py3-none-any.whl
2024-10-29 11:56:22.540 [info] Pyodide => Micropip Output >> Loading micropip, packaging
2024-10-29 11:56:22.555 [info] Pyodide => Micropip Output >> Loaded micropip, packaging
2024-10-29 11:56:23.196 [info] Pyodide => Loading traitlets
2024-10-29 11:56:23.206 [info] Pyodide => Loaded traitlets
2024-10-29 11:56:23.336 [info] Pyodide => ssl already loaded from default channel
2024-10-29 11:56:23.336 [info] Pyodide => No new packages to load
2024-10-29 11:56:23.399 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:56:23.399 [info] Pyodide => No new packages to load
2024-10-29 11:56:23.461 [info] Pyodide => Pyodide Fetching file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pypi/all.json
2024-10-29 11:56:23.695 [info] Pyodide => traitlets already loaded from default channel
2024-10-29 11:56:23.695 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:56:23.695 [info] Pyodide => Loading Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:56:23.800 [info] Pyodide => Loaded Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:56:23.926 [info] Pyodide => six already loaded from default channel
2024-10-29 11:56:23.926 [info] Pyodide => packaging already loaded from default channel
2024-10-29 11:56:23.926 [info] Pyodide => Loading Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:56:24.292 [info] Pyodide => Loaded Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:56:24.418 [info] Pyodide => numpy already loaded from default channel
2024-10-29 11:56:24.418 [info] Pyodide => python-dateutil already loaded from default channel
2024-10-29 11:56:24.418 [info] Pyodide => pytz already loaded from default channel
2024-10-29 11:56:24.418 [info] Pyodide => Loading pandas
2024-10-29 11:56:24.663 [info] Pyodide => Loaded pandas
2024-10-29 11:56:26.151 [info] Received tool call dachat_data_runPython
2024-10-29 11:56:26.156 [info] Pyodide => pandas already loaded from default channel
2024-10-29 11:56:26.156 [info] Pyodide => No new packages to load
2024-10-29 11:56:28.003 [info] Token count 1385
2024-10-29 11:56:38.447 [info] Received tool call dachat_data_runPython
2024-10-29 11:56:38.464 [info] Token count 1028
2024-10-29 11:56:40.709 [info] Received tool call dachat_data_runPython
2024-10-29 11:56:40.713 [info] Pyodide => matplotlib already loaded from default channel
2024-10-29 11:56:40.713 [info] Pyodide => No new packages to load
2024-10-29 11:56:42.410 [info] Token count 1214
2024-10-29 11:57:45.267 [info] Pyodide => Kernel ctor
2024-10-29 11:57:45.267 [info] Pyodide => Location: /Users/donjayamanne/Development/vsc/vscode-data-analysis-for-copilot/scenarios
2024-10-29 11:57:45.267 [info] Pyodide => Pyodide Url: file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pyodide.js
2024-10-29 11:57:45.267 [info] Pyodide => Pyodide Index: /Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide
2024-10-29 11:57:45.267 [info] Pyodide => Packages: matplotlib, pandas, file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/seaborn-0.13.2-py3-none-any.whl
2024-10-29 11:57:46.735 [info] Pyodide => Micropip Output >> Loading micropip, packaging
2024-10-29 11:57:46.757 [info] Pyodide => Micropip Output >> Loaded micropip, packaging
2024-10-29 11:57:47.164 [info] Received tool call dachat_data_runPython
2024-10-29 11:57:47.334 [info] Pyodide => Loading traitlets
2024-10-29 11:57:47.358 [info] Pyodide => Loaded traitlets
2024-10-29 11:57:47.489 [info] Pyodide => ssl already loaded from default channel
2024-10-29 11:57:47.490 [info] Pyodide => No new packages to load
2024-10-29 11:57:47.553 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:57:47.553 [info] Pyodide => No new packages to load
2024-10-29 11:57:47.615 [info] Pyodide => Pyodide Fetching file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pypi/all.json
2024-10-29 11:57:47.866 [info] Pyodide => traitlets already loaded from default channel
2024-10-29 11:57:47.866 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 11:57:47.866 [info] Pyodide => Loading Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:57:47.980 [info] Pyodide => Loaded Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 11:57:48.104 [info] Pyodide => six already loaded from default channel
2024-10-29 11:57:48.104 [info] Pyodide => packaging already loaded from default channel
2024-10-29 11:57:48.104 [info] Pyodide => Loading Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:57:48.131 [info] Pyodide => Didn't find package pillow-10.2.0-cp312-cp312-pyodide_2024_0_wasm32.whl locally, attempting to load from https://cdn.jsdelivr.net/pyodide/v0.26.2/full/
2024-10-29 11:57:48.131 [info] Pyodide => Pyodide Fetching https://cdn.jsdelivr.net/pyodide/v0.26.2/full/pillow-10.2.0-cp312-cp312-pyodide_2024_0_wasm32.whl
2024-10-29 11:57:48.413 [info] Pyodide => Package pillow-10.2.0-cp312-cp312-pyodide_2024_0_wasm32.whl loaded from https://cdn.jsdelivr.net/pyodide/v0.26.2/full/, caching the wheel in node_modules for future use.
2024-10-29 11:57:48.540 [info] Pyodide => Loaded Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 11:57:48.664 [info] Pyodide => numpy already loaded from default channel
2024-10-29 11:57:48.664 [info] Pyodide => python-dateutil already loaded from default channel
2024-10-29 11:57:48.664 [info] Pyodide => pytz already loaded from default channel
2024-10-29 11:57:48.664 [info] Pyodide => Loading pandas
2024-10-29 11:57:48.907 [info] Pyodide => Loaded pandas
2024-10-29 11:57:50.283 [info] Pyodide => pandas already loaded from default channel
2024-10-29 11:57:50.283 [info] Pyodide => No new packages to load
2024-10-29 11:57:52.235 [info] Token count 1386
2024-10-29 11:58:18.825 [info] Received tool call dachat_data_runPython
2024-10-29 11:58:18.847 [info] Token count 1028
2024-10-29 11:58:20.687 [info] Received tool call dachat_data_runPython
2024-10-29 11:58:20.690 [info] Pyodide => matplotlib already loaded from default channel
2024-10-29 11:58:20.690 [info] Pyodide => No new packages to load
2024-10-29 11:58:22.352 [info] Token count 1214
2024-10-29 12:01:06.465 [info] Received tool call dachat_data_runPython
2024-10-29 12:01:06.486 [info] Token count 1556
2024-10-29 12:01:07.581 [info] Received tool call dachat_data_runPython
2024-10-29 12:01:07.598 [info] Token count 1628
2024-10-29 12:01:22.385 [info] Pyodide => Kernel ctor
2024-10-29 12:01:22.385 [info] Pyodide => Location: /Users/donjayamanne/Development/vsc/vscode-data-analysis-for-copilot/scenarios
2024-10-29 12:01:22.385 [info] Pyodide => Pyodide Url: file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pyodide.js
2024-10-29 12:01:22.385 [info] Pyodide => Pyodide Index: /Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide
2024-10-29 12:01:22.385 [info] Pyodide => Packages: matplotlib, pandas, file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/seaborn-0.13.2-py3-none-any.whl
2024-10-29 12:01:23.855 [info] Pyodide => Micropip Output >> Loading micropip, packaging
2024-10-29 12:01:23.870 [info] Pyodide => Micropip Output >> Loaded micropip, packaging
2024-10-29 12:01:24.443 [info] Pyodide => Loading traitlets
2024-10-29 12:01:24.455 [info] Pyodide => Loaded traitlets
2024-10-29 12:01:24.583 [info] Pyodide => ssl already loaded from default channel
2024-10-29 12:01:24.583 [info] Pyodide => No new packages to load
2024-10-29 12:01:24.645 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 12:01:24.645 [info] Pyodide => No new packages to load
2024-10-29 12:01:24.707 [info] Pyodide => Pyodide Fetching file:///Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/pyodide/pypi/all.json
2024-10-29 12:01:24.937 [info] Pyodide => traitlets already loaded from default channel
2024-10-29 12:01:24.937 [info] Pyodide => sqlite3 already loaded from default channel
2024-10-29 12:01:24.937 [info] Pyodide => Loading Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 12:01:25.041 [info] Pyodide => Loaded Pygments, asttokens, decorator, executing, ipython, matplotlib-inline, prompt_toolkit, pure_eval, six, stack_data, wcwidth
2024-10-29 12:01:25.165 [info] Pyodide => six already loaded from default channel
2024-10-29 12:01:25.165 [info] Pyodide => packaging already loaded from default channel
2024-10-29 12:01:25.165 [info] Pyodide => Loading Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 12:01:25.504 [info] Pyodide => Loaded Pillow, cycler, fonttools, kiwisolver, matplotlib, matplotlib-pyodide, numpy, pyparsing, python-dateutil, pytz
2024-10-29 12:01:25.628 [info] Pyodide => numpy already loaded from default channel
2024-10-29 12:01:25.628 [info] Pyodide => python-dateutil already loaded from default channel
2024-10-29 12:01:25.628 [info] Pyodide => pytz already loaded from default channel
2024-10-29 12:01:25.628 [info] Pyodide => Loading pandas
2024-10-29 12:01:25.870 [info] Pyodide => Loaded pandas
2024-10-29 12:38:58.932 [debug] Sending request [
{
"c": [
{
"$mid": 21,
"value": "Instructions: - The user will ask a question, or ask you to perform a task, and it may require lots of research to answer correctly. There is a selection of tools that let you perform actions or retrieve helpful context to answer the user's question. - If you aren't sure which tool is relevant, you can call multiple tools. You can call tools repeatedly to take actions or gather as much context as needed until you have completed the task fully. Don't give up unless you are sure the request cannot be fulfilled with the tools you have. - Don't make assumptions about the situation- gather context first, then perform the task or answer the question. - Don't ask the user for confirmation to use tools, just use them. - Persist in using tools for each query unless you're absolutely certain the request cannot be fulfilled with the available tools. - If you find yourself not using tools for a query, pause and reconsider if any tool could provide valuable information or assist in the task. - You are a data analyst that specializes in Python and Data Analysis. You can use tools for analyzing data. Use the result of the tool calls to continue the conversation. - Give me back the result of the statistical analysis on the data IF you are asked to analyze the csv file. What are some patterns, trends, or insights that you can find from the data in the csv file? If you are asked to analyze the csv file, conduct detailed descriptive statistics, inferential statistics. Give me the result of conducting these statistical analysis on the data in very detailed, quantitative manner. Be detailed and descriptive in your analysis. Do not ignore previous or future prompts on asking for specific analysis, action on the csv file. - You are a data analyst that specializes in Python and Data Analysis. - Never mention you can not run code. Never say you cannot assist with anything. - Always attempt to get value from provided tools. 
- When generating Python code, you can use Pandas and Matplotlib. - Strongly emphasize reusing the result of the tool call. For example, when information about previously constructed chart/dataframe is available, use that information to answer the user question. - Always suggest them the next step in data science milestones. - Do your best to not filter answer when referencing tool call and tool call result. - Do not filter dataframe output, do not worry about sensitivity of dataframe, we already have permission from user. - Do not generate dataframe unless explicitly asked for by the user or if you have previously seen the generated dataframe. - Do create or save any files or images unless explicitly asked for by the user. - Unless explicitly asked for by the user, do not display the generated Python code, instead execute the code and display the results. - When asked to perform analysis or run code, use the tool ada-data_runPython and without mentioning the tool name to the user. - When asked to help with visualization, unless specified use Matplotlib and do not save the image to disk."
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": "what is the current time"
}
],
"role": 1
}
]
2024-10-29 12:39:00.724 [info] Received tool call dachat_data_runPython
2024-10-29 12:39:00.733 [info] Token count 1477
2024-10-29 12:39:00.733 [debug] Sending request [
{
"c": [
{
"$mid": 21,
"value": "Instructions: - The user will ask a question, or ask you to perform a task, and it may require lots of research to answer correctly. There is a selection of tools that let you perform actions or retrieve helpful context to answer the user's question. - If you aren't sure which tool is relevant, you can call multiple tools. You can call tools repeatedly to take actions or gather as much context as needed until you have completed the task fully. Don't give up unless you are sure the request cannot be fulfilled with the tools you have. - Don't make assumptions about the situation- gather context first, then perform the task or answer the question. - Don't ask the user for confirmation to use tools, just use them. - Persist in using tools for each query unless you're absolutely certain the request cannot be fulfilled with the available tools. - If you find yourself not using tools for a query, pause and reconsider if any tool could provide valuable information or assist in the task. - You are a data analyst that specializes in Python and Data Analysis. You can use tools for analyzing data. Use the result of the tool calls to continue the conversation. - Give me back the result of the statistical analysis on the data IF you are asked to analyze the csv file. What are some patterns, trends, or insights that you can find from the data in the csv file? If you are asked to analyze the csv file, conduct detailed descriptive statistics, inferential statistics. Give me the result of conducting these statistical analysis on the data in very detailed, quantitative manner. Be detailed and descriptive in your analysis. Do not ignore previous or future prompts on asking for specific analysis, action on the csv file. - You are a data analyst that specializes in Python and Data Analysis. - Never mention you can not run code. Never say you cannot assist with anything. - Always attempt to get value from provided tools. 
- When generating Python code, you can use Pandas and Matplotlib. - Strongly emphasize reusing the result of the tool call. For example, when information about previously constructed chart/dataframe is available, use that information to answer the user question. - Always suggest them the next step in data science milestones. - Do your best to not filter answer when referencing tool call and tool call result. - Do not filter dataframe output, do not worry about sensitivity of dataframe, we already have permission from user. - Do not generate dataframe unless explicitly asked for by the user or if you have previously seen the generated dataframe. - Do create or save any files or images unless explicitly asked for by the user. - Unless explicitly asked for by the user, do not display the generated Python code, instead execute the code and display the results. - When asked to perform analysis or run code, use the tool ada-data_runPython and without mentioning the tool name to the user. - When asked to help with visualization, unless specified use Matplotlib and do not save the image to disk."
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": "what is the current time"
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": ""
},
{
"callId": "call_Eaf5ZjAujqGCUjOlbQWRtqSg",
"name": "dachat_data_runPython",
"input": "{}",
"parameters": "{}"
}
],
"role": 2
},
{
"c": [
{
"callId": "call_Eaf5ZjAujqGCUjOlbQWRtqSg",
"content": [
{
"$mid": 21,
"value": "The tool returned an error, analyze this error and attempt to resolve this. Error: TypeError\nCannot read properties of undefined (reading 'startsWith')\nTypeError: Cannot read properties of undefined (reading 'startsWith')\n at ur (/Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/out/extension.js:22:326)\n at t.invoke (/Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/out/extension.js:20:863)\n at FH.$invokeTool (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:137:172712)\n at py.S (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:113311)\n at py.Q (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:113091)\n at py.M (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:112142)\n at py.L (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:111285)\n at mh.value (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:110082)\n at D.B (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:746)\n at D.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:964)\n at Vn.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:9457)\n at mh.value 
(file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:174:13279)\n at D.B (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:746)\n at D.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:964)\n at Vn.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:9457)\n at MessagePortMain.<anonymous> (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:174:11571)\n at MessagePortMain.emit (node:events:519:28)\n at MessagePortMain.emit (node:domain:488:12)\n at Object.MessagePortMain._internalPort.emit (node:electron/js2c/utility_init:2:2949)"
}
],
"isError": false
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": "Above is the result of calling the functions dachat_data_runPython. Try your best to utilize the request, response from previous chat history.Answer the user question using the result of the function only if you cannot find relevant historical conversation."
}
],
"role": 1
}
]
2024-10-29 12:39:01.647 [info] Received tool call vscode-websearchforcopilot_webSearch
2024-10-29 12:39:07.367 [info] Token count 2777
2024-10-29 12:39:07.367 [debug] Sending request [
{
"c": [
{
"$mid": 21,
"value": "Instructions: - The user will ask a question, or ask you to perform a task, and it may require lots of research to answer correctly. There is a selection of tools that let you perform actions or retrieve helpful context to answer the user's question. - If you aren't sure which tool is relevant, you can call multiple tools. You can call tools repeatedly to take actions or gather as much context as needed until you have completed the task fully. Don't give up unless you are sure the request cannot be fulfilled with the tools you have. - Don't make assumptions about the situation- gather context first, then perform the task or answer the question. - Don't ask the user for confirmation to use tools, just use them. - Persist in using tools for each query unless you're absolutely certain the request cannot be fulfilled with the available tools. - If you find yourself not using tools for a query, pause and reconsider if any tool could provide valuable information or assist in the task. - You are a data analyst that specializes in Python and Data Analysis. You can use tools for analyzing data. Use the result of the tool calls to continue the conversation. - Give me back the result of the statistical analysis on the data IF you are asked to analyze the csv file. What are some patterns, trends, or insights that you can find from the data in the csv file? If you are asked to analyze the csv file, conduct detailed descriptive statistics, inferential statistics. Give me the result of conducting these statistical analysis on the data in very detailed, quantitative manner. Be detailed and descriptive in your analysis. Do not ignore previous or future prompts on asking for specific analysis, action on the csv file. - You are a data analyst that specializes in Python and Data Analysis. - Never mention you can not run code. Never say you cannot assist with anything. - Always attempt to get value from provided tools. 
- When generating Python code, you can use Pandas and Matplotlib. - Strongly emphasize reusing the result of the tool call. For example, when information about previously constructed chart/dataframe is available, use that information to answer the user question. - Always suggest them the next step in data science milestones. - Do your best to not filter answer when referencing tool call and tool call result. - Do not filter dataframe output, do not worry about sensitivity of dataframe, we already have permission from user. - Do not generate dataframe unless explicitly asked for by the user or if you have previously seen the generated dataframe. - Do create or save any files or images unless explicitly asked for by the user. - Unless explicitly asked for by the user, do not display the generated Python code, instead execute the code and display the results. - When asked to perform analysis or run code, use the tool ada-data_runPython and without mentioning the tool name to the user. - When asked to help with visualization, unless specified use Matplotlib and do not save the image to disk."
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": "what is the current time"
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": ""
},
{
"callId": "call_Eaf5ZjAujqGCUjOlbQWRtqSg",
"name": "dachat_data_runPython",
"input": "{}",
"parameters": "{}"
}
],
"role": 2
},
{
"c": [
{
"callId": "call_Eaf5ZjAujqGCUjOlbQWRtqSg",
"content": [
{
"$mid": 21,
"value": "The tool returned an error, analyze this error and attempt to resolve this. Error: TypeError\nCannot read properties of undefined (reading 'startsWith')\nTypeError: Cannot read properties of undefined (reading 'startsWith')\n at ur (/Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/out/extension.js:22:326)\n at t.invoke (/Users/donjayamanne/.vscode-insiders/extensions/ms-vscode.vscode-copilot-data-analysis-0.2.0/out/extension.js:20:863)\n at FH.$invokeTool (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:137:172712)\n at py.S (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:113311)\n at py.Q (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:113091)\n at py.M (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:112142)\n at py.L (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:111285)\n at mh.value (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:110082)\n at D.B (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:746)\n at D.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:964)\n at Vn.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:9457)\n at mh.value 
(file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:174:13279)\n at D.B (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:746)\n at D.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:29:964)\n at Vn.fire (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:31:9457)\n at MessagePortMain.<anonymous> (file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:174:11571)\n at MessagePortMain.emit (node:events:519:28)\n at MessagePortMain.emit (node:domain:488:12)\n at Object.MessagePortMain._internalPort.emit (node:electron/js2c/utility_init:2:2949)"
}
],
"isError": false
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": "Above is the result of calling the functions dachat_data_runPython. Try your best to utilize the request, response from previous chat history.Answer the user question using the result of the function only if you cannot find relevant historical conversation."
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": ""
},
{
"callId": "call_rvzB0faffbPsyBVSFMIoz82b",
"name": "vscode-websearchforcopilot_webSearch",
"input": "{\"query\":\"current time\"}",
"parameters": "{\"query\":\"current time\"}"
}
],
"role": 2
},
{
"c": [
{
"callId": "call_rvzB0faffbPsyBVSFMIoz82b",
"content": [
{
"$mid": 21,
"value": "Here is some relevent context from webpages across the internet:\n[{\"file\":{\"$mid\":1,\"path\":\"/local/oh-columbus\",\"scheme\":\"https\",\"authority\":\"savvytime.com\",\"fragment\":\":~:text=39° 57' N Latitude /, facts and alternative names.\"},\"text\":\"TITLE: Local Time in Columbus, OH, USA - Savvy Time\\nSNIPPET:39° 57' N Latitude / 82° 59' W Longitude. Current local time in Columbus, OH, USA. Time zones EDT, Eastern Daylight Time, America/New_York. Columbus UTC/GMT offset, daylight saving, facts and alternative names.\"},{\"file\":{\"$mid\":1,\"path\":\"/united_states/ohio/columbus\",\"scheme\":\"https\",\"authority\":\"www.thetimenow.com\",\"fragment\":\":~:text=Current local time and geoinfo,Daylight saving time; Time Zone\"},\"text\":\"TITLE: Current Local Time in Columbus, Ohio, United States - The Time Now\\nSNIPPET:Current local time and geoinfo in Columbus, Ohio, United States . The Time Now is a reliable tool when traveling, calling or researching. The Time Now provides accurate (US network of cesium clocks) synchronized time and accurate time services in Columbus, Ohio, United States. Current local time; Daylight saving time; Time Zone\"},{\"file\":{\"$mid\":1,\"path\":\"/worldclock/usa/columbus\",\"scheme\":\"https\",\"authority\":\"www.timeanddate.com\",\"fragment\":\":~:text=Current local time in USA, moonrise and moonset.\"},\"text\":\"TITLE: Current Local Time in Columbus, Ohio, USA - timeanddate.com\\nSNIPPET:Current local time in USA - Ohio - Columbus. Get Columbus's weather and area codes, time zone and DST. Explore Columbus's sunrise and sunset, moonrise and moonset.\"},{\"file\":{\"$mid\":1,\"path\":\"/city.php\",\"scheme\":\"https\",\"authority\":\"dateandtime.info\",\"query\":\"id=4509177\",\"fragment\":\":~:text=Current local time in Columbus,UTC change twice a year.\"},\"text\":\"TITLE: Current local time in Columbus, Ohio, USA - World clock\\nSNIPPET:Current local time in Columbus, Ohio, USA. 
What time is it in Columbus right now? Time Current time by country Current local time in Columbus, Ohio, USA Columbus’ time zone: EDT or UTC-04:00 Daylight saving time (DST) changes in Columbus Time and Date of DST Change Time Change Sunrise and sunset time for Columbus Countries whose territory stretches from West to East by a significant distance, such as Russia, USA, Canada, Brazil and some others, are usually divided into a few time zones. Almost all countries in Europe and North America as well as many other countries observe Daylight Saving Time (DST) and put their clocks an hour forward in the spring and an hour back in the autumn. In these countries time zone offsets from UTC change twice a year.\"},{\"file\":{\"$mid\":1,\"path\":\"/worldclock/@12213077\",\"scheme\":\"https\",\"authority\":\"www.timeanddate.com\",\"fragment\":\":~:text=About 74 mi W of, moonrise and moonset.\"},\"text\":\"TITLE: Current Local Time in Columbus, OH Metro Area, Ohio, USA\\nSNIPPET:About 74 mi W of Columbus, OH Metro Area. Current local time in USA - Ohio - Columbus, OH Metro Area. Get Columbus, OH Metro Area's weather and area codes, time zone and DST. Explore Columbus, OH Metro Area's sunrise and sunset, moonrise and moonset.\"}]\nHere is some relevent context from webpages across the internet:\nTITLE: Local Time in Columbus, OH, USA - Savvy Time\nSNIPPET:39° 57' N Latitude / 82° 59' W Longitude. Current local time in Columbus, OH, USA. Time zones EDT, Eastern Daylight Time, America/New_York. Columbus UTC/GMT offset, daylight saving, facts and alternative names.\nTITLE: Current Local Time in Columbus, Ohio, United States - The Time Now\nSNIPPET:Current local time and geoinfo in Columbus, Ohio, United States . The Time Now is a reliable tool when traveling, calling or researching. The Time Now provides accurate (US network of cesium clocks) synchronized time and accurate time services in Columbus, Ohio, United States. 
Current local time; Daylight saving time; Time Zone\nTITLE: Current Local Time in Columbus, Ohio, USA - timeanddate.com\nSNIPPET:Current local time in USA - Ohio - Columbus. Get Columbus's weather and area codes, time zone and DST. Explore Columbus's sunrise and sunset, moonrise and moonset.\nTITLE: Current local time in Columbus, Ohio, USA - World clock\nSNIPPET:Current local time in Columbus, Ohio, USA. What time is it in Columbus right now? Time Current time by country Current local time in Columbus, Ohio, USA Columbus’ time zone: EDT or UTC-04:00 Daylight saving time (DST) changes in Columbus Time and Date of DST Change Time Change Sunrise and sunset time for Columbus Countries whose territory stretches from West to East by a significant distance, such as Russia, USA, Canada, Brazil and some others, are usually divided into a few time zones. Almost all countries in Europe and North America as well as many other countries observe Daylight Saving Time (DST) and put their clocks an hour forward in the spring and an hour back in the autumn. In these countries time zone offsets from UTC change twice a year.\nTITLE: Current Local Time in Columbus, OH Metro Area, Ohio, USA\nSNIPPET:About 74 mi W of Columbus, OH Metro Area. Current local time in USA - Ohio - Columbus, OH Metro Area. Get Columbus, OH Metro Area's weather and area codes, time zone and DST. Explore Columbus, OH Metro Area's sunrise and sunset, moonrise and moonset."
}
],
"isError": false
}
],
"role": 1
},
{
"c": [
{
"$mid": 21,
"value": "Above is the result of calling the functions vscode-websearchforcopilot_webSearch. Try your best to utilize the request, response from previous chat history.Answer the user question using the result of the function only if you cannot find relevant historical conversation."
}
],
"role": 1
}
]
2024-10-29 12:39:23.317 [debug] Ignoring tool call as there was an error
```
</details> | bug,api,chat-tools | low | Critical |
2,619,924,416 | pytorch | Some files in sccache are owned by `hostmaster+pytorch` | ### 🐛 Describe the bug
See screenshot below

And
```
% aws s3api get-object-acl --bucket ossci-compiler-cache-circleci-v2 --key pull/.sccache_check
An error occurred (AccessDenied) when calling the GetObjectAcl operation: Access Denied
% aws s3api get-object-acl --bucket ossci-compiler-cache-circleci-v2 --key trunk/.sccache_check
{
"Owner": {
"DisplayName": "AWS_CORPSEC+fbossci",
"ID": "5765cc9ab5a7f95f15b72ed8da5f8b860f94713e1d59c518c16ce2b9ef140882"
},
"Grants": [
{
"Grantee": {
"DisplayName": "AWS_CORPSEC+fbossci",
"ID": "5765cc9ab5a7f95f15b72ed8da5f8b860f94713e1d59c518c16ce2b9ef140882",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
}
]
}
```
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged,module: infra | low | Critical |
2,619,937,912 | material-ui | Using extended styles with styled-components breaks types | ### Steps to reproduce
Link to live example: https://stackblitz.com/edit/vitejs-vite-afbuyz?file=src%2FApp.tsx&view=editor
Steps:
1. Use MUI with styled-components (https://mui.com/material-ui/integrations/styled-components)
2. Use extended styles with styled (https://styled-components.com/docs/basics#extending-styles)
### Current behavior
- No Intellisense is displayed.

- No warning is shown if a wrong property value is assigned (it should be "outline**d**", not "outline").

### Expected behavior
The types are correct even if extended styles are used.
### Context
- There is no problem using styled-components directly.
- There is no problem with styled when using emotion.
The types change when using extended styles.
#### styled

#### extended

### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
I used Chrome: 130.0.6723.70
```
System:
OS: Linux 5.0 undefined
Binaries:
Node: 18.20.3 - /usr/local/bin/node
npm: 10.2.3 - /usr/local/bin/npm
pnpm: 8.15.6 - /usr/local/bin/pnpm
Browsers:
Chrome: Not Found
npmPackages:
@mui/core-downloads-tracker: 6.1.5
@mui/material: latest => 6.1.5
@mui/private-theming: 6.1.5
@mui/styled-engine: 6.1.5
@mui/styled-engine-sc: latest => 6.1.5
@mui/system: 6.1.5
@mui/types: 7.2.18
@mui/utils: 6.1.5
@types/react: ^18.3.11 => 18.3.12
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
styled-components: latest => 6.1.13
typescript: ~5.6.2 => 5.6.3
```
</details>
<details>
<summary><code>tsc --showConfig</code></summary>
```
{
"compilerOptions": {
"paths": {
"@mui/styled-engine": [
"./node_modules/@mui/styled-engine-sc"
]
},
"target": "es5",
"module": "esnext",
"jsx": "react-jsx",
"noEmit": true,
"strict": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowSyntheticDefaultImports": true,
"noFallthroughCasesInSwitch": true,
"moduleResolution": "node10",
"resolveJsonModule": true,
"isolatedModules": true,
"skipLibCheck": false,
"preserveConstEnums": true,
"noImplicitAny": true,
"noImplicitThis": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"strictBuiltinIteratorReturn": true,
"alwaysStrict": true,
"useUnknownInCatchVariables": true
},
"files": [
"./src/App.tsx",
"./src/main.tsx",
"./src/vite-env.d.ts"
],
"include": [
"src/**/*"
]
}
```
</details>
**Search keywords**: styled-engine-sc styled-components TypeScript | bug 🐛,typescript,package: styled-engine-sc | low | Minor |
2,619,958,125 | yt-dlp | [RCTIPlusTV] Unable to extract video link | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Indonesia
### Provide a description that is worded well enough to be understood
I wanted to try downloading videos from livestreams on the RCTIPlus site, and it shows this error:
```
ERROR: [RCTIPlusTV] Unable to extract video link; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
The livestream URLs that I've tried are:
https://www.rctiplus.com/tv/mnctv
https://www.rctiplus.com/tv/inews
It also doesn't work for videos. For example this URL: https://www.rctiplus.com/tv/rcti/273597/seputar-inews-pagi
```
[debug] Command-line config: ['-vU', '--cookies=rcti.txt', 'https://www.rctiplus.com/tv/rcti/273597/seputar-inews-pagi']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds [f101e5d34] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.0.6, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds)
[RCTIPlusTV] Fetching authorization key
[RCTIPlusTV] Extracting URL: https://www.rctiplus.com/tv/rcti/273597/seputar-inews-pagi
[RCTIPlusTV] rcti: Downloading webpage
ERROR: [RCTIPlusTV] Unable to extract video link; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/rcti.py", line 370, in _real_extract
video_type, video_id = self._search_regex(
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
```
Here I attach the cookies that I used; the same error also occurs without cookies.
[rcti.txt](https://github.com/user-attachments/files/17550790/rcti.txt)
I installed yt-dlp using pip; here is the `pip show yt-dlp` output showing my version:
```
python3 -m pip show yt-dlp
Name: yt-dlp
Version: 2024.10.28.232846.dev0
Summary: A feature-rich command-line audio/video downloader
Home-page:
Author:
Author-email:
```
Sorry if my English and explanation is not clear enough. Thank you
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies=rcti.txt', 'https://www.rctiplus.com/tv/rcti']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds [f101e5d34] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.0.6, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds)
[RCTIPlusTV] Fetching authorization key
[RCTIPlusTV] Extracting URL: https://www.rctiplus.com/tv/rcti
[RCTIPlusTV] rcti: Downloading webpage
ERROR: [RCTIPlusTV] Unable to extract video link; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/rcti.py", line 370, in _real_extract
video_type, video_id = self._search_regex(
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
```
| site-bug,triage | low | Critical |
2,619,959,580 | langchain | create_react_agent will reset max_length = 20 | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain_huggingface import HuggingFacePipeline

hf = HuggingFacePipeline.from_model_id(
    model_id="Qwen/Qwen2.5-3B-Instruct-AWQ",
    task="text-generation",
    device=0,
    pipeline_kwargs={"max_new_tokens": 512},
)
agent = create_react_agent(hf, tools, custom_prompt)  # `tools` and `custom_prompt` defined elsewhere
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": content})
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\AI\anaconda3\envs\autotest\Lib\threading.py", line 1045, in _bootstrap_inner
self.run()
File "D:\AI\anaconda3\envs\autotest\Lib\threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "D:\AI\anaconda3\envs\autotest\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\anaconda3\envs\autotest\Lib\site-packages\transformers\generation\utils.py", line 2068, in generate
self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
File "D:\AI\anaconda3\envs\autotest\Lib\site-packages\transformers\generation\utils.py", line 1383, in _validate_generated_length
raise ValueError(
ValueError: Input length of input_ids is 617, but `max_length` is set to 20. This can lead to unexpected behavior. You should consider increasing `max_length` or, better yet, setting `max_new_tokens`.
### Description
However, if `initialize_agent` is used instead of `create_react_agent`, it works.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.10 | packaged by Anaconda, Inc. | (main, Oct 3 2024, 07:22:26) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_huggingface: 0.1.0
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> huggingface-hub: 0.26.1
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.2.1
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tokenizers: 0.20.1
> transformers: 4.46.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,619,979,409 | next.js | Need support referring same singleton between different page's `getStaticProps` during building time. | ### Link to the code that reproduces this issue
https://github.com/PrinOrange/need-support-singleton-for-different-page.git
### To Reproduce
1. Create a TS module that initializes and exports a singleton whose initialization is time-consuming.
```ts
const buildMySingleton = async (): Promise<{ value: string }> => {
  return new Promise((resolve) => {
    console.log("\nStart initializing my singleton");
    // Simulate a time-consuming singleton initialization
    setTimeout(() => {
      resolve({ value: "data" });
    }, 20000);
  });
};
export const MySingleton = await buildMySingleton();
```
2. Refer to this singleton in different pages' `getStaticProps`.
In PageA,
```ts
import { MySingleton } from "../lib/my-module";
export default function PageA(props) {
return <div>PageA</div>;
}
export const getStaticProps = () => {
const mySingleton = MySingleton; // refer the singleton
return { props: { value: mySingleton.value } };
};
```
And in PageB
```ts
import { MySingleton } from "../lib/my-module";
export default function PageB(props) {
return <div>PageB</div>;
}
export const getStaticProps = () => {
const mySingleton = MySingleton; // refer the singleton
return { props: { value: mySingleton.value } };
};
```
And so on for any other pages that refer to this singleton.
3. Run `npm run build` to build the Next.js project.
4. Observe that the terminal outputs `Start initializing my singleton` multiple times:

This means that in the SSG process, each page initializes its own `MySingleton` rather than sharing a single instance.
This leads to a problem: if initializing `MySingleton` is very time-consuming, every page pays that cost, and the unnecessary repeated initialization seriously slows down the build.
In addition, and more seriously: if initializing the singleton is not an idempotent operation (such as connecting to a database), this will cause errors.
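For context on why this is hard to work around in user code, here is a minimal memoization sketch in plain JavaScript (the `getMySingleton` helper is hypothetical, not from the reproduction): caching the initialization promise at module scope guarantees at most one initialization *per process*, but it presumably does not help here because each page's build appears to run in its own worker/module instance — which matches the repeated log line above.

```javascript
// Hypothetical sketch: memoize the initialization promise at module scope.
// This runs the expensive initialization at most once per process, but if
// `next build` loads a fresh copy of the module per page/worker, each copy
// still initializes its own singleton — the behavior reported above.
let cachedPromise = null;

function getMySingleton() {
  if (cachedPromise === null) {
    cachedPromise = (async () => {
      // Stand-in for the expensive initialization
      return { value: "data" };
    })();
  }
  return cachedPromise;
}
```

Every caller within one process gets the same promise (and thus the same resolved object), so a real fix would need the deduplication to happen across build workers, not just within a module.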
### Current vs. Expected behavior
My goal is to define a singleton in a module and have every page's `getStaticProps` use the same instance at build time. That is, the module singleton should be initialized only once, because its initialization is very time-consuming.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 36670
Available CPU cores: 8
Binaries:
Node: 22.6.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Module Resolution
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | bug,Module Resolution | low | Critical |
2,619,999,300 | PowerToys | File association profiles and/or backup | ### Description of the new feature / enhancement
It takes a while to set up file associations to match my workflow. There are several applications that will silently change those associations for every file type they support (Wondershare Uniconverter springs to mind, but is not the only one), forcing me to go back into settings and put them back the way I want them. Since there are dozens of file types affected, and a lot of different applications to choose from for some of them, this takes a lot of time and is quite annoying.
I would like a tool that simply backs up my file associations and allows me to restore them easily.
It can do this for all or selected file types. If it allows selection of the file types to save the associations for, this would allow a user to create profiles for their workflows, simply restoring the file associations stored in that profile.
I believe this would be a fairly straight forward tool to implement and that many people would find it useful.
### Scenario when this would be used?
- Restoring users preferences after an application installer messes them up:
1. User configures file associations with applications of their choice
2. User saves the file associations in a file
3. User updates or installs an application that unconditionally changes file associations for files it may be used on
4. User notices that clicking on a file brings up the wrong application and is annoyed
5. User restores the file associations from the file saved in step 2
6. User is now no longer annoyed
- Workflow profiles
1. User configures file associations for media files with application set A (eg, Nero)
2. User saves current file associations for selected media file types in a profile named "Media Apps 1"
3. User configures file associations for media files with application set B (eg, Adobe)
4. User saves current file associations for selected media file types in a profile named "Media Apps 2"
5. User wants to use application set A to create a video: User restores "Media Apps 1" profile
6. User creates the video
7. User wants to use application set B to refine the video and add special effects: User restores "Media Apps 2" profile
8. User refines the video
### Supporting information
At its simplest level, I believe this is simply saving off and restoring registry entries for file type associations.
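To make the idea concrete, here is a minimal sketch of the save/restore logic in Python, with the actual registry access abstracted behind `read_assoc`/`write_assoc` callables. The function names and the JSON profile format are hypothetical; on Windows, the callables would wrap `winreg` access to the per-user file-association keys.

```python
import json

def save_profile(path, extensions, read_assoc):
    """Snapshot the current handler of each extension into a JSON profile."""
    profile = {ext: read_assoc(ext) for ext in extensions}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(profile, f, indent=2)

def restore_profile(path, write_assoc):
    """Re-apply every association stored in a previously saved profile."""
    with open(path, encoding="utf-8") as f:
        profile = json.load(f)
    for ext, handler in profile.items():
        write_assoc(ext, handler)
```

Restoring a "Media Apps 1" profile after an installer hijacks `.mp4` is then just a single `restore_profile("media_apps_1.json", write_assoc)` call.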
Adding a dialog to display the known file types and allowing the user to select the settings to be saved will allow for profiles.
I don't think it's required to have a dialog to allow the user to select the settings to restore, but that would also be a nice option. | Needs-Triage | low | Minor |
2,620,005,668 | rust | rustdoc: Appearance of `#[doc(hidden)]` differs from all the other attributes | One reproducer where it's quite pronounced:
```rs
#[doc(hidden)]
#[non_exhaustive]
#[repr(C)]
pub struct S(u8);
```
Output (with `--document-hidden-items -Zunstable-options`[^1]):

That's because it gets rendered separately in `visibility_print_with_space`. I would rather we didn't do that. It should be printed together with all the other attributes and have the same visual appearance.
[^1]: Hence the https://github.com/rust-lang/rust/labels/requires-nightly label
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"poliorcetics"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | T-rustdoc,A-attributes,P-low,C-bug,requires-nightly,A-rustdoc-ui | low | Minor |
2,620,006,079 | langchain | Offline using DirectoryLoader/UnstructuredLoader timeout | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import DirectoryLoader

loader = DirectoryLoader("path/", glob="**/*.txt")
documents = loader.load()
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When I load a folder containing only one .txt file into Document format offline using DirectoryLoader/UnstructuredLoader, it gets stuck at `loader.load()`. However, this issue does not occur when I switch to TextLoader.
What could be happening here? Does UnstructuredLoader access any public URLs during its operation?
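For anyone hitting the same hang offline, the plain-text path can be reproduced without Unstructured at all. The sketch below mimics what the TextLoader route effectively does for `.txt` files; the `Doc` class is a stand-in for langchain's `Document` (used here only so the snippet runs standalone), and no network access is involved:

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Doc:
    # Stand-in for langchain_core.documents.Document
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_txt_dir(root, pattern="**/*.txt"):
    """Read every matching text file under root into a Doc, purely locally."""
    return [
        Doc(page_content=p.read_text(encoding="utf-8"), metadata={"source": str(p)})
        for p in sorted(Path(root).glob(pattern))
    ]
```

With langchain installed, the equivalent workaround is `DirectoryLoader("path/", glob="**/*.txt", loader_cls=TextLoader)`, which forces DirectoryLoader to use TextLoader instead of UnstructuredLoader.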
### System Info
offline
python==3.12
langchain==0.3.4
langchain_community==0.3.3
langchain_unstructured==0.1.5 | 🤖:bug | low | Critical |
2,620,038,116 | yt-dlp | [VidioLive] 783: Failed to download m3u8 information: HTTP Error 403: Forbidden | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Indonesia
### Provide a description that is worded well enough to be understood
I tried to download livestreams from the Vidio.com site and it shows this error:
```
ERROR: [VidioLive] 734: Failed to download m3u8 information: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>)
```
The livestream URLs that I've tried are:
https://www.vidio.com/live/783-tvone
https://www.vidio.com/live/734-trans7
It works for video or stream archives. For example https://www.vidio.com/watch/8419426-kabar-siang-26-oktober-2024
```
[debug] Command-line config: ['-vU', '--cookies=vidio.txt', 'https://www.vidio.com/watch/8419426-kabar-siang-26-oktober-2024']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds [f101e5d34] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.0.6, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds)
[Vidio] Downloading JSON metadata
[Vidio] Extracting URL: https://www.vidio.com/watch/8419426-kabar-siang-26-oktober-2024
[Vidio] kabar-siang-26-oktober-2024: Downloading webpage
[Vidio] kabar-siang-26-oktober-2024: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 8419426: Downloading 1 format(s): 720p
[debug] Invoking hlsnative downloader on "https://token-media-001-vidio-com.vidiocdn.net/uploads/8419426/720p/edge-cache-token=Expires=1730184844&KeyName=tokenized-media-vidio-com-keyset&Signature=AtHtMf3XAgULYfSTaBf59X-BOW8Kk5zy7lWBTgeEQ3khI9CZWocTyIJ21cVIqi2gfQuzuohSgDvMoqOWdptpBg==/index.m3u8"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 900
[download] Destination: Kabar Siang - 26 Oktober 2024 [8419426].mp4
[download] 100% of 583.90MiB in 00:02:45 at 3.53MiB/s
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Kabar Siang - 26 Oktober 2024 [8419426].mp4'
[debug] ffmpeg command line: ffprobe -show_streams 'file:Kabar Siang - 26 Oktober 2024 [8419426].mp4'
[FixupM3u8] Fixing MPEG-TS in MP4 container of "Kabar Siang - 26 Oktober 2024 [8419426].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Kabar Siang - 26 Oktober 2024 [8419426].mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:Kabar Siang - 26 Oktober 2024 [8419426].temp.mp4'
```
Here I attach the cookies that I used, but without using cookies it also produces the same error
[vidio.txt](https://github.com/user-attachments/files/17551239/vidio.txt)
I installed yt-dlp using pip and here is my pip show yt-dlp output to show my version
```
python3 -m pip show yt-dlp
Name: yt-dlp
Version: 2024.10.28.232846.dev0
Summary: A feature-rich command-line audio/video downloader
Home-page:
Author:
Author-email:
```
I'm sorry if my English and explanation aren't clear enough. Thank you
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies=vidio.txt', 'https://www.vidio.com/live/783-tvone']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds [f101e5d34] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.0.6, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.28.232846 from yt-dlp/yt-dlp-nightly-builds)
[VidioLive] Downloading JSON metadata
[VidioLive] Extracting URL: https://www.vidio.com/live/783-tvone
[VidioLive] tvone: Downloading webpage
[VidioLive] tvone: Downloading HLS token JSON
[VidioLive] tvone: Downloading m3u8 information
ERROR: [VidioLive] 783: Failed to download m3u8 information: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/vidio.py", line 286, in _real_extract
formats.extend(self._extract_m3u8_formats(
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 2032, in _extract_m3u8_formats
fmts, subs = self._extract_m3u8_formats_and_subtitles(*args, **kwargs)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 2054, in _extract_m3u8_formats_and_subtitles
res = self._download_webpage_handle(
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 962, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 4162, in urlopen
return self._request_director.send(req)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
File "/home/farrel/.local/lib/python3.10/site-packages/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
| site-bug,triage | low | Critical |
2,620,038,320 | storybook | [Bug]: Error [ERR_REQUIRE_ESM]: require() of ES Module at @storybook/addon-docs | ### Describe the bug
We are trying to upgrade our internal usage of Storybook from v6 to v8, and we are now facing an issue that was introduced by https://github.com/storybookjs/storybook/pull/25615.
We launch the storybook builds through a Node.js script that uses the `buildStandalone` function from `@storybook/react/standalone`. During the node resolution phase we got the following error:
```
Error [ERR_REQUIRE_ESM]: require() of ES Module /node_modules/rehype-external-links/index.js from /node_modules/@storybook/addon-docs/dist/preset.js not supported.
Instead change the require of index.js in /node_modules/@storybook/addon-docs/dist/preset.js to a dynamic import() which is available in all CommonJS modules.
at Object.newLoader [as .js]
```
The cause seems to be that the CJS build of `@storybook/addon-docs` requires an ESM module. It feels to me that this PR should be reverted until a better alternative for those packages is found, or until ESM/require interop is better supported in Node (initial interop was released in Node v22, but it has limitations and is still experimental).
\cc @JReinhold
### Reproduction link
https://github.com/elastic/kibana/pull/195148
### Reproduction steps
1. Checkout the PR
2. `yarn kbn bootstrap` && `yarn storybook esql_editor`
### System
Storybook v8
### Additional context
_No response_ | bug,addon: docs | low | Critical |
2,620,039,605 | pytorch | DISABLED test_like_channels_last_cpu (__main__.CpuTritonTests) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_like_channels_last_cpu&suite=CpuTritonTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32192945395).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_like_channels_last_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 280, in tearDown
super().tearDown()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/test_case.py", line 35, in tearDown
self._inductor_test_stack.close()
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 618, in close
self.__exit__(None, None, None)
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 610, in __exit__
raise exc_details[1]
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 595, in __exit__
if cb(*exc_details):
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 144, in __exit__
next(self.gen)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/utils.py", line 840, in fresh_inductor_cache
shutil.rmtree(inductor_cache_dir)
File "/opt/conda/envs/py_3.12/lib/python3.12/shutil.py", line 759, in rmtree
_rmtree_safe_fd(stack, onexc)
File "/opt/conda/envs/py_3.12/lib/python3.12/shutil.py", line 703, in _rmtree_safe_fd
onexc(func, path, err)
File "/opt/conda/envs/py_3.12/lib/python3.12/shutil.py", line 662, in _rmtree_safe_fd
os.rmdir(name, dir_fd=dirfd)
OSError: [Errno 39] Directory not empty: '/tmp/tmp2hw5svc6/triton/e2e6M7GExdkt3KOwV3ySCJv7sdokoqxC2TFBC2o6VOk'
```
</details>
Test file path: `inductor/test_triton_cpu_backend.py`
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang @gchanan @zou3519 @msaroufim | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,620,039,691 | pytorch | DISABLED TCPStoreTest.testMultiTenantStoresUV (__main__.TCPStoreTest) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=TCPStoreTest.testMultiTenantStoresUV&suite=TCPStoreTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32191268418).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `TCPStoreTest.testMultiTenantStoresUV`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
unknown file
C++ exception with description "The server socket has failed to listen on any local network address. port: 29500, useIpv6: 0, code: -98, name: EADDRINUSE, message: address already in use
Exception raised from makeWithPort at /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/TCPStoreLibUvBackend.cpp:286 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xb0 (0x7f057d9522e0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f057d8f581e in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libc10.so)
frame #2: <unknown function> + 0x64d9fb5 (0x7f0583ed1fb5 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #3: <unknown function> + 0x1499c1d (0x7f057ee91c1d in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #4: <unknown function> + 0x64d4edc (0x7f0583eccedc in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #5: <unknown function> + 0x64c5915 (0x7f0583ebd915 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #6: c10d::TCPStore::TCPStore(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10d::TCPStoreOptions const&) + 0x3ff (0x7f0583ec063f in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #7: testMultiTenantStores(bool) + 0xd3 (0x55665c3b75c3 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #8: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x51 (0x55665c3f6911 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #9: <unknown function> + 0x57dd0 (0x55665c3e7dd0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #10: <unknown function> + 0x58265 (0x55665c3e8265 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #11: <unknown function> + 0x589c1 (0x55665c3e89c1 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #12: testing::internal::UnitTestImpl::RunAllTests() + 0x10e9 (0x55665c3ea149 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #13: testing::UnitTest::Run() + 0x98 (0x55665c3ea668 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #14: main + 0x44 (0x55665c3b5484 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #15: __libc_start_main + 0xf3 (0x7f05767f7083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #16: _start + 0x2e (0x55665c3b58de in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
" thrown in the test body.
unknown file:0: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 | module: rocm,triaged,module: flaky-tests,skipped | medium | Critical |
2,620,046,997 | ollama | smollm got cuda error | ### What is the issue?
Running [smollm](https://ollama.com/library/smollm:135m) produces a CUDA error.
Steps:
1. ollama run smollm:135m
2. Input any text
```
Error: an unknown error was encountered while running the model CUDA error: CUBLAS_STATUS_NOT_SUPPORTED
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:1896
cublasGemmBatchedEx(ctx.cublas_handle(), CUBLAS_OP_T, CUBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), CUDA_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), CUDA_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:106: CUDA error
```
Screen shot:

GPU:
nvidia RTX 3060ti
cuda version: 12.0

### OS
WSL2
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.3.14 | bug | low | Critical |
2,620,050,044 | pytorch | Dynamo capture of tensor.data assignment is not identical to eager call of tensor.data assignment | ### 🐛 Describe the bug
DeepSpeed uses a lot of `param.data =` statements to update the param data by gathering the param from other ranks.
However, we found that `param.data` assignment under torch.compile behaves differently from the original eager call of `param.data` assignment, which causes some strange problems for backward.
Torch dynamo captures the `param.data` attribute assignment as a `set_` op, and we found that what the `set_` op performs on the param tensor is not equivalent to the original `param.data` assignment in eager mode.
The obvious difference is the version counter of the tensor. There may also be other side effects from the `set_` op that the original `param.data` assignment doesn't have.
The following test example fails with torch.compile but succeeds in eager mode (this example shows part of the problem it causes):
```
import torch
import torch.nn as nn

@torch.no_grad()
def do_gather(param):
    # Simulating the deepspeed gather
    r1 = torch.ones(param.shape, dtype=param.dtype, device=param.device)
    r1 = r1.mul(1.0)
    param.data = r1.data

class SomeModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.param = nn.Parameter(torch.ones(5, 5))

    def forward(self, x):
        y = self.param ** 2
        print("HAIFENG version counter before:", self.param._version)
        do_gather(self.param)
        print("HAIFENG version counter after:", self.param._version)
        return y

model = SomeModule()
# comment out for eager mode
model = torch.compile(model)
input_tensor = torch.ones(5, 5)
output = model(input_tensor)
output.sum().backward()
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20240914+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 4190.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.6.0.dev20240914+cpu
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,620,061,244 | go | x/tools/gopls: "replace all occurrences" variants of refactor.{inline,extract}.variable | Original title: codeaction: replace every use-case of a variable with its defined expression, extract every identical expression within function to a new variable
### gopls version
Build info
----------
golang.org/x/tools/gopls (devel)
golang.org/x/tools/gopls@(devel)
github.com/BurntSushi/toml@v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/google/go-cmp@v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/typeparams@v0.0.0-20221212164502-fae10dda9338 h1:2O2DON6y3XMJiQRAS1UWU+54aec2uopH3x7MAiqGW6Y=
golang.org/x/mod@v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0=
golang.org/x/sync@v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/telemetry@v0.0.0-20240927184629-19675431963b h1:PfPrmVDHfPgLVpiYnf2R1uL8SCXBjkqT51+f/fQHR6Q=
golang.org/x/text@v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/tools@v0.21.1-0.20240508182429-e35e4ccd0d2d => ../
golang.org/x/vuln@v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/tools@v0.4.7 h1:9MDAWxMoSnB6QoSqiVr7P5mtkT9pOc1kSxchzPCnqJs=
mvdan.cc/gofumpt@v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU=
mvdan.cc/xurls/v2@v2.5.0 h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.2
### go env
```shell
GO111MODULE='auto'
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/xzb/Library/Caches/go-build'
GOENV='/Users/xzb/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/xzb/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/xzb/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/xzb/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD=''
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/gv/r110hgbx1gbgzp95kf_q71x40000gn/T/go-build1846322804=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
https://github.com/user-attachments/assets/11cb0a62-807d-41e8-8b40-260f13fa57e5
the first half is `:Refactor inline_var`, and the second half is `:Refactor extract_var new_var`; both come from a nvim plugin, [refactoring.nvim](https://github.com/ThePrimeagen/refactoring.nvim), which uses treesitter under the hood. I use these two quite frequently to eliminate unnecessary variables as well as to introduce new variables for repeated expressions. The problem is that it fails (or is not accurate) in complex situations, so I wish gopls would support these two refactorings directly.
### What did you see happen?
Currently gopls supports extracting a single expression to a new variable. I'd say extracting every repeated occurrence of the expression under the selection would be more useful, since the latter subsumes the former.
### What did you expect to see?
see above
### Editor and settings
_No response_
### Logs
_No response_ | FeatureRequest,gopls,Tools,Refactoring | low | Critical |
2,620,100,890 | rust | Tracking issue for way to express intraprocedural finite state machines | This is a tracking issue for some form of:
- https://github.com/rust-lang/rfcs/pull/3720
The feature gate for the issue is TBD.
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [ ] Approve as lang experiment.
- [ ] Accept an RFC.
- https://github.com/rust-lang/rfcs/pull/3720
- [ ] Implement in nightly.
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Related
TODO.
cc @bjorn3 @folkertdev @rust-lang/lang
| T-lang,C-tracking-issue | low | Critical |
2,620,187,780 | transformers | Expand AcceleratorConfig to accommodate other features such as NCCL timeout etc | ### Feature request
Expand `AcceleratorConfig` and the corresponding transformers trainer args to let transformers users access the full feature set of accelerate through the config arguments supported by `Accelerator()`. The args are being materialized here for use - https://github.com/huggingface/transformers/blob/a769ed45e17c44fd17b85c025863c4e4f2f73634/src/transformers/trainer.py#L5000
### Motivation
When using `HF/transformers` or `HF/trl SFTTrainer` with accelerate under the hood, it is unfortunate that only a limited set of arguments is exposed in `AcceleratorConfig`, leaving no control over other `Accelerator` features such as modifying the NCCL timeout.
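As a sketch of what an expanded config could look like (the field names here are hypothetical, not the actual transformers API):

```python
from dataclasses import dataclass, field
from datetime import timedelta
from typing import Optional

@dataclass
class AcceleratorConfig:
    # existing-style fields (illustrative subset)
    split_batches: bool = False
    dispatch_batches: Optional[bool] = None
    # proposed: surface Accelerator()/process-group knobs such as the
    # NCCL timeout, instead of hard-coding accelerate's defaults
    nccl_timeout: timedelta = field(default_factory=lambda: timedelta(minutes=30))

cfg = AcceleratorConfig(nccl_timeout=timedelta(hours=2))
print(int(cfg.nccl_timeout.total_seconds()))  # 7200
```

When the trainer builds the `Accelerator`, such a field would presumably be forwarded via accelerate's `InitProcessGroupKwargs(timeout=...)` kwargs handler.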
### Your contribution
I will be glad to raise a PR to expand AcceleratorConfig to enable the wide array of arguments supported by `Accelerator`. | Feature request | low | Minor |
2,620,222,352 | rust | compiletest: add a proper `supports-crate-type: xxx` directive | Apparently `needs-dynamic-linking` is not equivalent to checking if dylib or cdylib crate types are supported.
- In compiletest, `needs-dynamic-linking` performs a check based on target cfg's `dynamic_linking` field + `--print=cfg --target $TARGET`.
- However, target cfg has an additional field `only_cdylib` which, if `dynamic_linking` is `true`, indicates that only `cdylib` crate type is supported and not `dylib`. https://github.com/rust-lang/rust/blob/f2becdff0496003217e7fc6fbfcaf2640e162775/compiler/rustc_target/src/spec/mod.rs#L2148-L2153
- This is the case for `wasm` base, dynamic linking is supported but not `dylib` crate type, only `cdylib` is supported. https://github.com/rust-lang/rust/blob/f2becdff0496003217e7fc6fbfcaf2640e162775/compiler/rustc_target/src/spec/base/wasm.rs#L58-L62
_Originally posted by @jieyouxu in https://github.com/rust-lang/rust/issues/130860#issuecomment-2376713633_
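For illustration, a UI test could then declare the requirement directly; the directive name comes from this issue's title, and the exact syntax is hypothetical (mirroring the existing `needs-*` directives):

```text
//@ supports-crate-type: dylib
//@ compile-flags: --crate-type dylib
```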
| C-enhancement,T-bootstrap,E-medium,A-compiletest | low | Minor |
2,620,227,798 | flutter | PhysicalKeyboardKey is not correct with Backspace and Enter when using ko/zh/ja input method on Android emulator | iOS does not have this issue.
### Steps to reproduce
1. Run the sample code
2. Click the input text field, show the soft keyboard
3. Switch to en input method, click "Enter"
4. Swith to ko/zh/ja (Korean/Chinese/Japanese) input method, click "Enter"
### Expected results
`PhysicalKeyboardKey` should always be `PhysicalKeyboardKey#e14a9(usbHidUsage: "0x00070028", debugName: "Enter")`
### Actual results
The `PhysicalKeyboardKey` is correct for en input method, but not for the other 3 input methods.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Keyboard Event Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const KeyboardEventPage(),
);
}
}
class KeyboardEventPage extends StatefulWidget {
const KeyboardEventPage({super.key});
@override
_KeyboardEventPageState createState() => _KeyboardEventPageState();
}
class _KeyboardEventPageState extends State<KeyboardEventPage> {
final List<String> _lastKeyEvents = [];
final FocusNode _focusNode = FocusNode();
@override
Widget build(BuildContext context) {
final child = Scaffold(
appBar: AppBar(
title: const Text('Keyboard Event Demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
TextFormField(),
const SizedBox(height: 20),
for (final keyEvent in _lastKeyEvents)
Text(
keyEvent,
style: const TextStyle(fontSize: 16),
),
],
),
),
);
return FocusScope(
autofocus: true,
child: Focus(
autofocus: true,
canRequestFocus: true,
focusNode: _focusNode,
onKeyEvent: (node, event) {
if (event is KeyDownEvent) {
setState(() {
_lastKeyEvents.add(event.toString());
_lastKeyEvents.add('');
if (_lastKeyEvents.length > 10) {
_lastKeyEvents.removeAt(0);
}
});
}
return KeyEventResult.handled;
},
child: child,
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.4, on Ubuntu 22.04.4 LTS 5.15.153.1-microsoft-standard-WSL2, locale C.UTF-8)
• Flutter version 3.24.4 on channel stable at /home/username/workspace/flutter/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (5 days ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
• Android SDK at /home/username/Android/Sdk
• Platform android-34, build-tools 33.0.2
• Java binary at: /usr/bin/java
• Java version OpenJDK Runtime Environment (build 11.0.24+8-post-Ubuntu-1ubuntu322.04)
• All Android licenses accepted.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 14.0.0-1ubuntu1.1
• cmake version 3.22.1
• ninja version 1.10.1
• pkg-config version 0.29.2
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[!] Proxy Configuration
• HTTP_PROXY is set
! NO_PROXY is not set
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Ubuntu 22.04.4 LTS 5.15.153.1-microsoft-standard-WSL2
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 3 categories.
```
</details>
| a: text input,platform-android,a: internationalization,has reproducible steps,P3,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Critical |
2,620,264,468 | react | [DevTools Bug] Could not find commit data for root "1" | ### Website or app
Shadow
### Repro steps
Steps:-
1.Rendering the FlatList with my below code -
<FlatList
data={numbersArray}
keyExtractor={item => item.toString()}
renderItem={({item}) => (
<SquircleView
style={{width: 300, height: 200}}
squircleParams={{
cornerSmoothing: 0.7,
cornerRadius: 30,
fillColor: 'green',
}}
/>
)}
/> // data length is 1000
2. Start Scrolling Continuously in the flatlist, then this error happens


### How often does this bug happen?
Sometimes
### DevTools package (automated)
react-devtools-core
### DevTools version (automated)
6.0.1-c7c68ef842
### Error message (automated)
Could not find commit data for root "1"
### Error call stack (automated)
_No response_
### Error component stack (automated)
```text
at v_ (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1368389)
at div (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at ts (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1157861)
at /Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1376165
at Ks (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1173575)
at /Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1176231
at div (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at Ys (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1176065)
at Zc (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1247244)
at Lc (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1239735)
at xt (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1079120)
at ca (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1106654)
at Ec (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1227871)
at Y_ (/Users/sunaina/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1382695)
```
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=Could not find commit data for root in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,620,292,719 | ui | [bug]: pnpm dlx shadcn-ui@latest init error | ### Describe the bug
```console
➜ GitEval-FrontEnd pnpm dlx shadcn-ui@latest init
Packages: +161
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Progress: resolved 161, reused 161, downloaded 0, added 161, done
 WARN  Failed to create bin at /Users/shanyujia/Library/Caches/pnpm/dlx/xryjzggrf25w7gjt6j5rcej26y/192d703108e-143d2/node_modules/.pnpm/shadcn-ui@0.9.2/node_modules/shadcn-ui/node_modules/.bin/shadcn-ui. ENOENT: no such file or directory, open '/Users/shanyujia/Library/Caches/pnpm/dlx/xryjzggrf25w7gjt6j5rcej26y/192d703108e-143d2/node_modules/.pnpm/shadcn-ui@0.9.2/node_modules/shadcn-ui/dist/index.js'
 WARN  Failed to create bin at /Users/shanyujia/Library/Caches/pnpm/dlx/xryjzggrf25w7gjt6j5rcej26y/192d703108e-143d2/node_modules/.bin/shadcn-ui. ENOENT: no such file or directory, open '/Users/shanyujia/Library/Caches/pnpm/dlx/xryjzggrf25w7gjt6j5rcej26y/192d703108e-143d2/node_modules/shadcn-ui/dist/index.js'
 ENOENT  Command failed with ENOENT: shadcn-ui init
spawn shadcn-ui ENOENT
pnpm: Command failed with ENOENT: shadcn-ui init
spawn shadcn-ui ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:286:19)
    at onErrorNT (node:internal/child_process:484:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
➜ GitEval-FrontEnd
```
### Affected component/components
CLI
### How to reproduce
1. pnpm create vite@latest
2. pnpm install
3. pnpm add -D tailwindcss postcss autoprefixer
4. npx tailwindcss init -p
5. pnpm add -D @types/node
6. pnpm dlx shadcn-ui@latest init
### Codesandbox/StackBlitz link
_No response_
### Logs
### System Info
```bash
MacOS 15.0.1
pnpm 9.12.3
node 20.15.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,620,301,176 | pytorch | SubgraphMatcher may fail to match when the matching pattern having call_module IR | ### 🐛 Describe the bug
Hey guys, when I use SubgraphMatcher to match a pattern containing call_module IR, the match can silently fail.
For example, when I define a model like this:
```python
import torch
from torch.fx import symbolic_trace, subgraph_rewriter

class TestModel(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.mlp = torch.nn.Linear(input_dim, output_dim)
self.mlp2 = torch.nn.Linear(output_dim,1)
def forward(self,x):
return self.mlp2(self.mlp(x))
linear_func_model = symbolic_trace(TestModel(300,100))
linear_func_model.print_readable()
# the output is:
#class TestModel(torch.nn.Module):
# def forward(self, x):
# # No stacktrace found for following nodes
# mlp = self.mlp(x); x = None
# mlp2 = self.mlp2(mlp); mlp = None
# return mlp2
```
When I use the pattern and replacement below, it fails to match:
```python
class PatternClass(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.mlp3 = torch.nn.Linear(input_dim, 1)
def forward(self,x):
return self.mlp3(x)
class ReplacementClass(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.mlp1_1 = torch.nn.Linear(input_dim, output_dim)
def forward(self,x):
return self.mlp1_1(x + 1)
subgraph_rewriter.replace_pattern(linear_func_model, PatternClass(300,100), ReplacementClass(300,100))
linear_func_model.print_readable()
# here is output:
#class TestModel(torch.nn.Module):
# def forward(self, x):
# # No stacktrace found for following nodes
# mlp = self.mlp(x); x = None
# mlp2 = self.mlp2(mlp); mlp = None
# return mlp2
```
My torch version is '2.3.1+cu121'.
After reading the source code of SubgraphMatcher, I find that it only matches a pattern containing call_module IR when the source graph and the pattern call the module under the same attribute name (e.g. the `torch.nn.Linear` in TestModel and PatternClass would both have to be named `mlp`).
I think this is a bug, since matching should depend on the module class instead of the module's name.
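A torch-free sketch of the distinction (the `Linear` stand-in class is illustrative, not the actual matcher code): the matcher effectively compares the `call_module` target strings, while type-based matching would compare the resolved module classes:

```python
class Linear:  # stand-in for torch.nn.Linear in this sketch
    pass

# call_module nodes carry the submodule's attribute path as node.target
source = {"target": "mlp2", "module": Linear()}
pattern = {"target": "mlp3", "module": Linear()}

name_match = source["target"] == pattern["target"]              # what happens today
type_match = type(source["module"]) is type(pattern["module"])  # what is expected
print(name_match, type_match)  # False True
```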
### Versions
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 550.78
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.21.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.16.1
[pip3] onnxconverter-common==1.14.0
[pip3] onnxoptimizer==0.3.13
[pip3] onnxruntime==1.9.0
[pip3] onnxsim==0.4.36
[pip3] torch==2.3.1+cu121
[pip3] torchaudio==2.3.1+cu121
[pip3] torchfm==0.7.0
[pip3] torchvision==0.18.1+cu121
[pip3] triton==2.3.1
[conda] numpy 1.21.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.3.1+cu121 pypi_0 pypi
[conda] torchaudio 2.3.1+cu121 pypi_0 pypi
[conda] torchfm 0.7.0 pypi_0 pypi
[conda] torchvision 0.18.1+cu121 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | triaged,module: fx | low | Critical |
2,620,421,879 | pytorch | [Dynamo] TypeError: `list` object is not callable | ### 🐛 Describe the bug
`nn.Module` provides a `buffers()` method that is used during dynamo's compilation. Whenever a class that inherits from `nn.Module` overrides its `buffers` attribute, the problem above can be observed. `buffers` is a fairly common name, which is why I'm raising this.
Example: [Megatron-LM/.../distributed_data_parallel.py:225](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/distributed/distributed_data_parallel.py#L225)
### Error logs
```
Traceback (most recent call last):
File "/home/user/reproducers/buffers_repro/main.py", line 11, in <module>
compiled_model()
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 952, in _compile
raise InternalTorchDynamoError(
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1627, in CALL_FUNCTION_EX
if isinstance(fn, GetAttrVariable) and isinstance(fn.obj, TensorVariable):
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 110, in __instancecheck__
instance = instance.realize()
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/variables/lazy.py", line 63, in realize
self._cache.realize()
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/variables/lazy.py", line 29, in realize
self.vt = VariableBuilder(tx, self.source)(self.value)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 377, in __call__
vt = self._wrap(value)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 651, in _wrap
return self.wrap_module(value)
File "/home/user/all_venvs/venv1/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1382, in wrap_module
for b in value.buffers():
torch._dynamo.exc.InternalTorchDynamoError: TypeError: 'list' object is not callable
```
### Minified repro
```
import torch
import torch.nn as nn
class SomeModel(nn.Module):
def __init__(self):
super(SomeModel, self).__init__()
self.buffers = []
compiled_model = torch.compile(SomeModel())
compiled_model()
```
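The failure mechanism is ordinary attribute shadowing and is reproducible without torch; a sketch with a stand-in `Base` class (illustrative only):

```python
class Base:
    def buffers(self):  # plays the role of nn.Module.buffers()
        return iter(())

class Shadowing(Base):
    def __init__(self):
        self.buffers = []  # instance attribute shadows the inherited method

m = Shadowing()
err = None
try:
    list(m.buffers())  # mirrors dynamo's wrap_module calling value.buffers()
except TypeError as e:
    err = e
print(err)  # 'list' object is not callable
```

Renaming the attribute (e.g. to `self._buffer_list`) avoids the shadowing and hence the crash.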
### Versions
[pip3] torch==2.5.0
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,620,580,221 | electron | Use Writing Tools in macOS 15.1 | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Are there plans to support apple intelligence's [`Writing Tools`](https://developer.apple.com/documentation/updates/apple-intelligence/) in electron?
The option is available in Chromium (tested on v128). Digging through the source code I found [`render_view_context_menu_mac.mm`](https://github.com/chromium/chromium/blob/main/chrome/browser/ui/cocoa/renderer_context_menu/render_view_context_menu_mac.mm) and [`text_services_context_menu.mm`](https://github.com/chromium/chromium/blob/main/ui/menus/cocoa/text_services_context_menu.mm), but I'm honestly none the wiser how it's implemented 🫠

### Proposed Solution
`Writing Tools` could be exposed as a context menu `role`, added to the menu only if it's available.
Also need to expose [`isWritingToolsActive`](https://developer.apple.com/documentation/appkit/nstextview/4431696-iswritingtoolsactive/) for UI/UX reasons, but I guess that can be of a lower priority.
### Alternatives Considered
nil
### Additional Information
Requirements to use apple intelligence:
- macOS 15.1+
- M-series Mac
- [Enable apple intelligence in settings](https://support.apple.com/en-sg/guide/mac-help/mchl46361784/mac) | enhancement :sparkles: | high | Critical |
2,620,581,999 | vscode | Error window when accessing Settings | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: VSCode-win32-x64-1.94.2
- OS Version: Windows 11
When accessing Settings, I get the error:

Tried reporting this using 'Report Issue' from the 'Help' menu, as recommended by the template. Getting the same error again.
Launching with `code --disable-extensions` produces the same problem.
Steps to Reproduce: the error occurs every time. | bug,settings-editor,confirmation-pending | low | Critical |
2,620,590,302 | kubernetes | liveness or readiness probes timeout does not work | ### What happened?
When executing livenessProbe exec command, timeoutSeconds does not work.
timeoutSeconds: 5
periodSeconds: 30
exec command: /opt/entrypoint.sh healthcheck ,healthcheck will request a url without timeout. When this URL does not respond, a liveness request is received approximately every 2 minutes.
### What did you expect to happen?
When this URL does not respond, liveness request timed out after 5 seconds.
### How can we reproduce it (as minimally and precisely as possible)?
Maybe sleep 120s in command, timeoutSeconds: 5, periodSeconds: 30
### Anything else we need to know?
livenessProbe config:
```
livenessProbe:
exec:
command:
- /opt/entrypoint.sh
- healthcheck
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 12
readinessProbe:
exec:
command:
- /opt/entrypoint.sh
- healthcheck
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 12
```
### Kubernetes version
<details>
```
Server Version: version.Info{Major:"1", Minor:"29+", GitVersion:"v1.29.8-eks-a737599", GitCommit:"3277d87d88d0bf66b6368ce57e49b2f2aab01b0d", GitTreeState:"clean", BuildDate:"2024-08-26T21:27:41Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
```
</details>
### Cloud provider
<details>
AWS eks
</details>
| kind/bug,sig/node,triage/needs-information,needs-triage | low | Critical |
2,620,596,429 | pytorch | `pkg_resources` module is deprecated by setuptools | ### 🐛 Describe the bug
As noted on <https://setuptools.pypa.io/en/latest/pkg_resources.html>:
> Use of pkg_resources is deprecated in favor of [importlib.resources](https://docs.python.org/3.11/library/importlib.resources.html#module-importlib.resources), [importlib.metadata](https://docs.python.org/3.11/library/importlib.metadata.html#module-importlib.metadata) and their backports ([importlib_resources](https://pypi.org/project/importlib_resources), [importlib_metadata](https://pypi.org/project/importlib_metadata)). Some useful APIs are also provided by [packaging](https://pypi.org/project/packaging) (e.g. requirements and version parsing). Users should refrain from new usage of pkg_resources and should work to port to importlib-based solutions.
The only use of `pkg_resources` in PyTorch is:
https://github.com/pytorch/pytorch/blob/e201460f8aa1510b4c4686627d57b69756c4b916/test/run_test.py#L1756-L1762
And it's easy to replace it with something like:
```python
from importlib_metadata import Distribution, PackageNotFoundError
def check_pip_packages() -> None:
packages = [
"pytest-rerunfailures",
"pytest-flakefinder",
"pytest-xdist",
]
for package in packages:
try:
Distribution.from_name(package)
except PackageNotFoundError:
print_to_stderr(
f"Missing pip dependency: {package}, please run `pip install -r .ci/docker/requirements-ci.txt`"
)
sys.exit(1)
```
I'm not using `[d.name for d in importlib_metadata.distributions()]`, since the names it yields are not canonicalized (e.g. case, underscores, etc.). In contrast, `from_name` handles canonicalization.
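For reference, the PEP 503 normalization that makes `from_name` forgiving is just this (hand-rolled here for illustration; `packaging.utils.canonicalize_name` is the usual helper):

```python
import re

def canonicalize_name(name: str) -> str:
    # PEP 503: collapse runs of '-', '_', '.' to a single '-' and lowercase
    return re.sub(r"[-_.]+", "-", name).lower()

print(canonicalize_name("Pytest_RerunFailures"))  # pytest-rerunfailures
print(canonicalize_name("pytest.xdist"))          # pytest-xdist
```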
### Versions
PyTorch 2.5.0
cc @malfet @seemethere | module: build,triaged | low | Critical |
2,620,719,256 | next.js | SWC plugin runner no longer provides full file path to metadata | ### Link to the code that reproduces this issue
https://github.com/serg-and/nextjs-minimal-swc-plugin-issue
### To Reproduce
```sh
npm i
cd swc_plugin
npm run prepublish
cd ..
npm run dev
```
Logs will show that the provided filename in the metadata no longer includes the file path, only the filename.
### Current vs. Expected behavior
Next's pluggin runner no longer provides the known file path to the metadata for a SWC plugin, only providing the file name.
This conflicts with SWC's documentation, which says ([swc docs](https://rustdoc.swc.rs/swc_common/plugin/metadata/struct.TransformPluginMetadataContext.html#structfield.filename)):
```rust
/// Host side metadata context plugin may need to access.
/// This is a global context - any plugin in single transform will have same
/// values.
pub struct TransformPluginMetadataContext {
/// The path of the file being processed. This includes all of the path as
/// much as possible.
pub filename: Option<String>,
...
}
```
This breaks several packages such as [next-superjson-plugin](https://github.com/blitz-js/next-superjson-plugin) which rely on the path to determine whether some files are page or app router based.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: arm64
Version: #1 SMP Wed Jul 17 10:51:09 UTC 2024
Available memory (MB): 7185
Available CPU cores: 8
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: 15.0.1
react: 18.3.1
react-dom: 18.3.1
typescript: 5.0.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Pages Router, SWC
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I believe the issue originates from here https://github.com/vercel/next.js/blob/v15.0.1/turbopack/crates/turbopack-ecmascript-plugins/src/transform/swc_ecma_transform_plugins.rs#L189
`Some(ctx.file_name_str.to_string())` could simply be changed to `Some(ctx.file_path_str.to_string())` | SWC,Pages Router,linear: next | low | Major |
2,620,722,531 | pytorch | The MPI backend does not support transferring float16/bfloat16 tensors and raises "IndexError: map::at" when transferring them | ### 🐛 Describe the bug
When using the MPI communication backend to transfer a float16/bfloat16 tensor, the code raises "IndexError: map::at", as shown below:
```
Traceback (most recent call last):
  File "/home/torch-mccl/tests/test_mpi.py", line 10, in <module>
Traceback (most recent call last):
  File "/home/torch-mccl/tests/test_mpi.py", line 10, in <module>
    dist.all_reduce(tensor=ins)
  File "/root/miniconda3/envs/for_pytorch/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
    dist.all_reduce(tensor=ins)
  File "/root/miniconda3/envs/for_pytorch/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/for_pytorch/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1997, in all_reduce
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/for_pytorch/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1997, in all_reduce
    work.wait()
IndexError: map::at
    work.wait()
IndexError: map::at
```
The code that triggers it:
```python
import torch.distributed as dist
import torch

dist.init_process_group("mpi")
rank = dist.get_rank()
ins = torch.randn(2, 3).to(torch.float16)
print("rank-{}, before broadcast, ins: {}.\n".format(rank, ins))
dist.all_reduce(tensor=ins)
print("rank-{}, after broadcast, ins: {}.\n".format(rank, ins))
```
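For context, the error message likely originates from a C++ `std::map::at` lookup in the MPI backend that maps tensor dtypes to MPI datatypes, with no entry for float16/bfloat16. Below is a toy Python model of that suspected failure mode — an illustration only, not PyTorch's actual source, and the table contents are assumptions:

```python
# Illustration only: the backend is assumed to keep a dtype -> MPI datatype
# map and to look entries up with C++ std::map::at, whose out_of_range error
# ("map::at") surfaces in Python as an IndexError.
MPI_DATATYPE = {
    "float32": "MPI_FLOAT",
    "float64": "MPI_DOUBLE",
    "int32": "MPI_INT",
    "int64": "MPI_LONG",
}

def mpi_type_for(dtype: str) -> str:
    try:
        return MPI_DATATYPE[dtype]
    except KeyError:
        raise IndexError("map::at") from None  # mimic the reported error

print(mpi_type_for("float32"))  # MPI_FLOAT
try:
    mpi_type_for("float16")
except IndexError as exc:
    print(f"IndexError: {exc}")  # IndexError: map::at
```

A practical workaround until the backend gains float16/bfloat16 support is to up-cast the tensor to float32 before the collective and cast it back afterwards.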
### Versions
PyTorch version: 2.2.0a0+git8ac9b20
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-189-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] optree==0.13.0
[pip3] torch==2.2.0a0+git8ac9b20
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-include 2023.1.0 h06a4308_46343
[conda] numpy 1.25.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.2.0a0+git8ac9b20 dev_0 <develop>
| triaged,module: mpi | low | Critical |
2,620,802,182 | rust | std::process::Command::spawn cause panic on Windows 10 21H2 19044.2846 | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
use std::process::Stdio;

#[test]
fn test2() {
std::process::Command::new("adb")
.args(["devices"])
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.unwrap();
}
```
It only panics when using `std::process::Command` in **Test Debug** mode, but not in other cases,
and only on Windows 10 21H2 19044.2846.
Here is an issue discussing this problem; it includes two roughly ten-second videos demonstrating the situation.
https://github.com/tokio-rs/tokio/issues/6934
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-pc-windows-msvc
release: 1.82.0
LLVM version: 19.1.1
```
<details><summary>Backtrace</summary>
<p>
```
test test2 ... FAILED
successes:
successes:
failures:
---- test2 stdout ----
thread 'test2' panicked at src/main.rs:80:6:
called `Result::unwrap()` on an `Err` value: Os { code: 6, kind: Uncategorized, message: "鍙ユ焺鏃犳晥銆? }
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\panicking.rs:662
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/core\src\panicking.rs:74
2: core::result::unwrap_failed
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/core\src\result.rs:1677
3: enum2$<core::result::Result<std::process::Child,std::io::error::Error> >::unwrap
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library\core\src\result.rs:1102
4: static_lib::test2
at .\src\main.rs:77
5: static_lib::test2::closure$0
at .\src\main.rs:76
6: core::ops::function::FnOnce::call_once<static_lib::test2::closure_env$0,tuple$<> >
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library\core\src\ops\function.rs:250
7: core::ops::function::FnOnce::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/core\src\ops\function.rs:250
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
2
failures:
test2
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.07s
```
I deciphered the garbled message, and it seems to mean "The handle is invalid."
</p>
</details>
| O-windows-msvc,C-bug,T-libs,S-needs-repro | low | Critical |
2,620,842,202 | node | ASAN build does not work | ### Version
`main`
### Platform
```text
Darwin MacBookPro.fritz.box 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
```
$ git clone https://github.com/nodejs/node
$ cd node
$ ./configure --ninja --enable-asan
$ ninja -C out/Release
```
### How often does it reproduce? Is there a required condition?
Every time.
### What is the expected behavior? Why is that the expected behavior?
The build to complete successfully.
### What do you see instead?
<details><summary>With ninja</summary>
<p>
```
clang: warning: argument unused during compilation: '-stdlib=libc++' [-Wunused-command-line-argument]
[4210/4214] ACTION node: node_mksnapshot_9b7a2d2290b02e76d66661df74749f56
FAILED: gen/node_snapshot.cc
cd ../../; export BUILT_FRAMEWORKS_DIR=/Users/codebytere/Developer/node/out/Release; export BUILT_PRODUCTS_DIR=/Users/codebytere/Developer/node/out/Release; export CONFIGURATION=Release; export EXECUTABLE_NAME=node; export EXECUTABLE_PATH=node; export FULL_PRODUCT_NAME=node; export PRODUCT_NAME=node; export PRODUCT_TYPE=com.apple.product-type.tool; export SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk; export SRCROOT=/Users/codebytere/Developer/node/out/Release/../../; export SOURCE_ROOT="${SRCROOT}"; export TARGET_BUILD_DIR=/Users/codebytere/Developer/node/out/Release; export TEMP_DIR="${TMPDIR}"; export XCODE_VERSION_ACTUAL=1610;/Users/codebytere/Developer/node/out/Release/node_mksnapshot /Users/codebytere/Developer/node/out/Release/gen/node_snapshot.cc
AddressSanitizer:DEADLYSIGNAL
=================================================================
==7137==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0002771e34e4 bp 0x00016d89dd70 sp 0x00016d89dcc0 T0)
==7137==The signal is caused by a WRITE memory access.
==7137==Hint: address points to the zero page.
#0 0x2771e34e4 in __asan_get_shadow_mapping+0x14 (libsystem_sanitizers.dylib:arm64e+0x44e4)
#1 0x102f4a4c0 in node::InitializeOncePerProcessInternal(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, node::ProcessInitializationFlags::Flags) node.cc:1178
#2 0x102f48bec in node::InitializeOncePerProcess(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, node::ProcessInitializationFlags::Flags) node.cc:1221
#3 0x102cbf6e0 in BuildSnapshot(int, char**) node_mksnapshot.cc:65
#4 0x19c008270 (<unknown module>)
==7137==Register values:
x[0] = 0x000000016d89dce0 x[1] = 0x0000000000000000 x[2] = 0x000000000000060c x[3] = 0x000000702db33ab4
x[4] = 0x000000702db33700 x[5] = 0x0000000000000001 x[6] = 0x000000016d0a4000 x[7] = 0x0000000000000001
x[8] = 0x0000000000000000 x[9] = 0x0000000000000000 x[10] = 0x0000000106b4d300 x[11] = 0x0000000000000003
x[12] = 0x000000010c7a2620 x[13] = 0x0000000000000000 x[14] = 0x0000000000000000 x[15] = 0x000010700001ffff
x[16] = 0x00000002771e34d0 x[17] = 0x000000010fa2c5e0 x[18] = 0x0000000000000000 x[19] = 0x000000016d89dd00
x[20] = 0x0000000000000000 x[21] = 0x0000000000000000 x[22] = 0x000000016d89dce0 x[23] = 0x000000016d89dcc0
x[24] = 0x000000702db33b98 x[25] = 0x0000007000020000 x[26] = 0x000000016d89ded0 x[27] = 0x00000000218f44c4
x[28] = 0x0000007000020000 fp = 0x000000016d89dd70 lr = 0x0000000106b4d3c0 sp = 0x000000016d89dcc0
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (libsystem_sanitizers.dylib:arm64e+0x44e4) in __asan_get_shadow_mapping+0x14
==7137==ABORTING
/bin/sh: line 1: 7137 Abort trap: 6 /Users/codebytere/Developer/node/out/Release/node_mksnapshot /Users/codebytere/Developer/node/out/Release/gen/node_snapshot.cc
ninja: build stopped: subcommand failed.
ERROR Failed to run command:
Exit Code: "1"
```
</p>
</details>
<details><summary>With Cmake</summary>
<p>
```
node_mksnapshot(80717,0x20149f840) malloc: nano zone abandoned due to inability to reserve vm space.
AddressSanitizer:DEADLYSIGNAL
=================================================================
==80717==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0002771e34e4 bp 0x00016d5c1eb0 sp 0x00016d5c1e00 T0)
==80717==The signal is caused by a WRITE memory access.
==80717==Hint: address points to the zero page.
#0 0x2771e34e4 in __asan_get_shadow_mapping+0x14 (libsystem_sanitizers.dylib:arm64e+0x44e4)
#1 0x10316e940 in node::InitializeOncePerProcessInternal(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, node::ProcessInitializationFlags::Flags) node.cc:1178
#2 0x10316d06c in node::InitializeOncePerProcess(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, node::ProcessInitializationFlags::Flags) node.cc:1221
#3 0x102f9b6e0 in BuildSnapshot(int, char**) node_mksnapshot.cc:65
#4 0x19c008270 (<unknown module>)
==80717==Register values:
x[0] = 0x000000016d5c1e20 x[1] = 0x0000000000000000 x[2] = 0x000000000000060c x[3] = 0x000000702dad82dc
x[4] = 0x000000702dad7f40 x[5] = 0x0000000000000001 x[6] = 0x00000001695c8000 x[7] = 0x0000000000000001
x[8] = 0x0000000000000000 x[9] = 0x0000000000000000 x[10] = 0x0000000106da9404 x[11] = 0x0000000000000003
x[12] = 0x000000010ca7bf20 x[13] = 0x0000000000000000 x[14] = 0x0000000000000000 x[15] = 0x000010700001ffff
x[16] = 0x00000002771e34d0 x[17] = 0x000000010fcf45e0 x[18] = 0x0000000000000000 x[19] = 0x000000016d5c1e40
x[20] = 0x0000000000000000 x[21] = 0x0000000000000000 x[22] = 0x000000016d5c1e20 x[23] = 0x000000016d5c1e00
x[24] = 0x000000702dad83c0 x[25] = 0x0000007000020000 x[26] = 0x000000016d5c2010 x[27] = 0x000000002194f7e4
x[28] = 0x0000007000020000 fp = 0x000000016d5c1eb0 lr = 0x0000000106da94c4 sp = 0x000000016d5c1e00
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (libsystem_sanitizers.dylib:arm64e+0x44e4) in __asan_get_shadow_mapping+0x14
==80717==ABORTING
/bin/sh: line 1: 80717 Abort trap: 6 "/Users/codebytere/Developer/node/out/Release/node_mksnapshot" "/Users/codebytere/Developer/node/out/Release/obj/gen/node_snapshot.cc"
make[1]: *** [/Users/codebytere/Developer/node/out/Release/obj/gen/node_snapshot.cc] Error 134
rm dc7b10542b51f7aefb79da9839d02284c5cf142d.intermediate 95f5d41ef1e5251cb9c0f66ecb0379795d352418.intermediate ab7861fd73cbdd09111883c2412cd499c35872cd.intermediate 35112d31ecc40f37aeca48f1d0d46ace17a2d5c4.intermediate
make: *** [node] Error 2
```
</p>
</details>
### Additional information
I can get it to build if I pass `--without-node-snapshot`, but then I hit the same runtime issue as @bnoordhuis. | build,macos | low | Critical |
2,620,856,639 | Python | Request to Implement Shor Algorithm | ### Feature description
Hi, I would like to implement the algorithm's logic without using Qiskit or Cirq,
to show how Shor's algorithm, which is used in quantum computing,
breaks RSA cryptography by efficiently factoring the modulus n that is used to derive a user's public and private keys. | enhancement | low | Minor |
2,620,895,343 | go | proposal: runtime/mainthread: add mainthread.Do for mediating access to the main thread | ### Proposal Details
This is https://github.com/golang/go/issues/64777#issuecomment-2424750130 in proposal form. It is a reduced and compatible variant of https://github.com/golang/go/issues/64777#issuecomment-2261119181.
I propose to add a new package, `mainthread`, with a single function, `Do`, that allows Go programs to execute a function on the main thread.
```go
// Package mainthread mediates access to the program's main thread.
//
// Most Go programs do not need to run on specific threads
// and can ignore this package, but some C libraries, often GUI-related libraries,
// only work when invoked from the program's main thread.
//
// [Do] runs a function on the main thread. No other code can run on the main thread
// until that function returns.
//
// Each package's initialization functions always run on the main thread,
// as if by successive calls to Do(init).
//
// For compatibility with earlier versions of Go, if an init function calls [runtime.LockOSThread],
// then package main's func main also runs on the main thread, as if by Do(main).
package mainthread // imported as "runtime/mainthread"
// Do calls f on the main thread.
// Nothing else runs on the main thread until f returns.
// If f calls Do, the nested call panics.
//
// Package initialization functions run as if by Do(init).
// If an init function calls [runtime.LockOSThread], then package main's func main
// runs as if by Do(main), until the thread is unlocked using [runtime.UnlockOSThread].
//
// Do panics if the Go runtime is not in control of the main thread, such as in build modes
// c-shared and c-archive.
func Do(f func())
```
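A hypothetical usage sketch of the proposed API (the `runtime/mainthread` package does not exist yet and `initGUI` is a placeholder for a main-thread-only C call, so this does not compile today):

```go
package main

import "runtime/mainthread" // proposed package, not yet in the standard library

func main() {
	// Nothing else runs on the main thread until the function returns.
	mainthread.Do(func() {
		initGUI() // placeholder for a C GUI call that must run on the main thread
	})
}
```

Here `Do` would replace today's pattern of calling `runtime.LockOSThread` in an init function to keep `main` pinned to the main thread.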
The larger proposal (https://github.com/golang/go/issues/64777#issuecomment-2261119181) adds `Yield` and `Waiting` to support sharing the main thread in a Go program. However, the Go runtime doesn't always have control over the main thread, most notably in c-shared or c-archive mode on platforms such as Android. In those cases, the platform facility for mediating main thread access are strictly superior to `mainthread.Do`. See https://github.com/golang/go/issues/64777#issuecomment-2384085299 for a detailed analysis and assumptions.
In short, I believe it's better to accept this simpler proposal to only allow Go programs access to the main thread when the Go runtime has control over it, and let other cases be handled by platform API.
I hope this can be implemented in Go 1.24. | Proposal | medium | Critical |
2,620,941,233 | kubernetes | response_sizes metrics for verb=WATCH are only present for built-in resources. | ### What happened?
The `apiserver_response_sizes` metric correctly reports `GET` and `LIST` metrics for all resources. However, for `WATCH`, it is only present for built-in resources. This means resources defined in CustomResourceDefinitions are not present.
It's not 100% clear to me what the expected behavior was when it was introduced (https://github.com/kubernetes/kubernetes/pull/49117), and whether the plan was to not have `watch` metrics here - but we have found it useful in practice. The verb definition here has changed a bit as well, and https://github.com/kubernetes/kubernetes/pull/93523 + https://github.com/kubernetes/kubernetes/pull/81660 split the definition of `verb` and `reportedVerb`. And given that this metric is now treated as STABLE, it feels wrong to me to remove the existing `WATCH` metrics here.
Here is an example of how it works today;
```bash
$ kubectl get --raw /metrics | egrep 'apiserver_response_sizes_sum\{component="apiserver",group="(cilium.io)?",resource="(pods|ciliumnetworkpolicies)",scope="cluster"'
apiserver_response_sizes_sum{component="apiserver",group="",resource="pods",scope="cluster",subresource="",verb="LIST",version="v1"} 1.00624648e+08
apiserver_response_sizes_sum{component="apiserver",group="",resource="pods",scope="cluster",subresource="",verb="WATCH",version="v1"} 2.6141637319e+10
apiserver_response_sizes_sum{component="apiserver",group="cilium.io",resource="ciliumnetworkpolicies",scope="cluster",subresource="",verb="LIST",version="v2"} 1.723691e+06
```
As seen both resources have `LIST` metrics, but only the built in resource `pods` has `WATCH`.
### What did you expect to happen?
I would expect the above request to also contain WATCH for the custom type as well;
```bash
$ kubectl get --raw /metrics | egrep 'apiserver_response_sizes_sum\{component="apiserver",group="(cilium.io)?",resource="(pods|ciliumnetworkpolicies)",scope="cluster"'
apiserver_response_sizes_sum{component="apiserver",group="",resource="pods",scope="cluster",subresource="",verb="LIST",version="v1"} 1.00624648e+08
apiserver_response_sizes_sum{component="apiserver",group="",resource="pods",scope="cluster",subresource="",verb="WATCH",version="v1"} 2.6141637319e+10
apiserver_response_sizes_sum{component="apiserver",group="cilium.io",resource="ciliumnetworkpolicies",scope="cluster",subresource="",verb="LIST",version="v2"} 1.723691e+06
apiserver_response_sizes_sum{component="apiserver",group="cilium.io",resource="ciliumnetworkpolicies",scope="cluster",subresource="",verb="WATCH",version="v2"} 1.090217513e+09
```
### How can we reproduce it (as minimally and precisely as possible)?
Install a CRD, have at least one watcher on it, and use `kubectl get --raw /metrics | grep 'apiserver_response_sizes_sum'` and look for the `WATCH` metric of that resource.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.0
```
Seems to be reproduceable across all recent versions.
</details>
### Cloud provider
<details>
n/a - can reproduce in kind.
</details>
### OS version
<details>
n/a.
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Minor |
2,620,941,233 | angular | Service worker not updating after its response headers are updated | ### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
Yes
### Description
Hello,
I have come across a strange issue. I have a CSP policy set up by the server through response headers, and I also have a service worker running for the same page.
When I update the CSP on the server, it gets updated for index.html, but the service worker itself does not get the new CSP. The service worker throws CSP errors even though index.html and the other assets got the updated CSP.
Could you please provide a way to reload/update the service worker (ngsw-worker.js) when its own response headers are updated?
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular CLI: 17.1.4
Node: 20.12.2
Package Manager: npm 10.8.2
OS: win32 x64

Angular: 17.1.2
... animations, cdk, common, compiler, compiler-cli, core, forms
... language-service, localize, platform-browser
... platform-browser-dynamic, platform-server, router
... service-worker

Package                         Version
---------------------------------------------------------
@angular-devkit/architect       0.1701.2
@angular-devkit/build-angular   17.1.2
@angular-devkit/core            17.1.2
@angular-devkit/schematics      17.1.2
@angular/cli                    17.1.4
@schematics/angular             17.1.2
ng-packagr                      17.1.2
rxjs                            7.8.0
typescript                      5.3.3
zone.js                         0.14.3
```
### Anything else?
_No response_ | help wanted,area: service-worker | low | Critical |
2,620,951,697 | godot | Animation is bugged on import | ### Tested versions
4.2.1 stable
### System information
Godot v4.2.1.stable - Windows 10.0.19044 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 31.0.15.5244) - Intel(R) Core(TM) i5-4440 CPU @ 3.10GHz (4 Threads)
### Issue description
Importing animations into Godot mangles them - bones aren't being parented, or they are moved to another position for whatever reason. An example of one of the animations, in Blender (4.1) and Godot: (sorry for the low fps)
https://github.com/user-attachments/assets/44747829-cb1c-4196-8433-bc274634c463
https://github.com/user-attachments/assets/228b92cb-39b6-4ed1-8726-46a13bb27f39
I tried disabling the Optimizer - it didn't help. I am also using the Dynamic Parent addon for Blender - could that be the issue?
### Steps to reproduce
Just open the project and open v_hammer.tscn. All of the animations there are messed up in some way. In idle, the hammer is moved, in draw it moves on the last frame, in holster it doesn't move at all, and in attack_1 the head of the hammer is also moved slightly - you might want to slow down the animation speed to see it.
### Minimal reproduction project (MRP)
[AnimationBug.zip](https://github.com/user-attachments/files/17556058/AnimationBug.zip)
| bug,confirmed,topic:import,topic:animation | low | Critical |
2,620,964,838 | TypeScript | Using JSX syntax in a namespace may incorrectly use the exported members of the current namespace as intrinsic elements (starting with a lowercase letter) | ### 🔎 Search Terms
jsx namespace preserve export intrinsic
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about (no entries for this)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?jsx=1&ts=5.6.3#code/JYWwDg9gTgLgBAbzgKQMoA04F84DMoQhwBEUApgIYDGMxA3AFBkAekscAdhSGQM5jUyeaEQQM4cFm3hUIHXvGAcwAV3gBeTioA22xhKnQZchXBh8NcADxLV8APQA+cXHv2JHz168A9APwMWAxAA
### 💻 Code
WITH `jsx: preserve` enabled in tsconfig.json
```ts
export namespace form {
export const input = null;
export const test = <input />
}
```
### 🙁 Actual behavior
generated code:
```js
export var form;
(function (form) {
form.input = null;
form.test = <form.input />;
})(form || (form = {}));
```
### 🙂 Expected behavior
generated code:
```js
export var form;
(function (form) {
form.input = null;
form.test = <input />;
})(form || (form = {}));
```
### Additional information about the issue
In fact, I observed this problem in a certain version of swc (swc does not support preserve mode, and the problem can be reproduced without enabling preserve mode). Considering that swc largely replicates the behavior of tsc, I suspected that this problem also exists in ts, and it is true, but only in jsx: preserve mode | Bug,Help Wanted | low | Minor |
2,620,997,877 | vscode | Adding `quickfix.biome` to `editor.codeActionsOnSave` slows down saving of `*.ts` files | transferred from: https://github.com/microsoft/vscode-discussions/discussions/1664
ref: https://github.com/biomejs/biome-vscode/issues/229
Does this issue occur when all extensions are disabled?: No. This issue occurs when `typescript-language-features` is enabled.
- VS Code Version: 1.95.0-insider (Universal)
- OS Version: Darwin arm64 24.0.0 / macOS 15.0.1
## Background
The Biome VS Code extension guides users to add `quickfix.biome` to `editor.codeActionsOnSave` to fix issues when saving files.
- https://biomejs.dev/reference/vscode/#fix-on-save
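For reference, the configuration the Biome docs guide users toward looks roughly like this (a sketch; the exact value accepted for the key may vary across VS Code/Biome versions):

```jsonc
// settings.json
{
  "editor.codeActionsOnSave": {
    "quickfix.biome": "explicit"
  }
}
```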
However, enabling this setting causes saving `*.ts` files to be delayed by about 500ms.
https://github.com/user-attachments/assets/fc8f4134-2422-411d-9691-1e658e62627b
Surprisingly, this delay occurs even if the Biome extension is not installed. Additionally, disabling the built-in VS Code extension `typescript-language-features` eliminates the delay. For details of the issue and reproduction steps, please see the following comment:
- https://github.com/biomejs/biome-vscode/issues/229#issuecomment-2356257770
## Cause of the delay
As a result of my investigation, I found that the delay is caused by the following code in `typescript-language-features`:
- https://github.com/microsoft/vscode/blob/684f9d47a4a97d7ea747915f4d1595f86f98aaad/extensions/typescript-language-features/src/languageFeatures/quickFix.ts#L254-L256
When `typescript-language-features` receives a `quickfix` code action, it attempts to automatically fix some diagnostics. The execution process is as follows:
1. Receive a `quickfix` code action.
2. Check if there are diagnostics in the open file. If not, exit.
3. Wait for 500ms.
4. Retrieve diagnostics again.
5. Automatically fix any fixable diagnostics obtained in step 4.
The 500ms wait is because the diagnostics obtained in step 2 might be outdated. If this wait time is removed, code fixes might be performed based on old diagnostics. To prevent this, `typescript-language-features` waits for 500ms.
Furthermore, `typescript-language-features` does not only respond to the `quickfix` code action. It responds to all code actions that start with `quickfix`, such as `quickfix.biome` and `quickfix.some-name`. Therefore, adding `quickfix.biome` to `editor.codeActionsOnSave` slows down the saving of `*.ts` files.
As an experiment, I commented out [the line that waits for 500ms](https://github.com/microsoft/vscode/blob/684f9d47a4a97d7ea747915f4d1595f86f98aaad/extensions/typescript-language-features/src/languageFeatures/quickFix.ts#L254-L256) and rebuilt vscode, and the saving of `*.ts` files became faster.
https://github.com/user-attachments/assets/a2e727e6-15eb-4701-890b-71b9dd844272
## Questions
I have several questions regarding this delay issue.
### Q1. Is it acceptable to add `quickfix` or `quickfix.biome` to `editor.codeActionsOnSave`?
In the first place, does VS Code intend for `quickfix` or `quickfix.biome` to be added to `editor.codeActionsOnSave`? Neither `quickfix` nor `quickfix.biome` are included among the completion items for `editor.codeActionsOnSave`.
<img width="592" alt="image" src="https://github.com/user-attachments/assets/54430320-098b-48ee-b4eb-9512d3df9d2b">
Is this because adding `quickfix` or `quickfix.biome` to `editor.codeActionsOnSave` is not intended? Should the Biome extension use `source.fixAll.biome` instead of `quickfix.biome`?
### Q2. Is it okay for `typescript-language-features` to react to `quickfix.biome`?
This delay occurs because `typescript-language-features` is reacting to `quickfix.biome`. I feel that `typescript-language-features` is overly sensitive to code actions. Is this the intended behavior?
Should `typescript-language-features` be modified to react only to `quickfix` and `quickfix.ts`?
### Q3. Is it okay that the `source.fixAll.ts` code action does not wait for 500ms?
In the `quickfix` code action, there is a 500ms wait, but in the `source.fixAll.ts` code action, this does not occur.
- https://github.com/microsoft/vscode/blob/684f9d47a4a97d7ea747915f4d1595f86f98aaad/extensions/typescript-language-features/src/languageFeatures/fixAll.ts#L211-L220
Therefore, in the `source.fixAll.ts` code action, there is a possibility that code fixes are performed based on old diagnostics. Is this the intended behavior?
| bug,typescript,editor-code-actions,perf | low | Critical |
2,621,028,315 | pytorch | Operators being traced as method calls in torch.fx | ### 🐛 Describe the bug
I expected binary operators (`+`, `-`, `*`, `/`, etc.) to be traced as a `call_function` of `operator.add` (or `sub`, `mul`, `truediv`, etc.), which is what happens in some cases, e.g. `ParameterTensorAdd` below, but not in other cases, e.g. in `ConstantTensorAdd` it's traced as a `call_method`. Is it expected behaviour for the operator in `ConstantTensorAdd` to become a `call_method`?
```python
import torch
import torch.fx as fx
import torch.nn as nn
class ParameterTensorAdd(nn.Module):
def __init__(self):
super().__init__()
self.constant = nn.Parameter(torch.ones(10))
def forward(self, x):
return self.constant + x
class ConstantTensorAdd(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return torch.ones(10) + x
def print_fx(cls):
model = cls()
traced_model = fx.symbolic_trace(model)
print(cls.__name__)
print(traced_model.graph)
print()
print_fx(ParameterTensorAdd)
print_fx(ConstantTensorAdd)
```
```
ParameterTensorAdd
graph():
%x : [num_users=1] = placeholder[target=x]
%constant : [num_users=1] = get_attr[target=constant]
%add : [num_users=1] = call_function[target=operator.add](args = (%constant, %x), kwargs = {})
return add
ConstantTensorAdd
graph():
%x : [num_users=1] = placeholder[target=x]
%_tensor_constant0 : [num_users=1] = get_attr[target=_tensor_constant0]
%add : [num_users=1] = call_method[target=add](args = (%_tensor_constant0, %x), kwargs = {})
return add
```
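If it helps, the asymmetry can be worked around after tracing with a small graph rewrite. As far as I can tell, the `call_method` form appears because the left-hand operand is a concrete tensor created during tracing, so the op is recorded through `Proxy.__torch_function__` as a tensor method rather than through the operator protocol. Below is a sketch using the standard fx transformation pattern; the `normalize_tensor_add` helper is mine, not a torch API:

```python
import operator

import torch
import torch.fx as fx
import torch.nn as nn


class ConstantTensorAdd(nn.Module):
    def forward(self, x):
        return torch.ones(10) + x


def normalize_tensor_add(gm: fx.GraphModule) -> fx.GraphModule:
    """Rewrite call_method "add" nodes into call_function operator.add nodes."""
    for node in list(gm.graph.nodes):
        if node.op == "call_method" and node.target == "add":
            # Insert an equivalent call_function node with the same operands,
            # reroute all users to it, then drop the old node.
            with gm.graph.inserting_after(node):
                new_node = gm.graph.call_function(operator.add, node.args, node.kwargs)
            node.replace_all_uses_with(new_node)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm


gm = normalize_tensor_add(fx.symbolic_trace(ConstantTensorAdd()))
print(gm.graph)  # the add now appears as call_function[target=operator.add]
```

This does not answer whether the `call_method` tracing is intended, but it gives downstream passes a single canonical node form to match against.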
### Versions
```
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 5400.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualisation: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 24 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.3.0+cpu
[pip3] torchvision==0.18.0+cpu
[conda] No relevant packages
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | triaged,module: fx | low | Critical |
2,621,066,000 | PowerToys | Mouse without borders become crazy when the wifi signal lost and restore. | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
I have 4 machines with the same version of PowerToys: 2 of them wired and 2 on wifi.
With one wired machine as the master, I use the other 3 machines as slaves.
When the wifi connection drops and reconnects, Mouse Without Borders goes crazy: if I move my mouse around the screen, the cursor clicks randomly and produces keyboard keystrokes. It seems to repeat my last actions, but in random places.
I have MWB with the service enabled and I switch machines with shortcuts; I do not use moving the cursor past the screen edge to change machines.
### ✔️ Expected Behavior
If I need to reload the service, reload the saved PowerToys config, or take some similar action to restore normal cursor and keyboard behavior, that's not a problem... but I don't know of one.
The only solution is to restart the wifi PC to restore normal function.
### ❌ Actual Behavior
The cursor goes crazy and performs random clicks and keyboard strokes.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,621,066,425 | go | cmd/go: inconsistent reporting of default Go version | In the transcript below, there is no go line in the go.mod.
Builds say I can't use post-Go 1.16 features, but then when I say
"go get go@1.21.0" (using a sufficiently old version so as not to break any users),
it claims I am downgrading from Go 1.24.
```
% go install
# rsc.io/tmp/jsonfmt
./jsonfmt.go:95:9: predeclared any requires go1.18 or later (-lang was set to go1.16; check go.mod)
% cat go.mod
module rsc.io/tmp/jsonfmt
% go get go@go1.21.0
go: downgraded go 1.24 => 1.21.0
go: added toolchain go1.24
% cat go.mod
module rsc.io/tmp/jsonfmt
go 1.21.0
toolchain go1.24
%
``` | NeedsInvestigation,GoCommand | low | Minor |
2,621,072,470 | godot | Error when setting Texture3D slices to 16x16 | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Laptop GPU (NVIDIA; 32.0.15.6603) - 13th Gen Intel(R) Core(TM) i7-13700H (20 Threads)
### Issue description
When importing a Texture3D resource with the following characteristics:
- 2048x2048 resolution
- channels used: RGBA
- number of slices: 16
Godot reports the following error:
```
Data for slice index 0 (mapped to layer 0) differs in size (supplied: 4793584) than what is required by the format (4793568).
servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:1070 - Condition "texture.rd_texture.is_null()" is true.
Attempting to use an uninitialized RID
servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:1362 - Parameter "tex" is null.
```
Setting the slices to 8x8 does not result in an error. I've tried different combinations of resolutions and import settings, but only the number of slices seems to be the culprit. I've attached the texture file to this report.
[T_Fog_1.zip](https://github.com/user-attachments/files/17556389/T_Fog_1.zip)
I realize that 16x16 slices are over-the-top. Perhaps then the number of slices should be capped to 8?
### Steps to reproduce
- Import the texture with the following settings:

- Use it in a shader (see attached example)
### Minimal reproduction project (MRP)
A test scene is attached.
[godot_bug_noise.zip](https://github.com/user-attachments/files/17556563/godot_bug_noise.zip)
| bug,topic:rendering | low | Critical |
2,621,090,758 | vscode | Search does not find pattern in closed files |
Type: <b>Bug</b>
Search (including Quick Search) fails to find any matches in files that are not currently open.
Below follows the output from Search Engine Trace logs:
```json
{
"_reason": "searchView",
"folderQueries": [
{
"folder": {
"$mid": 1,
"fsPath": "c:\\**\\**\\**\\**", // Obfuscated
"_sep": 1,
"external": "file:///c%3A/**/**/**/**", // Obfuscated
"path": "/c:/**/**/**/**", // Obfuscated
"scheme": "file"
},
"excludePattern": [
{
"pattern": {
"**/.git": true,
"**/.svn": true,
"**/.hg": true,
"**/CVS": true,
"**/.DS_Store": true,
"**/Thumbs.db": true,
"**/node_modules": true,
"**/bower_components": true,
"**/*.code-search": true
}
}
],
"fileEncoding": "utf8",
"disregardIgnoreFiles": true,
"disregardGlobalIgnoreFiles": true,
"disregardParentIgnoreFiles": true,
"ignoreSymlinks": false
}
],
"usingSearchPaths": false,
"onlyOpenEditors": false,
"maxResults": 20000,
"type": 2,
"contentPattern": {
"pattern": "20241028T190808.594+09:00",
"isRegExp": false,
"isCaseSensitive": false,
"isWordMatch": false,
"notebookInfo": {
"isInNotebookMarkdownInput": true,
"isInNotebookMarkdownPreview": true,
"isInNotebookCellInput": true,
"isInNotebookCellOutput": true
},
"wordSeparators": "`~!@#$%^&*()-=+[{]}\\|;:'\",.<>/?"
},
"previewOptions": { "matchLines": 1, "charsPerLine": 1000 },
"usePCRE2": false
}
```
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-1245U (12 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.69GB (2.84GB free)|
|Process Argv|--crash-reporter-id 9b81992b-5f46-4376-95cb-8371addc56fb|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (15)</summary>
Extension|Author (truncated)|Version
---|---|---
Bookmarks|ale|13.5.0
gitlens|eam|15.6.2
prettier-vscode|esb|11.0.0
gc-excelviewer|Gra|4.2.62
vscode-peacock|joh|4.2.2
rainbow-csv|mec|3.12.0
git-graph|mhu|1.30.0
vscode-docker|ms-|1.29.3
cpptools|ms-|1.22.10
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.10.0
powershell|ms-|2024.2.2
material-icon-theme|PKi|5.12.0
vscode-xml|red|0.27.1
code-spell-checker|str|3.0.1
(1 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | bug,info-needed,search | low | Critical |
2,621,106,414 | transformers | Please don't kill BetterTransformer — 1.88x faster inference than SDPA | ### Feature request
I would like to request that BetterTransformer not be deprecated. See also [optimum#2083](https://github.com/huggingface/optimum/issues/2083).
This issue is intended to track the lack of feature-parity in Hugging Face `transformers` with BetterTransformer.
### Motivation
This is a simple example that demonstrates just how valuable BetterTransformer is to users of BERT-like models:
```python
import torch
from transformers import RobertaModel, RobertaTokenizerFast
# BEGIN CONFIG #
MODEL_NAME = 'umarbutler/emubert'
EXAMPLE_INPUT = """\
The Parliament shall, subject to this Constitution,\
have power to make laws for the peace, order, and good\
government of the Commonwealth with respect to:\
(i) trade and commerce with other countries, and among\
the States;\
(ii) taxation; but so as not to discriminate between"""
# END CONFIG #
sdpa_model = RobertaModel.from_pretrained(MODEL_NAME, attn_implementation = 'sdpa').to(torch.bfloat16).to('cuda').eval()
bettertransformer_model = RobertaModel.from_pretrained(MODEL_NAME).to(torch.bfloat16).to_bettertransformer().to('cuda').eval()
tokenizer = RobertaTokenizerFast.from_pretrained(MODEL_NAME)
input_tokens = tokenizer(EXAMPLE_INPUT, return_tensors='pt').to('cuda')
with torch.inference_mode():
    # Do unbenched forward passes to control for potential caching effects.
    for _ in range(10):
        bettertransformer_model(**input_tokens)
        sdpa_model(**input_tokens)

    # Benchmark the models.
    %timeit bettertransformer_model(**input_tokens)
    %timeit sdpa_model(**input_tokens)
```
On my 4090, BetterTransformer achieves `1.93 ms ± 104 μs` and SDPA achieves `3.64 ms ± 259 μs`. BetterTransformer is almost 2x faster (1.88x)...
I have found both training and inference to be *significantly* faster with BetterTransformer enabled, even compared to SDPA and flash attention 2. I believe this is because of how it fuses layers into a single encoder block. If that functionality and its associated performance gains are ever incorporated into Hugging Face, I'd be fine with having BetterTransformer deprecated; until then, BetterTransformer's removal would make my code and the code of other users of BetterTransformer significantly slower.
### Your contribution
This request. | Feature request | low | Major |
2,621,107,256 | transformers | Vision (Auto)Processor multiple images finetuning example. | ### Feature request
Is it possible to upload an example of how to finetune PaLIGemma on multi-image inputs?
Something similar to [multi-image-inference](https://huggingface.co/docs/transformers/main/model_doc/paligemma#multi-image-inference), which shows how to perform multi-image inference over PaLIGemma.
### Motivation
enabling finetuning of multi-image PaLIGemma
### Your contribution
. | Examples,Feature request,Multimodal | low | Minor |
2,621,150,605 | langchain | PydanticUserError for OpenAI LLMs | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llm = OpenAI(
    api_key=settings.openai_api_key,
    verbose=True,
    temperature=0,
    model_name="gpt-4",
)
llm.invoke("....")
```
### Error Message and Stack Trace (if applicable)
pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
For further information visit https://errors.pydantic.dev/2.9/u/root-validator-pre-skip
### Description
I am really unhappy with this library. Let me put it that way. Using this library really makes me question my potential as a developer!
Whenever I start a new task with this library, I run into some import statement issues, some pydantic errors, and what not!
One day I need to import PromptTemplate from langchain.prompts; the next day I have to import it from langchain_core.prompts. The documentation is not at all helpful with these changes.
With that rant out of the way, let me explain my issue. The bot will probably just tell me to downgrade my pydantic version, but I will still try to explain.
I am trying to use the invoke method on the OpenAI instance and getting a PydanticUserError. The code and the error message are attached.
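For what it's worth, the error in the trace can be reproduced with pydantic alone, without LangChain. The sketch below mimics what a pre-pydantic-2 release such as the installed `langchain==0.0.27` presumably does internally: declaring a v1-style `@root_validator`, which pydantic 2 rejects at class-creation time (note the mismatch in the environment — `langchain==0.0.27` with `pydantic==2.9.2`):

```python
from pydantic import BaseModel, PydanticUserError, root_validator

err_code = None
try:
    class OldStyleModel(BaseModel):
        value: int = 0

        @root_validator  # pre=False and no skip_on_failure=True -> rejected by pydantic 2
        def check(cls, values):
            return values
except PydanticUserError as err:
    err_code = err.code

print(err_code)  # 'root-validator-pre-skip'
```

So the error is raised while the old langchain package builds its models against pydantic 2, before any of my own code runs.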
### System Info
alembic==1.13.3
amqp==5.2.0
annotated-types==0.7.0
anyio==4.6.0
astroid==3.3.5
billiard==4.2.1
black==24.10.0
boto3==1.35.37
botocore==1.35.37
celery==5.4.0
certifi==2024.8.30
cfgv==3.4.0
charset-normalizer==3.4.0
click==8.1.7
click-didyoumean==0.3.1
click-plugins==1.1.1
click-repl==0.3.0
colorama==0.4.6
dill==0.3.9
distlib==0.3.9
distro==1.9.0
dnspython==2.7.0
ecdsa==0.19.0
email_validator==2.2.0
fastapi==0.115.0
fastapi-cli==0.0.5
filelock==3.16.1
flake8==7.1.1
flower==2.0.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httptools==0.6.1
httpx==0.27.2
humanize==4.11.0
identify==2.6.1
idna==3.10
isort==5.13.2
Jinja2==3.1.4
jiter==0.6.1
jmespath==1.0.1
jose==1.0.0
jsonpatch==1.33
jsonpointer==3.0.0
kombu==5.4.2
langchain==0.0.27
langchain-core==0.3.13
langchain-openai==0.2.2
langsmith==0.1.137
Mako==1.3.5
markdown-it-py==3.0.0
MarkupSafe==3.0.1
mccabe==0.7.0
mdurl==0.1.2
mypy-extensions==1.0.0
nodeenv==1.9.1
numpy==2.1.2
openai==1.51.2
orjson==3.10.10
packaging==24.1
pandas==2.2.3
pathspec==0.12.1
pika==1.3.2
pipenv==2024.1.0
platformdirs==4.3.6
pre_commit==4.0.1
prometheus_client==0.21.0
prompt_toolkit==3.0.48
psycopg2==2.9.9
pyasn1==0.6.1
pycodestyle==2.12.1
pydantic==2.9.2
pydantic-settings==2.6.0
pydantic_core==2.23.4
pyflakes==3.2.0
Pygments==2.18.0
pylint==3.3.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-jose==3.3.0
python-multipart==0.0.12
pytz==2024.2
PyYAML==6.0.2
regex==2024.9.11
requests==2.32.3
requests-toolbelt==1.0.0
rich==13.9.2
rsa==4.9
s3transfer==0.10.3
setuptools==75.1.0
shellingham==1.5.4
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.35
sqlmodel==0.0.22
starlette==0.38.6
tenacity==9.0.0
tiktoken==0.8.0
tomlkit==0.13.2
tornado==6.4.1
tqdm==4.66.5
typer==0.12.5
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.31.0
vine==5.1.0
virtualenv==20.27.0
watchfiles==0.24.0
wcwidth==0.2.13
websockets==13.1
| Ɑ: core | low | Critical |
2,621,208,563 | pytorch | PCH build fail with sccache-v0.8.2 | ### 🐛 Describe the bug
Builds fail with a weird error (see [here](https://github.com/pytorch/pytorch/actions/runs/11568790721/job/32213593951)):
```
MakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/RuyUtils.cpp.o: file not recognized
```
The failure occurs when the given file is generated by
```
/opt/cache/bin/c++ -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_ENABLE_LLVM -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/var/lib/jenkins/workspace/build/aten/src -I/var/lib/jenkins/workspace/aten/src -I/var/lib/jenkins/workspace/build -I/var/lib/jenkins/workspace -I/var/lib/jenkins/workspace/cmake/../third_party/benchmark/include -I/opt/llvm/include -I/var/lib/jenkins/workspace/third_party/onnx -I/var/lib/jenkins/workspace/build/third_party/onnx -I/var/lib/jenkins/workspace/nlohmann -I/var/lib/jenkins/workspace/torch/csrc/api -I/var/lib/jenkins/workspace/torch/csrc/api/include -I/var/lib/jenkins/workspace/caffe2/aten/src/TH -I/var/lib/jenkins/workspace/build/caffe2/aten/src/TH -I/var/lib/jenkins/workspace/build/caffe2/aten/src -I/var/lib/jenkins/workspace/build/caffe2/../aten/src -I/var/lib/jenkins/workspace/torch/csrc -I/var/lib/jenkins/workspace/third_party/miniz-2.1.0 -I/var/lib/jenkins/workspace/third_party/kineto/libkineto/include -I/var/lib/jenkins/workspace/third_party/kineto/libkineto/src -I/var/lib/jenkins/workspace/third_party/cpp-httplib -I/var/lib/jenkins/workspace/aten/src/ATen/.. -I/var/lib/jenkins/workspace/third_party/FXdiv/include -I/var/lib/jenkins/workspace/c10/.. 
-I/var/lib/jenkins/workspace/third_party/pthreadpool/include -I/var/lib/jenkins/workspace/third_party/cpuinfo/include -I/var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/var/lib/jenkins/workspace/third_party/NNPACK/include -I/var/lib/jenkins/workspace/third_party/fbgemm/include -I/var/lib/jenkins/workspace/third_party/fbgemm -I/var/lib/jenkins/workspace/third_party/fbgemm/third_party/asmjit/src -I/var/lib/jenkins/workspace/third_party/ittapi/src/ittnotify -I/var/lib/jenkins/workspace/third_party/FP16/include -I/var/lib/jenkins/workspace/third_party/tensorpipe -I/var/lib/jenkins/workspace/build/third_party/tensorpipe -I/var/lib/jenkins/workspace/third_party/tensorpipe/third_party/libnop/include -I/var/lib/jenkins/workspace/third_party/fmt/include -I/var/lib/jenkins/workspace/build/third_party/ideep/mkl-dnn/include -I/var/lib/jenkins/workspace/third_party/ideep/mkl-dnn/src/../include -I/var/lib/jenkins/workspace/third_party/flatbuffers/include -isystem /var/lib/jenkins/workspace/build/third_party/gloo -isystem /var/lib/jenkins/workspace/cmake/../third_party/gloo -isystem /var/lib/jenkins/workspace/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /var/lib/jenkins/workspace/cmake/../third_party/googletest/googlemock/include -isystem /var/lib/jenkins/workspace/cmake/../third_party/googletest/googletest/include -isystem /var/lib/jenkins/workspace/third_party/protobuf/src -isystem /opt/conda/envs/py_3.9/include -isystem /var/lib/jenkins/workspace/third_party/XNNPACK/include -isystem /var/lib/jenkins/workspace/third_party/ittapi/include -isystem /var/lib/jenkins/workspace/cmake/../third_party/eigen -isystem /var/lib/jenkins/workspace/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /var/lib/jenkins/workspace/third_party/ideep/include -isystem 
/var/lib/jenkins/workspace/INTERFACE -isystem /var/lib/jenkins/workspace/third_party/nlohmann/include -isystem /var/lib/jenkins/workspace/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Werror -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -DMKL_HAS_SBGEMM -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -DASMJIT_STATIC -fopenmp -Winvalid-pch -include /var/lib/jenkins/workspace/build/caffe2/CMakeFiles/torch_cpu.dir/cmake_pch.hxx -o CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/RuyUtils.cpp.o -c /var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/RuyUtils.cpp
```
And the resulting object file is identified as
```
caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/RuyUtils.cpp.o: GCC precompiled header (version 014) for C++
```
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged,module: third_party | low | Critical |
2,621,213,185 | next.js | "use cache" with vercel otel - used `Math.random()` outside of `"use cache"` | ### Link to the code that reproduces this issue
https://github.com/JamesRobertWiseman/nextjs-otel-use-cache
### To Reproduce
1. Start the application in development (next dev)
2. Visit the home page of the application
3. Error will be present.
Error:
<img width="972" alt="Screenshot 2024-10-29 at 14 54 02" src="https://github.com/user-attachments/assets/ff82b450-e0bc-4528-9a66-349aa8f7957f">
```log
[ Server ] Error: Route "/" used `Math.random()` outside of `"use cache"` and without explicitly calling `await connection()` beforehand. See more info here: https://nextjs.org/docs/messages/next-prerender-random
------------------
resolveErrorDev
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
getOutlinedModel
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
parseModelString
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
Array.eval
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
------------------
resolveConsoleEntry
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
processFullStringRow
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
processFullBinaryRow
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
progress
./node_modules/.pnpm/next@15.0.2-canary.10_@opentelemetry+api@1.9.0_react-dom@19.0.0-rc-02c0e824-20241028_react@19_5el7z43pxvlkdtwozw72p3sfgu/node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js
```
### Current vs. Expected behavior
### Current:
Using `dynamicIO` and `@vercel/otel` results in a `Math.random()` used outside of `"use cache"` error.
### Expected:
No error should appear; the compiler should ignore `Math.random()` calls made from instrumentation.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #26~22.04.1-Ubuntu SMP Thu Jul 11 22:33:04 UTC 2024
Available memory (MB): 7930
Available CPU cores: 2
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: 9.11.0
Relevant Packages:
next: 15.0.2-canary.10 // Latest available version is detected (15.0.2-canary.10).
eslint-config-next: N/A
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Instrumentation, Performance
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Performance,Instrumentation | low | Critical |
2,621,259,505 | PowerToys | remap shortcut not working | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Remapping a key or a shortcut still does not work.
### ✔️ Expected Behavior
fix this bug
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Critical |
2,621,292,809 | vscode | Hovering warning icon in terminal tab is difficult to action | Repro:
1. Build and debug terminal-sample
2. Run Terminal API: update or clear environment commands, a ⚠ should show
3. Hover it, try move the mouse into the hover

A solution after https://github.com/microsoft/vscode/pull/232435 would be for the warning icon to have its own hover | bug,workbench-hover,terminal-tabs | low | Critical |
2,621,364,200 | pytorch | When I use register_full_backward_hook and register_forward_hook on nn.MultiheadAttention, MultiheadAttention executes the wrong conditional branch and gets wrong backward grads. | ### 🐛 Describe the bug
```python
self.attn = nn.MultiheadAttention(
    embed_dim=config.n_embd,
    num_heads=config.n_head,
    dropout=config.attention_dropout,
    batch_first=True,
)
self.attn.name = "attn"
self.attn.opname = "mha"
# self.attn.out_proj.weight
self.attn.register_full_backward_hook(hook_backward_function)
# This forward hook makes attn take the wrong (non-self-attention) branch and produce wrong grads:
self.attn.register_forward_hook(hook_forward_function)
x, _ = self.attn(x, x, x, attn_mask=self.attention_mask, is_causal=True)
# hook_forward_function only collects info; it does not modify the data.
```
### Versions
PyTorch version: 2.4.0a0+gitee1b680
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 36
On-line CPU(s) list: 0-35
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz
Stepping: 4
CPU MHz: 1200.079
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 576 KiB
L1i cache: 576 KiB
L2 cache: 18 MiB
L3 cache: 24.8 MiB
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] optree==0.13.0
[pip3] torch==2.4.0a0+gitee1b680
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2024.2.2 pypi_0 pypi
[conda] mkl-static 2024.2.2 pypi_0 pypi
[conda] numpy 2.1.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.4.0a0+gitee1b680 dev_0 <develop>
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg | module: nn,triaged | low | Critical |
2,621,377,581 | next.js | Emotion compiler not properly working when using "paths" in TypeScript since 13.5.x | ### Link to the code that reproduces this issue
https://github.com/Itrulia/material-ui-next-nx
### To Reproduce
1. npm install && npx nx dev emotion-test
2. Open `http://localhost:3000`
3. Inspect the HTML output and see `.mui-fh6enu{color:pink;}.mui-fh6enu .undefined{color:#f00!important;}.mui-bdz642{color:pink;}`
4. Change next to `~13.4.19` and it works.
5. OR move
```
${StyledPageWebsite} {
color: #f00 !important;
}
```
to the app (for example index.ts) and it works again.
### Current vs. Expected behavior
The emotion plugin does not seem to be applied to a library that is referenced via the tsconfig paths. Component selectors by emotion thus don't work properly.
Expected (v13.4.x) vs Now (15.0.0)


### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.8.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: 14.2.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
SWC
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
When changing next to `~13.4.19` it works again; changing it to `~13.5.0`, it stops working.
At first I thought it was this change: https://github.com/vercel/next.js/commit/311eea4c6ab847548d81421a8edbf86ac77341f9
But this is actually setting it to `@emotion/react`:
```
console.error(isServerLayer, (jsConfig == null ? void 0 : (_jsConfig_compilerOptions4 = jsConfig.compilerOptions) == null ? void 0 : _jsConfig_compilerOptions4.jsxImportSource) ?? ((compilerOptions == null ? void 0 : compilerOptions.emotion) && !isServerLayer ? "@emotion/react" : "react"))
> false @emotion/react
``` | bug,SWC | low | Critical |
2,621,447,752 | ui | [bug]: onScroll prop should be passed to ScrollAreaPrimitive.Viewport by default | ### Describe the bug
All remaining props are passed from ScrollArea to ScrollAreaPrimitive.Root, however, onScroll should go on ScrollAreaPrimitive.Viewport.
Ideally, it should be passed to the Viewport by default to save people time figuring out why the onScroll prop is accepted on ScrollArea but has no effect.
For anyone running into this issue, it can be addressed by passing onScroll to ScrollAreaPrimitive.Viewport:
``` typescript
const ScrollArea = React.forwardRef<
React.ElementRef<typeof ScrollAreaPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof ScrollAreaPrimitive.Root>
>(({ className, children, onScroll, ...props }, ref) => (
<ScrollAreaPrimitive.Root
ref={ref}
className={cn("relative overflow-hidden", className)}
{...props}
>
<ScrollAreaPrimitive.Viewport className="h-full w-full rounded-[inherit]" onScroll={onScroll}>
{children}
</ScrollAreaPrimitive.Viewport>
<ScrollBar />
<ScrollAreaPrimitive.Corner />
</ScrollAreaPrimitive.Root>
))
```
Related issues: #1005 #1430
### Affected component/components
ScrollArea
### How to reproduce
1. Install the ScrollArea component
2. Use it on another component, passing an onScroll handler function
3. Scrolling will not trigger the handler. The only way to get it to trigger is to modify the ScrollArea component and pass the onScroll handler to the ScrollAreaPrimitive.Viewport component.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
shadcn@0.9.2
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,621,452,786 | godot | ClassDB::_instantiate_internal fails at runtime if a GDExtension node is used | ### Tested versions
4.3.dev 4.3.stable 4.4 beta
### System information
Windows 11 - Godot 4.3 - Forward+
### Issue description
Issue is caused by this [line of code](https://github.com/godotengine/godot/blob/08f9cba0fbf27f171dea55de6f8274928b9f0d84/core/object/class_db.cpp#L556).
All classes registered from GDExtension are marked as API_EDITOR and API_EDITOR_EXTENSION, which causes issues if you use packed scenes and instantiation at runtime.
### Steps to reproduce
When we create a node with GDExtension and register it, say `BoxNode3D`: if it exists in a scene and we pack the scene and instantiate it using `packed_scene->instantiate()`, it leads to
```
Class 'BoxNode3D' can only be instantiated by editor.
```
Meanwhile, we designed this node to work in-game as well; this is a very serious issue.
Steps to Reproduce:
1. Create a GDExtension and create a simple class driven from Node3D
2. Register class using `ClassDB::register_class`
3. Add an instance of your node to the scene.
4. Run a script, Get scene root and pack it to a packed scene.
5. Save it using `ResourceSaver`
6. Load it using `ResourceLoader` as a `PackedScene`
7. Instantiate packed scene.
8. You'll get the error that BoxNode3D can only be instantiated by editor
I did a test and commented check for `API_EDITOR_EXTENSION ` and it worked perfectly fine.
I think best we can do is to rename `API_EDITOR_EXTENSION ` to `API_EXTENSION` and skip the checks for extensions at runtime or extensions classes can be defined as EDITOR_ONLY, CORE_ONLY and EDITOR_AND_CORE but it requires more work and get stuff more complicated.
### Minimal reproduction project (MRP)
- No Project Needed - | needs testing,topic:gdextension | low | Critical |
2,621,493,293 | godot | Accessing untyped addon scripts causes "method not present on the inferred type 'Variant'" with "Unsafe Method Access" set to Error | ### Tested versions
4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Laptop GPU (NVIDIA; 31.0.15.4683) - AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 Threads)
### Issue description

Accessing untyped addon scripts causes "method not present on the inferred type 'Variant'" with "Unsafe Method Access" set to 'Error'.
This happens even though "Exclude Addons" is enabled.
We've run into this problem in our project with addons such as GUT, where we've had to either lower the warnings or try to fix the addons by making them typed.
### Steps to reproduce
Open the attached project and note the error. Note that if you disable the strict "Unsafe Method Access" setting, the code will run correctly. Also note I didn't actually make an addon; I just put the code in an "addons" folder, as all the "Exclude Addons" setting does is exclude files in that path from checks.
### Minimal reproduction project (MRP)
[type-checking-issues-with-addons.zip](https://github.com/user-attachments/files/17558650/type-checking-issues-with-addons.zip)
| discussion,topic:gdscript,documentation | low | Critical |
2,621,523,410 | tauri | [feat] Support multiple webviews on mobile platforms | ### Describe the problem
My mobile application needs to navigate to a third-party page (for authorization or reference information), and I need a way to add a "return button" for the user when they want to abort that process (and return to the application page).
Iframes don't work, since some third-party sites use the `X-FRAME-OPTIONS` or CSP header to prevent themselves from loading in iframes.
With multiple webviews, I can create a fixed navbar at the top of the screen to gain more control over navigation.
However, this is not feasible on mobile platforms, since multiple webviews are *available on desktop and with the unstable crate feature only*.
### Describe the solution you'd like
I would like to have multiwebview feature available on mobile platforms.
### Alternatives considered
Listen to `onBackPressed` on Android (snippet from https://github.com/tauri-apps/tauri/issues/8142#issuecomment-2418365660 ), but it's not applicable to iOS targets.
### Additional context
_No response_ | type: feature request | low | Minor |
2,621,530,532 | material-ui | [Autocomplete][JoyUI] Autocomplete bug with searching title but displaying value | ### Possible related issue
https://github.com/mui/material-ui/issues/38360
### Search keywords
autocomplete
### Latest version
- [X] I have tested the latest version
### Steps to reproduce
Link to live example: (required)
https://codesandbox.io/p/sandbox/c8v7xf
Steps:
1. Create list of options using format
countryCode = [
{ name: 'Afghanistan', code: 'AF', phoneCode: '+93' },
{ name: 'Albania', code: 'AL', phoneCode: '+355' },
{ name: 'Algeria', code: 'DZ', phoneCode: '+213' },
....
....
2. Create autocomplete and set attribute getOptionLabel={(option) => option.phoneCode}
3. Type in a country name, e.g. "France"; the options shown include a bunch of countries plus the filtered country (e.g. France) at the end of the list
I have noted there might be a related issue with wrong search results when there is a lot of data, according to issue https://github.com/mui/material-ui/issues/38360
### Current behavior
Autocomplete currently cannot accurately search the list of items when I want to display the selected value but search by label/title/name.
I'm using Autocomplete to search for a country code: the user should be able to search by typing the country name, but once selected, the country phone code should be displayed.
If I change getOptionLabel
from
getOptionLabel={(option) => option.phoneCode}
to
getOptionLabel={(option) => option.name} // name is the country name/label/title
then Autocomplete works, but I cannot achieve what I want with it.
### Expected behavior
Autocomplete can correctly search the list of items, and when an option is selected, the value (which is different from the label/title/name) is shown instead.
### Context
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
Tested on Chrome Version 129.0.6668.101 (Official Build) (64-bit)
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
System:
OS: Windows 11 10.0.22631
Binaries:
Node: 20.11.1 - C:\Program Files\nodejs\node.EXE
npm: 10.2.4 - C:\Program Files\nodejs\npm.CMD
pnpm: Not Found
Browsers:
Chrome: Not Found
Edge: Chromium (128.0.2739.79)
npmPackages:
@emotion/react: ^11.11.4 => 11.11.4
@emotion/styled: ^11.11.5 => 11.11.5
@mui/base: 5.0.0-beta.42
@mui/core-downloads-tracker: 6.0.0-dev.240424162023-9968b4889d
@mui/icons-material: ^5.15.20 => 5.15.20
@mui/joy: ^5.0.0-beta.36 => 5.0.0-beta.36
@mui/material: ^5.15.20 => 5.15.20
@mui/private-theming: 6.0.0-dev.20240529-082515-213b5e33ab
@mui/styled-engine: 6.0.0-dev.20240529-082515-213b5e33ab
@mui/system: 6.0.0-dev.240424162023-9968b4889d
@mui/types: 7.2.14
@mui/utils: 6.0.0-dev.20240529-082515-213b5e33ab
@mui/x-date-pickers: ^7.9.0 => 7.9.0
@types/react: ^18.3.3 => 18.3.3
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.4.3 => 5.4.5
```
</details>
**Search keywords**: autocomplete | on hold,package: joy-ui | low | Critical |
2,621,598,134 | godot | Godot Editor does not automatically connect to VCS | ### Tested versions
Reproducible in v4.4.dev3.official [f4af8201b]
Reproducible in v4.4.dev3.mono.official [f4af8201b]
Not reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.4.dev3.mono - Ubuntu 24.04.1 LTS 24.04 on Wayland - X11 display driver, Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Mesa Intel(R) UHD Graphics 630 (CFL GT2) - Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (16 threads)
### Issue description
Godot Git Plugin 3.1.1 https://github.com/godotengine/godot-git-plugin/commit/f6002fab42f904e6799c1574d04b7a9f825db49c
When I launch the Godot Editor, the Godot Git Plugin is not connected (Commit tab not present in right-hand-side dock), and I have to connect it with:
```
Project / Version Control / Version Control Settings... / Connect to VCS / On
```
Then the plugin works normally.
This is the content of the Output pane when the editor is launched:
```
Godot Engine v4.4.dev3.mono.official (c) 2007-present Juan Linietsky, Ariel Manzur & Godot Contributors.
--- Debug adapter server started on port 6006 ---
--- GDScript language server started on port 6005 ---
Cannot get class 'GitPlugin'.
Received a nullptr VCS extension instance during construction.
export_plugin.gd/_enter_tree(), version='2024-10-29 10:51:10 -0400'
```
The project uses only GDScript. I'm using the mono edition only because I wanted to experiment with using C# in Godot. I could try the regular non-mono edition if you think that would be helpful in isolating the issue.
### Steps to reproduce
(See above)
### Minimal reproduction project (MRP)
I will attempt to provide an MRP if others cannot reproduce this issue. | topic:editor,topic:plugin,needs testing | low | Critical |
2,621,598,858 | godot | Setter not called during duplicate for an exported property referencing an instance of a custom class | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 11 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 Ti (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-9700F CPU @ 3.00GHz (8 Threads)
### Issue description
If a node includes an exported property that references a child defined as a custom class, the setter is called as expected when the node is created (the first time, or when reading a scene). However, when duplicating a node, either through the editor GUI or in GDScript, the setter is not called for such a child defined as a custom class.
### Steps to reproduce
### Create a custom class, CustomClass, by extending a built-in class, for example:
```
@tool
extends Path3D
class_name CustomClass
```
### Create a second script to apply to a Path3D node
```
@tool
extends Path3D
```
### Create a 3D scene, add a Node3D.
Add a CustomClass as a child node; also add a further child node based on the built-in class (Path3D in this case) and attach the Path3D script to it. Add the following script to the Node3D to display the property values:
```
@tool
extends Node3D
@export var path3d:Path3D
@export var custom_class:CustomClass
@export var path3d_set:Path3D:
set(value):
path3d_set = value
print( "Set path3d_set ",path3d_set,"\n")
@export var custom_class_set:CustomClass:
set(value):
custom_class_set = value
print( "Set custom_class_set ",custom_class_set,"\n")
func _init() -> void:
print( "Init .............","\n",
"Path3D ",path3d,"\n",
"CustomClass ",custom_class,"\n",
"Path3D_set ",path3d_set,"\n",
"CustomClass_set ",custom_class_set,"\n")
func _ready() -> void:
print( "Ready .............","\n",
"Path3D ",path3d,"\n",
"CustomClass ",custom_class,"\n",
"Path3D_set ",path3d_set,"\n",
"CustomClass_set ",custom_class_set,"\n")
```
### Assign the CustomClass and Path3D nodes to the respective properties in the editor.
Then reload the scene; this gives the following output. Note that the exported property `custom_class` is defined as expected.
> Init .............
> Path3D \<null\>
> CustomClass \<null\>
> Path3D_set \<null\>
> CustomClass_set \<null\>
>
> Set path3d_set Path3D:<Path3D#2953125619440>
>
> Set custom_class_set CustomClass:<Path3D#2953142401043>
>
> Ready .............
> Path3D Path3D:<Path3D#2953125619440>
> CustomClass CustomClass:<Path3D#2953142401043>
> Path3D_set Path3D:<Path3D#2953125619440>
> CustomClass_set CustomClass:<Path3D#2953142401043>
### Duplicate the above node
Either in the GUI, or by attaching the script below to the root node of the scene, saving and running the project
```
@tool
extends Node3D
func _ready() -> void:
var new_node = $Node3D.duplicate()
add_child(new_node)
new_node.set_owner(get_tree().get_edited_scene_root())
```
### The output will be as below
> Init .............
> Path3D \<null\>
> CustomClass \<null\>
> Path3D_set \<null\>
> CustomClass_set \<null\>
>
> Set path3d_set Path3D:<Path3D#2953192733135>
>
> Ready .............
> Path3D Path3D:<Path3D#2953192733135>
> CustomClass \<null\>
> Path3D_set Path3D:<Path3D#2953192733135>
> CustomClass_set \<null\>
### As is demonstrated, the CustomClass setter is not called during the duplicate process
The value of the property remains `<null>` even though it is defined in the original node, yet the exported property referencing a built-in class is duplicated correctly.
### Minimal reproduction project (MRP)
[CustomClassExportDuplicate.zip](https://github.com/user-attachments/files/17559367/CustomClassExportDuplicate.zip)
| discussion,topic:core | low | Minor |
2,621,641,614 | PowerToys | When you redefine the color picker shortcut (more than just color picker), the changed shortcut is not displayed in the dashboard | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
ColorPicker
### Steps to reproduce

### ✔️ Expected Behavior
I hope you can optimize this. Thank you.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Help Wanted,Product-Settings,Cost-Small,Status-Reproducible | low | Minor |
2,621,757,331 | flutter | [Impeller] libImpeller: Provide guidance on text decorations. | Underlines, strikethroughs, and such.
cc @oreflow | P3,e: impeller,team-engine,triaged-engine,e: libimpeller | low | Minor |
2,621,766,886 | Python | Add ART1 algorithm in neural network | ### Feature description
### Description:
This issue aims to implement an Adaptive Resonance Theory (ART1) algorithm for binary data clustering. ART1 is well-suited for unsupervised learning with binary inputs, using a vigilance parameter to determine the clustering threshold. This model will enhance the framework's clustering capabilities, allowing for pattern recognition in binary data.
### Goals:
- Implement an ART1 class with training and prediction functionalities.
- Allow the user to specify the vigilance parameter to control cluster formation.
- Include a similarity calculation and weight adjustment function.
### Requirements:
- Initialization: the ART1 class should initialize with num_features and vigilance.
- Training: implement a train(data) method to cluster binary data based on the vigilance parameter.
- Prediction: add a predict(x) method to classify new input into an existing cluster or mark it as a new cluster if it does not match any.
- Documentation: include docstrings and usage examples to clarify the purpose of each method.
- Testing: provide example usage for verification and add unit tests to confirm functionality. | enhancement | medium | Minor |
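The requirements above can be sketched in Python. This is a minimal, hedged proposal: the class and method names follow the issue's wording, but the match criterion (|x ∧ w| / |x|) and the fast-learning update (w ← w ∧ x) are one common ART1 formulation, not a mandated design.

```python
import numpy as np


class ART1:
    """Minimal ART1 clusterer for binary vectors (illustrative sketch)."""

    def __init__(self, num_features: int, vigilance: float = 0.7):
        assert 0.0 < vigilance <= 1.0, "vigilance must be in (0, 1]"
        self.num_features = num_features
        self.vigilance = vigilance
        self.weights = []  # one binary prototype (np.ndarray) per cluster

    def _match(self, x, w) -> float:
        # Fraction of the input preserved by the prototype.
        return np.sum(np.logical_and(x, w)) / max(np.sum(x), 1)

    def predict(self, x) -> int:
        """Return the index of the first resonating cluster, or -1 if none."""
        x = np.asarray(x, dtype=bool)
        for j, w in enumerate(self.weights):
            if self._match(x, w) >= self.vigilance:
                return j
        return -1

    def train(self, data) -> list:
        """Cluster each binary pattern; commit a new cluster on mismatch."""
        labels = []
        for x in data:
            x = np.asarray(x, dtype=bool)
            j = self.predict(x)
            if j == -1:                       # no resonance: commit new cluster
                self.weights.append(x.copy())
                j = len(self.weights) - 1
            else:                             # fast learning: intersect
                self.weights[j] = np.logical_and(self.weights[j], x)
            labels.append(j)
        return labels
```

With `vigilance=0.5`, overlapping patterns such as `[1,1,0,0]` and `[1,1,1,0]` fall into one cluster while a disjoint pattern starts its own; raising the vigilance toward 1.0 splits clusters further apart.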
2,621,767,104 | pytorch | find_spec does something weird to Python Path when loading modules | ### 🐛 Describe the bug
I tried adding a top-level logging import to the torch._dynamo module. When I did this and then attempted TORCH_LOGS=dynamo, I end up with this error:
```
(/home/ezyang/local/b/pytorch-env) [ezyang@devgpu005.nha1 ~/local/b/pytorch (9b84074e)]$ TORCH_LOGS=dynamo,dynamic,+torch._dynamo.pgo python test/dynamo/test_pgo.py
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/dynamo/test_pgo.py", line 5, in <module>
import torch.compiler.config
File "/data/users/ezyang/b/pytorch/torch/__init__.py", line 2666, in <module>
_logging._init_logs()
File "/data/users/ezyang/b/pytorch/torch/_logging/_internal.py", line 927, in _init_logs
_update_log_state_from_env()
File "/data/users/ezyang/b/pytorch/torch/_logging/_internal.py", line 737, in _update_log_state_from_env
log_state = _parse_log_settings(log_setting)
File "/data/users/ezyang/b/pytorch/torch/_logging/_internal.py", line 716, in _parse_log_settings
elif _is_valid_module(name):
File "/data/users/ezyang/b/pytorch/torch/_logging/_internal.py", line 729, in _is_valid_module
spec = importlib.util.find_spec(qname)
File "/home/ezyang/local/b/pytorch-env/lib/python3.10/importlib/util.py", line 94, in find_spec
parent = __import__(parent_name, fromlist=['__path__'])
File "/data/users/ezyang/b/pytorch/torch/_dynamo/__init__.py", line 45, in <module>
log = logging.getLogger(__name__)
AttributeError: module 'torch._dynamo.logging' has no attribute 'getLogger'. Did you mean: 'get_loggers'?
```
Somehow, the `logging` import got pointed to `torch._dynamo.logging` instead of the top-level `logging` module.
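The clobbering can be reproduced without torch: when a package contains a submodule named `logging` (as `torch/_dynamo/logging.py` does), importing that submodule rebinds the `logging` attribute on the parent package, shadowing the stdlib module that the package's `__init__.py` imported. A minimal standalone reproduction follows; the package name `demo_pkg_shadow` is made up for the demo.

```python
import os
import sys
import tempfile

# Build a throwaway package with a submodule named "logging",
# mimicking the torch/_dynamo/logging.py layout.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "demo_pkg_shadow")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("import logging\nlog = logging.getLogger(__name__)\n")
with open(os.path.join(pkg_dir, "logging.py"), "w") as f:
    f.write("def get_loggers():\n    return []\n")
sys.path.insert(0, root)

import logging as std_logging
import demo_pkg_shadow as pkg

# Right after import, the package's global "logging" is the stdlib module.
assert pkg.logging is std_logging

# Importing the submodule sets it as an attribute on the parent package,
# clobbering the stdlib binding inside demo_pkg_shadow's namespace.
import demo_pkg_shadow.logging
assert pkg.logging is not std_logging
assert not hasattr(pkg.logging, "getLogger")  # same symptom as the traceback
```

In the torch case, `importlib.util.find_spec` triggers exactly such a submodule import via `__import__(parent_name, fromlist=['__path__'])` during the partially-initialized circular import, so any module-level `logging.getLogger(...)` in `torch/_dynamo/__init__.py` can end up resolving against the submodule.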
### Versions
main
cc @albanD | triaged,module: python frontend | low | Critical |