| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,571,990,123 | PowerToys | Peek: Expand Navigation Keyboard Shortcuts | ### Description of the new feature / enhancement
Current shortcuts include the Up, Down, Left and Right keys, which are used interchangeably for navigating through files and through file content.
Adding support for CTRL + arrow keys would double the number of available shortcuts and make navigation easier and more functional.
For instance, single arrow keys could be used strictly for file navigation, e.g., moving from one file to another, while CTRL+arrow keys could control navigation through the file content.
A switch could be added to the Peek settings to let the user swap behaviors, making single arrows control content navigation instead, for example.
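To illustrate the proposal, here is a minimal sketch of the binding table and the settings switch (all names and structures are invented for illustration; this is not PowerToys' actual API):

```python
# Hypothetical binding table for the proposed Peek shortcut scheme.
# "file" = move between files, "content" = scroll/seek inside the preview.
DEFAULT_BINDINGS = {
    ("Left",):         ("file", "previous"),
    ("Right",):        ("file", "next"),
    ("Ctrl", "Left"):  ("content", "back"),     # scroll up / seek backwards
    ("Ctrl", "Right"): ("content", "forward"),  # scroll down / seek forwards
}

def resolve(keys, swap_behaviors=False):
    """Map a key chord to (target, action).

    swap_behaviors models the proposed settings switch that trades
    the file/content roles of plain vs. CTRL-modified arrows.
    """
    target, action = DEFAULT_BINDINGS[tuple(keys)]
    if swap_behaviors:
        target = "content" if target == "file" else "file"
    return target, action
```

With the switch off, `resolve(("Right",))` moves to the next file; with it on, the same chord scrolls the content instead.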
### Scenario when this would be used?
Files such as PDFs, text files and videos require the user to be able to browse through the content during preview. This is already possible, but it introduces additional steps to mitigate the absence of dedicated shortcuts: as of now, switching between file navigation and content navigation requires using the Tab key to move keyboard focus across the Peek UI. Far from optimal, but possible.
With a wider set of shortcuts, single arrow keys could control file navigation, while CTRL+arrow keys controlled the file content, such as scrolling through a TXT or PDF file, or controlling playback for video files.
For instance, CTRL+Right or CTRL+Down could be used to jump forward (skip N seconds) or scroll down in a text file, while CTRL+Left and CTRL+Up could be used to jump backwards or scroll up.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,571,990,923 | create-react-app | create-expo-app fails | I get this error when trying to create a new expo-app
5183 verbose pkgid babel-plugin-react-compiler@0.0.0-experimental-7779988-20241007
5184 error code 1
5185 error path C:\Users\isaac\Documents\projects\Payjon\node_modules\babel-plugin-react-compiler
5186 error command failed
5187 error command C:\windows\system32\cmd.exe /d /s /c ./scripts/link-react-compiler-runtime.sh
5188 error '.' não é reconhecido como um comando interno
5188 error ou externo, um programa operável ou um arquivo em lotes.
(translation: "'.' is not recognized as an internal or external command, an operable program or batch file.")
5189 silly unfinished npm timer reify 1728359810537
5190 silly unfinished npm timer reify:build 1728359826985
5191 silly unfinished npm timer build 1728359826987
5192 silly unfinished npm timer build:deps 1728359826987
5193 silly unfinished npm timer build:run:postinstall 1728359827074
5194 silly unfinished npm timer build:run:postinstall:node_modules/babel-plugin-react-compiler 1728359827074
5195 verbose cwd C:\Users\isaac\Documents\projects\Payjon
5196 verbose os Windows_NT 10.0.22631
5197 verbose node v20.16.0
5198 verbose npm v10.9.0
5199 verbose exit 1
5200 verbose code 1
5201 error A complete log of this run can be found in: C:\Users\isaac\AppData\Local\npm-cache\_logs\2024-10-08T03_56_50_130Z-debug-0.log | needs triage | low | Critical |
2,572,026,800 | vscode | Help -> Report Issue -> Create on GitHub silently fails | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🔮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🔪 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
#### VS Code Version:
Version: 1.94.0
Commit: d78a74bcdfad14d5d3b1b782f87255d802b57511
Date: 2024-10-02T13:08:12.626Z
Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Code/1.94.0
Chrome/124.0.6367.243 Electron/30.5.1 Safari/537.36
#### OS Version:
15.0.1 (24A348)
#### Steps to Reproduce:
1. Open the Report Issue window via `Help` -> `Report Issue`
2. Fill in the fields
3. Press `Create on GitHub`
Nothing happens. Nada. Zilch.
#### **Log (best I could do with `log show`):**
[vscode-log.txt](https://github.com/user-attachments/files/17287841/vscode-log.txt)
#### **Screen recording:**
https://github.com/user-attachments/assets/2086de0f-d120-493b-ae84-13e52efe526c
| bug,issue-reporter | low | Critical |
2,572,040,467 | TypeScript | Huge jump in tsc memory usage from 5.5.4 -> 5.6.2 | ### 🔎 Search Terms
"transformTime time", "compile time"
### 🕗 Version & Regression Information
- This changed between versions 5.5.4 and 5.6.2
### ⏯ Playground Link
_No response_
### 💻 Code
`tsc` on 5.6.2 uses significantly more memory for unknown reasons; the TypeScript upgrade was the only dependency change. I've looked through various articles and guides on how to investigate this further, but so far I have not found a way to track down what is newly taking up so much more memory.
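As a generic starting point for the investigation (not a TypeScript-specific tool), one can measure the compiler subprocess's peak resident set size from a small wrapper and compare runs under 5.5.4 and 5.6.2. The `npx tsc` command in the comment is an assumption about the project's setup; note that `ru_maxrss` is KiB on Linux but bytes on macOS, and the `resource` module is Unix-only:

```python
import resource
import subprocess

def peak_child_rss(cmd):
    """Run cmd to completion and return the peak resident set size
    recorded for this process's children (KiB on Linux, bytes on macOS)."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# Assumed invocation; adjust to the project's actual build command:
# print(peak_child_rss(["npx", "tsc", "--noEmit", "--extendedDiagnostics"]))
```

Running the same wrapper against both compiler versions gives a single comparable number per run, independent of what inside `tsc` is allocating.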
### 🙁 Actual behavior
After upgrading to 5.6, the memory usage of my `tsc` run is more than double, with no other changes to my `tsconfig.json` file or the underlying code.
*(screenshot of the diagnostics after the upgrade attached in the original issue)*
### 🙂 Expected behavior
Before upgrading to 5.6, running `tsc` on my project produced the following diagnostics.
*(screenshot of the diagnostics before the upgrade attached in the original issue)*
### Additional information about the issue
_No response_ | Needs More Info | medium | Major |
2,572,043,337 | vscode | Selection menu is permanently displayed | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🔮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🔪 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
#### VS Code Version:
Version: 1.94.0
Commit: d78a74bcdfad14d5d3b1b782f87255d802b57511
Date: 2024-10-02T13:08:12.626Z
Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Code/1.94.0
Chrome/124.0.6367.243 Electron/30.5.1 Safari/537.36
#### OS Version:
15.0.1 (24A348)
I am not sure if this is a VS Code bug or if it belongs to macOS, but I managed somehow to get the Selection menu drop-down to be *permanently* displayed (not entirely sure how, but I _think_ I was holding the mouse down while the File -> Open Recent menu was open, and then I tried to click an item but missed and hit an area outside any actual menu item).
The menu remains displayed in defiance of:
- Escape
- Clicking the Selection top-level menu several times to toggle it on/off
- Right-clicking the VS Code icon in the Dock and doing "Show all windows"
- Clicking any of the items in the menu drop-down
- Opening/closing other top-level menus
- Cycling through windows with Cmd+Tab
Whenever VS Code is the foreground app, that menu is visible. I am going to do a sysdiagnose just in case. Please send me a request for specific files from the sysdiagnose if any of them will help you. I will keep it around for a few weeks.
#### Spindump:
[vscode-spindump.txt](https://github.com/user-attachments/files/17287936/vscode-spindump.txt)
#### Screen recording:
https://github.com/user-attachments/assets/696dd342-11a2-473b-ba2d-f7d5974f7f2f
| bug,upstream,macos,menus | low | Critical |
2,572,044,704 | transformers | Add Loss Functions for QFormer Training in BLIP-2 Model (ITC, ITM, and ITG) | ### Feature request
I propose adding a loss calculation for QFormer training in the BLIP-2 model. Implementing this feature would allow fine-tuning the QFormer and language models for image-text retrieval and captioning tasks, which is crucial for practical applications.
### Motivation
I want to train the BLIP-2 model using the transformers library. In particular, the loss functions for Image-Text Contrastive (ITC), Image-Text Matching (ITM), and Image-grounded Text Generation (ITG) are not included, so users have to implement them manually.
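To make the ask concrete, here is a minimal, framework-agnostic sketch of the ITC objective: symmetric softmax cross-entropy over a cosine-similarity matrix, as in CLIP-style training. It is illustrative only — the BLIP-2 paper additionally takes a max over query-token similarities, which is omitted here, and the function name and temperature are arbitrary:

```python
import math

def itc_loss(image_embs, text_embs, temperature=0.07):
    """Image-Text Contrastive loss sketch: matched pairs share an index."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    imgs = [norm(v) for v in image_embs]
    txts = [norm(v) for v in text_embs]
    n = len(imgs)
    # Scaled cosine-similarity matrix: rows = images, columns = texts.
    sim = [[sum(a * b for a, b in zip(imgs[i], txts[j])) / temperature
            for j in range(n)] for i in range(n)]

    def xent(rows):
        # Mean of -log softmax(row)[i] with the diagonal as the target.
        total = 0.0
        for i, row in enumerate(rows):
            m = max(row)
            logsum = m + math.log(sum(math.exp(x - m) for x in row))
            total += logsum - row[i]
        return total / len(rows)

    sim_t = [[sim[j][i] for j in range(n)] for i in range(n)]  # text -> image
    return 0.5 * (xent(sim) + xent(sim_t))
```

With matched embeddings the loss is near zero; with shuffled pairs it grows, which is the signal the fine-tuning would optimize.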
### Your contribution
I would like to contribute to this open-source project by implementing the loss functions. | Feature request,Multimodal | low | Minor |
2,572,071,224 | stable-diffusion-webui | [Feature Request]: Add Custom Notifications for All Tabs (Not Just Text2Img) | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
It would be helpful to have customizable notification sounds across all tabs in the WebUI, not just for Text2Img. This would allow users to set different sounds for processes like img2img, inpainting, or LoRA training, enhancing workflow by making it easier to identify when a task is done, even if they are multitasking or working in other tabs. This builds on the existing notification feature but adds more flexibility and customization.
### Proposed workflow
### How to Access and Use Customizable Notification Sounds Feature:
1. **Settings Menu:**
- Navigate to the **Settings** tab in the WebUI.
- Find a new section labeled **Notifications**.
2. **Enable Custom Sounds:**
- Toggle **Enable Custom Notification Sounds** to activate custom sounds for all tabs.
3. **Select Sounds for Each Tab:**
- Assign different sounds for **txt2img**, **img2img**, **inpainting**, **LoRA training**, etc.
- Choose or upload an audio file (e.g., .mp3, .wav) from a dropdown or upload option.
4. **Volume and Notification Options:**
- Adjust the volume for each sound.
- Option to play sounds even when the tab is not focused.
5. **Save Preferences:**
- Click **Save** to apply your custom settings across all tabs.
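As a sketch of what the saved preferences might look like (every key and file name below is invented for illustration, not an existing WebUI setting):

```python
# Invented settings shape for per-tab notification sounds.
NOTIFICATION_SETTINGS = {
    "enabled": True,
    "play_when_unfocused": True,
    "per_tab": {
        "txt2img":    {"sound": "sounds/txt2img-done.mp3",  "volume": 0.8},
        "img2img":    {"sound": "sounds/img2img-done.wav",  "volume": 0.6},
        "inpaint":    {"sound": "sounds/inpaint-done.wav",  "volume": 0.6},
        "train_lora": {"sound": "sounds/training-done.mp3", "volume": 1.0},
    },
}

def sound_for(tab):
    """Return (sound_path, volume) for a finished task on `tab`,
    or None when notifications are disabled or the tab has no sound."""
    if not NOTIFICATION_SETTINGS["enabled"]:
        return None
    entry = NOTIFICATION_SETTINGS["per_tab"].get(tab)
    return (entry["sound"], entry["volume"]) if entry else None
```

A completion handler for each tab would then just look up its own entry instead of sharing the single txt2img sound.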
### Additional information
_No response_ | enhancement | low | Minor |
2,572,096,045 | pytorch | torch.compile graph break: torch._dynamo.exc.Unsupported: __self__ mismatch for bound method | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn

def set_attrs_from_orig_model(cls_instance, mod, *func_names):
    cls_instance.__dict__.update(mod.__dict__)
    if func_names is not None:
        for func in func_names:
            setattr(cls_instance, func, getattr(mod, func))

class PatchedMyModule(nn.Module):
    def __init__(self, mod):
        super().__init__()
        set_attrs_from_orig_model(self, mod, "resolve_input")

    def forward(self, x):
        x = self.resolve_input(x)
        return x

class MyModule(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.linear = nn.Linear(in_features=input_dim, out_features=output_dim)

    def resolve_input(self, x):
        x = torch.nn.Dropout(0.1)(self.linear(x))
        return x

    def forward(self, x):
        x = self.linear(x)
        return x

module = MyModule(input_dim=1, output_dim=1)
patched_module = PatchedMyModule(module)
compiled_module = torch.compile(patched_module, fullgraph=True)
input_tensor = torch.tensor([1.], dtype=torch.float)
res = compiled_module(input_tensor)
```
This fails with: `torch._dynamo.exc.Unsupported: __self__ mismatch for bound method`
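A workaround that often avoids this class of graph break is to delegate through a stored reference instead of copying bound methods onto the wrapper, so each call's `__self__` matches the object that owns the method. The sketch below uses plain Python classes to show the shape; with PyTorch, `PatchedMyModule` would subclass `nn.Module` as in the repro above:

```python
class MyModule:
    def resolve_input(self, x):
        return x * 2

    def forward(self, x):
        return x

class PatchedMyModule:
    """Delegating wrapper: no setattr of bound methods.

    Copying mod.resolve_input onto the wrapper stores a bound method
    whose __self__ is `mod`, not the wrapper; tracers that specialize
    on __self__ can then bail out. Delegating through self.mod keeps
    every call's __self__ consistent with the owning object.
    """
    def __init__(self, mod):
        self.mod = mod

    def forward(self, x):
        return self.mod.resolve_input(x)
```

Whether this fully restores `fullgraph=True` compilation depends on the rest of the model, but it removes the bound-method copying that triggers the reported error.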
### Versions
torch==2.5.0.dev20240909+cpu
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames @rec | high priority,triaged,module: regression,oncall: pt2,module: dynamo,module: graph breaks | low | Critical |
2,572,099,480 | PowerToys | Mouse highlighter "Always highlight color" stays on after toggling opacity to zero again | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
When you change the opacity to 0 and back, the "Always highlight" color stays on screen (see the recording below).
https://github.com/user-attachments/assets/a8e37894-7ea4-4a5b-854e-baaacc77bec4
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Help Wanted,Priority-3,Product-Mouse Utilities,Status-Reproducible | low | Critical |
2,572,148,021 | transformers | Add support for Apple's Depth-Pro | ### Model description
**Depth Pro: Sharp Monocular Metric Depth in Less Than a Second.**
Depth Pro synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- Research Paper: [Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://arxiv.org/pdf/2410.02073)
- Authors: [Aleksei Bochkovskii](https://arxiv.org/search/cs?searchtype=author&query=Bochkovskii,+A), [Amaël Delaunoy](https://arxiv.org/search/cs?searchtype=author&query=Delaunoy,+A), and others
- Implementation: [apple/ml-depth-pro](https://github.com/apple/ml-depth-pro)
- Models Weights: [apple/DepthPro](https://huggingface.co/apple/DepthPro) | New model,Vision | low | Major |
2,572,157,145 | PowerToys | Workspaces disables virtual desktop navigation | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
When enabled, the Workspaces option disables the keyboard shortcuts for virtual desktops (Ctrl+Win+Left/Right Arrow). This makes using virtual desktops quite difficult, especially when there is a need to quickly cycle back and forth through them.
### ✔️ Expected Behavior
To still be able to use virtual desktop default navigation.
### ❌ Actual Behavior
Keyboard input appears not to work for the default virtual desktop navigation when PowerToys Workspaces is enabled. The workaround was to disable Workspaces.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,572,196,239 | flutter | iOS FlutterApplicationLifeCycleDelegate url callbacks are not called even when there are no other plugins | ### Steps to reproduce
1. Create a new ios only project `flutter create -e --platforms ios linking`
2. Copy the code from the example
3. Run the app on an iOS Simulator through XCode
4. See "Added delegate" printed in the console
5. Put the app in background
6. Run `xcrun simctl openurl booted "linking-app:action"`
7. See app open up but no logs printed
### Expected results
There should be logs indicating the delegate was called.
### Actual results
There are none.
### Code sample
<details open><summary>Code sample</summary>
```swift
// AppDelegate.swift
import Flutter
import UIKit

@main
@objc class AppDelegate: FlutterAppDelegate {
  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    if let registrar = self.registrar(forPlugin: "LinkingPlugin") {
      LinkingPlugin.register(with: registrar)
    }
    GeneratedPluginRegistrant.register(with: self)
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}
```
```swift
// Linking/LinkingPlugin.swift
import Flutter
import Foundation
import UIKit

class LinkingPlugin: NSObject, FlutterPlugin {
  public static func register(with registrar: FlutterPluginRegistrar) {
    let plugin = LinkingPlugin()
    registrar.addApplicationDelegate(plugin)
    print("Added delegate")
  }

  public func application(_ application: UIApplication, handleOpen url: URL) -> Bool {
    print("Receiving handleOpen \(url.path)")
    return false
  }

  public func application(_ application: UIApplication, open url: URL, sourceApplication: String, annotation: Any) -> Bool {
    print("Receiving open with source \(url.path)")
    return false
  }

  public func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([Any]) -> Void) -> Bool {
    print("Receiving userActivity \(userActivity.webpageURL?.path ?? "nil")")
    return false
  }

  public func application(
    _: UIApplication,
    open url: URL,
    options _: [UIApplication.OpenURLOptionsKey: Any] = [:]
  ) -> Bool {
    print("Receiving open \(url.absoluteString)")
    return false
  }
}
```
Add the following to the `Info.plist`:
```xml
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleTypeRole</key>
<string>Editor</string>
<key>CFBundleURLSchemes</key>
<array>
<string>linking-app</string>
</array>
</dict>
</array>
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale en)
    • Flutter version 3.24.3 on channel stable at /Users/User/Development/flutter
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
    • Engine revision 36335019a8
    • Dart version 3.5.3
    • DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    • Android SDK at /Users/User/Library/Android/sdk
    • Platform android-35, build-tools 34.0.0
    • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Build 16A242d
    • CocoaPods version 1.15.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.95.0-insider)
    • VS Code at /Applications/Visual Studio Code - Insiders.app/Contents
    • Flutter extension version 3.98.0
[✓] Connected device (3 available)
    • macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64
    • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64
    • Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.90
[✓] Network resources
    • All expected network resources are available.
```
</details>
| c: regression,platform-ios,engine,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.26 | low | Major |
2,572,211,695 | go | internal/coverage: internal error in coverage meta-data tracking: encountered bad pkgID; list of hard-coded runtime package IDs needs revising | ### Go version
golang:1.22-alpine
### Output of `go env` in your module/workspace:
```shell
Not sure the best way to do this from the Dockerfile in the github action?
```
### What did you do?
Ran the following github action:
https://github.com/joe-at-startupmedia/pmon3/blob/5cc0fc72173f6ec021074152bb4d09945cac2324/.github/workflows/testing.yml
From the following Dockerfile:
https://github.com/joe-at-startupmedia/pmon3/blob/5cc0fc72173f6ec021074152bb4d09945cac2324/Dockerfile
Using the following Makefile command:
```
PROJECT_PATH=/opt/pmon3 go test -v -run Test -p 1 -coverprofile=coverage.txt -covermode atomic -coverpkg=pmon3/cli,pmon3/cli/cmd,pmon3/cli/cmd/base,pmon3/cli/cmd/completion,pmon3/cli/cmd/del,pmon3/cli/cmd/desc,pmon3/cli/cmd/dgraph,pmon3/cli/cmd/drop,pmon3/cli/cmd/exec,pmon3/cli/cmd/export,pmon3/cli/cmd/group,pmon3/cli/cmd/group/assign,pmon3/cli/cmd/group/create,pmon3/cli/cmd/group/del,pmon3/cli/cmd/group/desc,pmon3/cli/cmd/group/drop,pmon3/cli/cmd/group/list,pmon3/cli/cmd/group/remove,pmon3/cli/cmd/group/restart,pmon3/cli/cmd/group/stop,pmon3/cli/cmd/init,pmon3/cli/cmd/kill,pmon3/cli/cmd/list,pmon3/cli/cmd/log,pmon3/cli/cmd/logf,pmon3/cli/cmd/reset,pmon3/cli/cmd/restart,pmon3/cli/cmd/stop,pmon3/cli/cmd/topn,pmon3/cli/output/group/list,pmon3/cli/output/process/desc,pmon3/cli/output/process/list,pmon3/cli/output/process/one,pmon3/conf,pmon3/pmond,pmon3/pmond/controller,pmon3/pmond/controller/base,pmon3/pmond/controller/base/del,pmon3/pmond/controller/base/exec,pmon3/pmond/controller/base/restart,pmon3/pmond/controller/base/stop,pmon3/pmond/controller/group,pmon3/pmond/db,pmon3/pmond/god,pmon3/pmond/model,pmon3/pmond/observer,pmon3/pmond/process,pmon3/pmond/repo ./test/e2e/...
```
### What did you see happen?
```
internal error in coverage meta-data tracking:
encountered bad pkgID: 0 at slot: 352 fnID: 2 numCtrs: 2
list of hard-coded runtime package IDs needs revising.
[see the comment on the 'rtPkgs' var in
<goroot>/src/internal/coverage/pkid.go]
registered list:
slot: 0 path='pmon3/pmond/controller/base'
slot: 1 path='pmon3/pmond/model'
slot: 2 path='pmon3/conf'
slot: 3 path='pmon3/cli'
slot: 4 path='pmon3/pmond'
slot: 5 path='pmon3/pmond/observer'
slot: 6 path='pmon3/cli/output/group/list'
slot: 7 path='pmon3/cli/output/process/list'
slot: 8 path='pmon3/cli/output/process/desc'
slot: 9 path='pmon3/cli/output/process/one'
slot: 10 path='pmon3/cli/cmd/base'
slot: 11 path='pmon3/cli/cmd/del'
slot: 12 path='pmon3/cli/cmd/desc'
slot: 13 path='pmon3/cli/cmd/export'
slot: 14 path='pmon3/cli/cmd/group/desc'
slot: 15 path='pmon3/cli/cmd/group/assign'
slot: 16 path='pmon3/cli/cmd/group/drop'
slot: 17 path='pmon3/cli/cmd/group/list'
slot: 18 path='pmon3/cli/cmd/group/create'
slot: 19 path='pmon3/cli/cmd/group/del'
slot: 20 path='pmon3/cli/cmd/group/remove'
slot: 21 path='pmon3/cli/cmd/group/stop'
slot: 22 path='pmon3/cli/cmd/list'
slot: 23 path='pmon3/cli/cmd/drop'
slot: 24 path='pmon3/cli/cmd/exec'
slot: 25 path='pmon3/cli/cmd/group/restart'
slot: 26 path='pmon3/cli/cmd/init'
slot: 27 path='pmon3/cli/cmd/kill'
slot: 28 path='pmon3/cli/cmd/reset'
slot: 29 path='pmon3/cli/cmd/topn'
slot: 30 path='pmon3/pmond/db'
slot: 31 path='pmon3/pmond/repo'
slot: 32 path='pmon3/pmond/process'
slot: 33 path='pmon3/pmond/controller/base/exec'
slot: 34 path='pmon3/pmond/controller/base/restart'
slot: 35 path='pmon3/pmond/controller/base/stop'
slot: 36 path='pmon3/pmond/controller/base/del'
slot: 37 path='pmon3/pmond/controller/group'
slot: 38 path='pmon3/pmond/controller'
slot: 39 path='pmon3/pmond/god'
remap table:
internal error in coverage meta-data tracking:
encountered bad pkgID: 0 at slot: 360 fnID: 3 numCtrs: 2
list of hard-coded runtime package IDs needs revising.
[see the comment on the 'rtPkgs' var in
<goroot>/src/internal/coverage/pkid.go]
registered list:
slot: 0 path='pmon3/pmond/controller/base'
slot: 1 path='pmon3/pmond/model'
slot: 2 path='pmon3/conf'
slot: 3 path='pmon3/cli'
slot: 4 path='pmon3/pmond'
slot: 5 path='pmon3/pmond/observer'
slot: 6 path='pmon3/cli/output/group/list'
slot: 7 path='pmon3/cli/output/process/list'
slot: 8 path='pmon3/cli/output/process/desc'
slot: 9 path='pmon3/cli/output/process/one'
slot: 10 path='pmon3/cli/cmd/base'
slot: 11 path='pmon3/cli/cmd/del'
slot: 12 path='pmon3/cli/cmd/desc'
slot: 13 path='pmon3/cli/cmd/export'
slot: 14 path='pmon3/cli/cmd/group/desc'
slot: 15 path='pmon3/cli/cmd/group/assign'
slot: 16 path='pmon3/cli/cmd/group/drop'
slot: 17 path='pmon3/cli/cmd/group/list'
slot: 18 path='pmon3/cli/cmd/group/create'
slot: 19 path='pmon3/cli/cmd/group/del'
slot: 20 path='pmon3/cli/cmd/group/remove'
slot: 21 path='pmon3/cli/cmd/group/stop'
slot: 22 path='pmon3/cli/cmd/list'
slot: 23 path='pmon3/cli/cmd/drop'
slot: 24 path='pmon3/cli/cmd/exec'
slot: 25 path='pmon3/cli/cmd/group/restart'
slot: 26 path='pmon3/cli/cmd/init'
slot: 27 path='pmon3/cli/cmd/kill'
slot: 28 path='pmon3/cli/cmd/reset'
slot: 29 path='pmon3/cli/cmd/topn'
slot: 30 path='pmon3/pmond/db'
slot: 31 path='pmon3/pmond/repo'
slot: 32 path='pmon3/pmond/process'
slot: 33 path='pmon3/pmond/controller/base/exec'
slot: 34 path='pmon3/pmond/controller/base/restart'
slot: 35 path='pmon3/pmond/controller/base/stop'
slot: 36 path='pmon3/pmond/controller/base/del'
slot: 37 path='pmon3/pmond/controller/group'
slot: 38 path='pmon3/pmond/controller'
slot: 39 path='pmon3/pmond/god'
remap table:
internal error in coverage meta-data tracking:
encountered bad pkgID: 0 at slot: 632 fnID: 1 numCtrs: 3
list of hard-coded runtime package IDs needs revising.
[see the comment on the 'rtPkgs' var in
<goroot>/src/internal/coverage/pkid.go]
registered list:
slot: 0 path='pmon3/pmond/controller/base'
slot: 1 path='pmon3/pmond/model'
slot: 2 path='pmon3/conf'
slot: 3 path='pmon3/cli'
slot: 4 path='pmon3/pmond'
slot: 5 path='pmon3/pmond/observer'
slot: 6 path='pmon3/cli/output/group/list'
slot: 7 path='pmon3/cli/output/process/list'
slot: 8 path='pmon3/cli/output/process/desc'
slot: 9 path='pmon3/cli/output/process/one'
slot: 10 path='pmon3/cli/cmd/base'
slot: 11 path='pmon3/cli/cmd/del'
slot: 12 path='pmon3/cli/cmd/desc'
slot: 13 path='pmon3/cli/cmd/export'
slot: 14 path='pmon3/cli/cmd/group/desc'
slot: 15 path='pmon3/cli/cmd/group/assign'
slot: 16 path='pmon3/cli/cmd/group/drop'
slot: 17 path='pmon3/cli/cmd/group/list'
slot: 18 path='pmon3/cli/cmd/group/create'
slot: 19 path='pmon3/cli/cmd/group/del'
slot: 20 path='pmon3/cli/cmd/group/remove'
slot: 21 path='pmon3/cli/cmd/group/stop'
slot: 22 path='pmon3/cli/cmd/list'
slot: 23 path='pmon3/cli/cmd/drop'
slot: 24 path='pmon3/cli/cmd/exec'
slot: 25 path='pmon3/cli/cmd/group/restart'
slot: 26 path='pmon3/cli/cmd/init'
slot: 27 path='pmon3/cli/cmd/kill'
slot: 28 path='pmon3/cli/cmd/reset'
slot: 29 path='pmon3/cli/cmd/topn'
slot: 30 path='pmon3/pmond/db'
slot: 31 path='pmon3/pmond/repo'
slot: 32 path='pmon3/pmond/process'
slot: 33 path='pmon3/pmond/controller/base/exec'
slot: 34 path='pmon3/pmond/controller/base/restart'
slot: 35 path='pmon3/pmond/controller/base/stop'
slot: 36 path='pmon3/pmond/controller/base/del'
slot: 37 path='pmon3/pmond/controller/group'
slot: 38 path='pmon3/pmond/controller'
slot: 39 path='pmon3/pmond/god'
remap table:
coverage: 73.6% of statements in pmon3/cli, pmon3/cli/cmd, pmon3/cli/cmd/base, pmon3/cli/cmd/completion, pmon3/cli/cmd/del, pmon3/cli/cmd/desc, pmon3/cli/cmd/dgraph, pmon3/cli/cmd/drop, pmon3/cli/cmd/exec, pmon3/cli/cmd/export, pmon3/cli/cmd/group, pmon3/cli/cmd/group/assign, pmon3/cli/cmd/group/create, pmon3/cli/cmd/group/del, pmon3/cli/cmd/group/desc, pmon3/cli/cmd/group/drop, pmon3/cli/cmd/group/list, pmon3/cli/cmd/group/remove, pmon3/cli/cmd/group/restart, pmon3/cli/cmd/group/stop, pmon3/cli/cmd/init, pmon3/cli/cmd/kill, pmon3/cli/cmd/list, pmon3/cli/cmd/log, pmon3/cli/cmd/logf, pmon3/cli/cmd/reset, pmon3/cli/cmd/restart, pmon3/cli/cmd/stop, pmon3/cli/cmd/topn, pmon3/cli/output/group/list, pmon3/cli/output/process/desc, pmon3/cli/output/process/list, pmon3/cli/output/process/one, pmon3/conf, pmon3/pmond, pmon3/pmond/controller, pmon3/pmond/controller/base, pmon3/pmond/controller/base/del, pmon3/pmond/controller/base/exec, pmon3/pmond/controller/base/restart, pmon3/pmond/controller/base/stop, pmon3/pmond/controller/group, pmon3/pmond/db, pmon3/pmond/god, pmon3/pmond/model, pmon3/pmond/observer, pmon3/pmond/process, pmon3/pmond/repo
ok pmon3/test/e2e 218.039s coverage: 73.6% of statements in pmon3/cli, pmon3/cli/cmd, pmon3/cli/cmd/base, pmon3/cli/cmd/completion, pmon3/cli/cmd/del, pmon3/cli/cmd/desc, pmon3/cli/cmd/dgraph, pmon3/cli/cmd/drop, pmon3/cli/cmd/exec, pmon3/cli/cmd/export, pmon3/cli/cmd/group, pmon3/cli/cmd/group/assign, pmon3/cli/cmd/group/create, pmon3/cli/cmd/group/del, pmon3/cli/cmd/group/desc, pmon3/cli/cmd/group/drop, pmon3/cli/cmd/group/list, pmon3/cli/cmd/group/remove, pmon3/cli/cmd/group/restart, pmon3/cli/cmd/group/stop, pmon3/cli/cmd/init, pmon3/cli/cmd/kill, pmon3/cli/cmd/list, pmon3/cli/cmd/log, pmon3/cli/cmd/logf, pmon3/cli/cmd/reset, pmon3/cli/cmd/restart, pmon3/cli/cmd/stop, pmon3/cli/cmd/topn, pmon3/cli/output/group/list, pmon3/cli/output/process/desc, pmon3/cli/output/process/list, pmon3/cli/output/process/one, pmon3/conf, pmon3/pmond, pmon3/pmond/controller, pmon3/pmond/controller/base, pmon3/pmond/controller/base/del, pmon3/pmond/controller/base/exec, pmon3/pmond/controller/base/restart, pmon3/pmond/controller/base/stop, pmon3/pmond/controller/group, pmon3/pmond/db, pmon3/pmond/god, pmon3/pmond/model, pmon3/pmond/observer, pmon3/pmond/process, pmon3/pmond/repo
```
### What did you expect to see?
coverage report generated without any errors | NeedsInvestigation,compiler/runtime | low | Critical |
2,572,236,743 | vscode | Action/Command to show all files of a folder in a single view | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
This idea actually comes from the new diff view that shows multiple files.
Sometimes, in some languages, there are many small files related to one another, usually residing in a single folder. An action or command to open all of these files in a single editor view, just like the multi-file diff view, would make it easy to navigate them.
This could be further customized by letting extensions open an arbitrary list of files from the workspace in a single editor view. | feature-request | low | Minor |
2,572,259,861 | tensorflow | tensorflow.python.ops.parsing_ops.parse_single_sequence_example can cause a crash | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf-nightly 2.19.0-dev20241007
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04.3 LTS
### Mobile device
Linux Ubuntu 20.04.3 LTS
### Python version
3.10.14
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I have confirmed that the code below crashes on `tf-nightly 2.19.0-dev20241007` (nightly build).
Please find the [gist](https://colab.research.google.com/drive/17PzKxkDEr3N8E9D9Kk_mT2A1LyPpoZZe?usp=sharing) to reproduce the issue.
### Standalone code to reproduce the issue
```python
from tensorflow.core.example import example_pb2
from tensorflow.core.example import feature_pb2
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import parsing_ops

example = example_pb2.Example
feature = feature_pb2.Feature
features = lambda d: feature_pb2.Features(feature=d)
bytes_feature = lambda v: feature(bytes_list=feature_pb2.BytesList(value=v))
int64_feature = lambda v: feature(int64_list=feature_pb2.Int64List(value=v))
float_feature = lambda v: feature(float_list=feature_pb2.FloatList(value=v))
feature_list = lambda l: feature_pb2.FeatureList(feature=l)
feature_lists = lambda d: feature_pb2.FeatureLists(feature_list=d)
sequence_example = example_pb2.SequenceExample

def testSequenceExampleListWithWrongShapeFails():
    original = sequence_example(feature_lists=feature_lists(
        {'a': feature_list([int64_feature([2, 3]), int64_feature([2, 3, 4])])}))
    serialized = original.SerializeToString()
    parsing_ops.parse_single_sequence_example(**{
        'example_name': 'in1',
        'serialized': ops.convert_to_tensor(serialized),
        'sequence_features': {
            'a': parsing_ops.FixedLenSequenceFeature((0, 0), dtypes.int64)
        },
    })

testSequenceExampleListWithWrongShapeFails()
```
### Relevant log output
```shell
Floating point exception (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops | medium | Critical |
2,572,267,701 | kubernetes | Add Opaque as default type in `kubectl create secret` | ### What happened?
Reference : https://github.com/kubernetes/kubernetes/pull/120337/files#r1408525014
Opaque is not set as the default type in the code; the help message shows an empty string for the type:
```shell
Options:
...
--type='':
The type of secret to create
...
```
### What did you expect to happen?
Options:
```shell
...
--type='Opaque':
The type of secret to create
...
```
### How can we reproduce it (as minimally and precisely as possible)?
```shell
kubectl create secret generic --help
```
### Anything else we need to know?
NONE
### Kubernetes version
<details>
```console
$ kubectl version
```
</details>
### Cloud provider
<details>
any
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,area/kubectl,sig/cli,needs-triage | low | Major |
2,572,273,323 | tensorflow | tensorflow.python.ops.signal.dct_ops.dct aborts with "Assertion failure no zero-sized FFTs" | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf-nightly 2.19.0-dev20241007
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04.3 LTS
### Mobile device
_No response_
### Python version
3.10.14
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I have confirmed that the code below crashes on `tf-nightly 2.19.0-dev20241007` (nightly build).
Please find the [gist](https://colab.research.google.com/drive/1oBjZoqp6WZn_VU-CTxZ3bspUc6v9D51s?usp=sharing) to reproduce the issue.
### Standalone code to reproduce the issue
```python
import numpy as np

from tensorflow.python.eager import def_function
from tensorflow.python.framework import tensor_spec
from tensorflow.python.ops.signal import dct_ops

def test_with_dynamic_dimensions(dct_type, norm, shape, dtype):
    @def_function.function
    def func(signals):
        return dct_ops.dct(signals, n=norm, type=dct_type, norm=None)

    signals_spec = tensor_spec.TensorSpec([None] * len(shape), dtype)
    f = func.get_concrete_function(signals_spec)
    f(np.zeros([0], dtype=dtype))

test_with_dynamic_dimensions(3, None, [3], np.float32)
```
### Relevant log output
```shell
DUCC FFT c2r failed:
bazel-out/k8-opt/bin/external/ducc/_virtual_includes/fft/ducc/src/ducc0/fft/fft1d_impl.h: 2948 (static Trpass<Tfs> ducc0::detail_fft::rfftpass<float>::make_pass(size_t, size_t, size_t, const Troots<Tfs> &, bool) [Tfs = float]):
Assertion failure
no zero-sized FFTs
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,572,360,685 | PowerToys | PowerRename: Possibility to choose between "Created" and "Changed" date | ### Description of the new feature / enhancement
Hello!
The "Created" date is currently used for _$YYYY.$MM.$DD (see screenshot). But for us, the "Changed" date is often relevant, see screenshot.
It would be helpful to be able to switch in the PowerRename interface whether the "Created" or "Changed" date is used for the current renaming.
I would ask you to check this.
Thank you very much.
Matthias

### Scenario when this would be used?
We often receive files (e.g. floor plans) without an index/date in the file name. We would like to add _$YYYY.$MM.$DD to these PDFs with PowerRename so that the "Changed" date is visible in the file name.
### Supporting information
v0.83.0
Windows 11, 64bit, German | Needs-Triage | low | Minor |
2,572,373,526 | next.js | Consecutive slashes in URL trigger routing error and no page renders in browser when Next.js runs behind gateway | ### Link to the code that reproduces this issue
https://github.com/cosieLq/exampleApp_nextjs/tree/reproduction-double-slash-routing-error
### To Reproduce
1. Start the application (npm run dev or npm run start)
2. Start the gateway (node proxy.js)
3. Go to localhost:8000///about
4. Observe browser console and see 'Error: invariant: invalid relative URL, router received...'
### Current vs. Expected behavior
I expected no error in browser console and the page to render correctly.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:52 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.15.1
npm: 10.7.0
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.179 // Latest available version is detected (15.0.0-canary.179).
eslint-config-next: N/A
react: 19.0.0-beta-04b058868c-20240508
react-dom: 19.0.0-beta-04b058868c-20240508
typescript: 5.1.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Internationalization (i18n), Navigation, Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Other (Deployed)
### Additional context
When i18n is disabled in next.config.js, no error will appear and the page renders correctly. | bug,Navigation,Internationalization (i18n),Pages Router | low | Critical |
2,572,417,889 | pytorch | `torch.ops.aten.divide_` leads to "RuntimeError: result type Float can't be cast to the desired output type Int" | ### ๐ Describe the bug
# Bug Description
Similar to [42246](https://github.com/pytorch/pytorch/issues/42246), which has been fixed, I encountered a similar error while running `torch.ops.aten.divide_` and `torch.ops.aten.true_divide_`.
# The Bug Code 1 - torch.ops.aten.divide_
```python
import torch

tensor1 = torch.tensor([10, 20, 30], dtype=torch.int)
tensor2 = torch.tensor([2, 4, 5], dtype=torch.int)
result = torch.ops.aten.divide_(tensor1, tensor2)
print(result)
```
# The Bug Code 2 - torch.ops.aten.true_divide_
```python
import torch

tensor1 = torch.tensor([10, 20, 30], dtype=torch.int)
tensor2 = torch.tensor([2, 4, 5], dtype=torch.int)
result = torch.ops.aten.true_divide_(tensor1, tensor2)
print(result)
```
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6444Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 3601.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 90 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @manuelcandales @SherlockNoMad @angelayi | low priority,triaged,actionable,module: core aten | low | Critical |
2,572,435,215 | neovim | LSP: Cannot use Ctrl-O to scroll through omnifunc menu | ### Problem
After pressing `Ctrl-X Ctrl-O` one is supposed to be able to continue pressing `Ctrl-O` to cycle through the options presented. Convenient because you don't have to move your fingers after pressing the combo that brought up the menu in the first place.
I have tested this with both `rust-analyzer` and `clangd` changing _only_ `pattern` and `cmd` in the suggested minimal setup.
I believe this is a general LSP issue as it happens with two different LSP servers, and it does _not_ happen using the following minimal (albeit stupid) setup:
```vim
function! Nonsense(findstart, base)
if a:findstart == 1
return 0
endif
return ["lol", "rofl", "mao"]
endfunction
set omnifunc=Nonsense
```
Here I can cycle the matches using all of `Ctrl-[NPO]` as expected, in accordance with the docs:

```
CTRL-O or
CTRL-N      Use the next match. This match replaces the previous
            one.
CTRL-P      Use the previous match. This match replaces the
            previous one.
```
### Steps to reproduce using "nvim -u minimal_init.lua"
Going forward with `clangd` here:
```lua
--- CHANGE THESE
local pattern = 'cpp'
local cmd = {'clangd'}
-- Add files/folders here that indicate the root of a project
local root_markers = {'.git', '.editorconfig'}
-- Change to table with settings if required
local settings = vim.empty_dict()
vim.api.nvim_create_autocmd('FileType', {
pattern = pattern,
callback = function(args)
local match = vim.fs.find(root_markers, { path = args.file, upward = true })[1]
local root_dir = match and vim.fn.fnamemodify(match, ':p:h') or nil
vim.lsp.start({
name = 'bugged-ls',
cmd = cmd,
root_dir = root_dir,
settings = settings
})
end
})
```
After pressing `Ctrl-X Ctrl-O` on the following partial line (in an otherwise correct C++ file) with the cursor at the end of the line:
```cpp
std::vector<int> v; v.p
```
I see a menu suggesting `pop_back()` and `push_back()`. Pressing `Ctrl-O` again inserts the first suggestion, closes the menu, and `Ctrl-O` is then interpreted as if omnifunc were not active (i.e. as in `:h i_CTRL-O`). `Ctrl-N` and `Ctrl-P` both still work as expected.
### Expected behavior
I expect `push_back()` to be selected from the `omnifunc` menu.
### Nvim version (nvim -v)
0.10.1
### Language server name/version
clangd 18.1.8
### Operating system/version
Ubuntu 24.04
### Log file
https://gist.github.com/Osse/1ce02d434522e82a8cf7f9eea00a715b | bug,input,has:repro,lsp,completion,insert-mode | low | Critical |
2,572,439,725 | flutter | `SNAPSHOT` in plugin `build.gradle` causes false positive warnings in some build analysis systems | Hi! I would like to use these plugins
- path_provider_android
- url_launcher_android
in my Flutter application.
My CI/CD procedure checks for vulnerable dependencies. For these plugins, an unstable version is detected because of the version declared in the build.gradle file of the native Android components (1.0-SNAPSHOT). Would it be possible to align the version in the build.gradle file with the version of the Dart-side plugin?
2,572,441,173 | pytorch | Better error message in `torch.linalg.vector_norm` | ### ๐ Describe the bug
When given a complex `ord` value, `torch.linalg.vector_norm` raises an unhelpful overflow error rather than a clear message that complex `ord` values are not supported.
```python
import torch

torch.linalg.vector_norm(torch.randn(3, 3), torch.tensor(2 + 3j), dim=(0, 1), keepdim=False)
# RuntimeError: value cannot be converted to type double without overflow
```
### Versions
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-106-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 1
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4399.64
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 6 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] torchviz==0.0.2
[pip3] triton==2.3.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
cc @malfet | low priority,module: error checking,triaged,actionable,module: edge cases | low | Critical |
2,572,510,414 | tauri | [feat] screensharing without pop up window | ### Describe the problem
Currently to share the screen, we can call `navigator.mediaDevices.getDisplayMedia` which will popup a mandatory window to ask user to choose which screen to share.
### Describe the solution you'd like
Is it possible to get the screen id from a Tauri API (like Electron's `desktopCapturer`) and call `navigator.getUserMedia` to produce the media stream from that id directly, avoiding the popup window? Like the following:

```javascript
const stream = navigator.getUserMedia({
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: screenId,
    }
  }
})
```
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,572,515,592 | stable-diffusion-webui | Color Discrepancies in Facial Restoration with ADetailer[Bug]: | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
I encountered noticeable color differences when using the ADetailer extension for facial restoration. I have tried switching different models, various versions of the web UI, different versions of ADetailer, and different versions of PyTorch, as well as upgrading to the latest graphics card driver, but the issue persists. Whenever ADetailer is enabled, the generated images have obvious color discrepancies. My system environment is Ubuntu 24.02, NVIDIA-SMI 560.35.03, Driver Version: 560.35.03, CUDA Version: 12.6, GPU: 4090.
open ADetailer:

close ADetailer:

The difference is here; it is very noticeable on my monitor.

### Steps to reproduce the problem
In text2img, enable ADetailer, select face_yolov8m.pt, and start generating images.
### What should have happened?
No color differences before and after restoration.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
### Console logs
```Shell
The logs outputted in the terminal show no errors.
```
### Additional information
_No response_ | bug-report | low | Critical |
2,572,529,887 | electron | [Bug]: When `window.webContents.navigationHistory.goBack()` is called, the title set by the HTML's `<title>` tag will override the title set by `window.setTitle()`. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.0.1
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 23H2 22631.4169 Windows Feature Experience Pack 1000.22700.1034.0
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
```javascript
const window = new BrowserWindow();
window.setTitle('myTitle');
// After a route change in my HTML file, execute:
window.webContents.navigationHistory.goBack();
// => The title of the window should not be overridden by the HTML's title when `goBack()` is called.
```
### Actual Behavior
```javascript
const window = new BrowserWindow();
window.setTitle('myTitle');
// After a route change in my HTML file, execute:
window.webContents.navigationHistory.goBack();
// => At this point, the title of the window is no longer 'myTitle', but the content of the title tag in the HTML file.
```
### Testcase Gist URL
_No response_
### Additional Information
The front-end uses Vue.js + Vue-Router | platform/windows,status/confirmed,has-repro-gist,has-repro-repo,32-x-y,33-x-y,34-x-y | low | Critical |
2,572,653,355 | vscode | Inline chat: allow to resize at borders to make larger | Now that inline chat looks more like a peek widget, I would like to make it larger by dragging at its borders with a sash, similar to how we can do this for peek references:


| feature-request,verification-found,verification-needed,inline-chat | low | Minor |
2,572,664,806 | three.js | FBXLoader : node translation should not be applied to morph elements | ### Description
I have trouble with some of my FBX files: when I set `morphTargetInfluences` to some weight, the rendered influence does not match what was made in 3ds Max.
I did some research to understand why, and found that the 'GeometricTranslation' is used in the function genMorphGeometry() when applying a 4x4 matrix (`positionAttribute.applyMatrix4( preTransform );`).
On the screenshot you can see empty space at the top and bottom of the wires of the lamp that corresponds to the value in 'GeometricTranslation'.
FBX blend shapes contain a list of vertex indices to move and the corresponding vertex positions depending on the weight. It is raw data, so for sure some transformation must be applied with the matrix to match the expected render.
But blend shape vertex positions are already the delta positions from the base vertex positions.
So the 'GeometricTranslation' is applied once to the base vertex positions, and then currently one more time to the blend shape vertex positions, which is, in my view, not correct. I didn't find any documentation about it (hard to find for FBX), but I had already written a C# FBX reader some time ago where I exclude this value.
I currently have a "dirty fix" locally. I am not sure it is the right approach, which is why I didn't make a pull request (I'm also not used to GitHub): I pass a second parameter to genGeometry() without the translation (see code section).
### Reproduction steps
1. import a fbx with blend shapes and 'GeometricTranslation' not empty on the FbxNode that have blendshape
2. set 'morphTargetInfluences' to 1
3. display in scene
### Code
> remove the translation for the morpher in genGeometry and pass the matrix as well

```js
genGeometry( geoNode, skeleton, morphTargets, preTransform, preTransform2 ) {
	...
}

// at the call site:
const transform = generateTransform( transformData );
transformData.translation = undefined;
const transform2 = generateTransform( transformData );
return this.genGeometry( geoNode, skeleton, morphTargets, transform, transform2 );
```

> until it reaches the genMorphGeometry function

```js
genMorphGeometry( parentGeo, parentGeoNode, morphGeoNode, preTransform2, name ) {
	...
	positionAttribute.applyMatrix4( preTransform2 );
	...
}
```
### Live example
*
### Screenshots

### Version
r169
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | Loaders | low | Minor |
2,572,702,394 | node | `assert.any` for `assert.deepEqual` to match on types instead of values | ### What is the problem this feature will solve?
Given for example a `Date.now` in a nested object I would like be able to run `deepEqual` on the object.
Such situations often happen when testing responses from APIs or something similar.
This obviously will throw an AssertionError:
```javascript
assert.deepEqual({
  a: 1,
  b: Date.now()
}, {
  a: 1,
  b: 1728380577053,
});
```
### What is the feature you are proposing to solve the problem?
One option would be to introduce an asymmetric matcher `assert.any` that could dynamically check on the given type in the expected object.
Checking the type is often enough in such situation.
This would pass:
```javascript
assert.deepEqual({
  a: 1,
  b: Date.now()
}, {
  a: 1,
  b: assert.any(Number),
});
```
Another option would be to let you define custom asymmetric matchers:
```javascript
assert.deepEqual({
a: 1,
b: Date.now()
}, {
a: 1,
b: assert.asymetricMatcher((v) => typeof v === "Number"),
});
```
### What alternatives have you considered?
I tried to use another assertion library which are capable of doing this: Vitest, unexpected.js, Jest
But this gives mangled output and I am not satisfied to bring in such big libraries for the sake of only doing asymmetric matching. Also I explain my tests in a documentation for other developers and I would like to use the Node built-in tools for simplicity. | assert,feature request | medium | Critical |
2,572,747,518 | go | proposal: slices: functions Shift, Rotate. | ### Proposal Details
```go
// Shift returns a slice with n zero values of E inserted at the given position.
func Shift[S ~[]E, E any](s S, at, n int) S

// Rotate right-rotates the slice by n places. To left-rotate use -n.
func Rotate[S ~[]E, E any](s S, n int) S
```
Judging from the source code of the `slices` package there already is an internal "rotateLeft/Right" function. I am unsure if there is a good reason not to export this useful function (especially as it is not trivial to implement without allocations).
A "Shift" function is not trivial to implement efficiently either, I think (without intermediate allocations, i.e. making a zero slice of length n and inserting it). Maybe this can be done using other functions in the package that I am unaware of. In that case just an example would be nice.
(Looking at the implementation of `Insert` I think a Shift function could simplify its implementation a little. Also `Insert` checks for overlap but this seems covered by `copy` already? Although maybe there's more to it than I can see by glancing at it).
### Updates after feedback
Apparently `shift` is commonly used to refer to a different operation on arrays/slices (especially in scripting languages) and bits. Alternative names for `Shift` such as `InsertN` or `InsertZeros` should be considered. | Proposal | low | Major |
2,572,765,212 | next.js | Catch-all Segments are mistakenly being triggered when it follows a Dynamic Segment (both get triggered) | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/suspicious-pine-go8s7s?layout=%257B%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522rootPanelGroup%2522%253A%257B%2522direction%2522%253A%2522horizontal%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522id%2522%253A%2522ROOT_LAYOUT%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522cm20a6r6w00063b6wzyhxsa0n%2522%252C%2522sizes%2522%253A%255B70%252C30%255D%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522EDITOR%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522id%2522%253A%2522cm20a6r6v00023b6wqv6kbi1q%2522%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522SHELLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522id%2522%253A%2522cm20a6r6v00043b6wab58upw0%2522%257D%255D%252C%2522sizes%2522%253A%255B100%255D%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522DEVTOOLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522id%2522%253A%2522cm20a6r6w00053b6wmeeae01k%2522%257D%255D%252C%2522sizes%2522%253A%255B100%255D%257D%255D%252C%2522sizes%2522%253A%255B50%252C50%255D%257D%252C%2522tabbedPanels%2522%253A%257B%2522cm20a6r6v00023b6wqv6kbi1q%2522%253A%257B%2522tabs%2522%253A%255B%2
57B%2522id%2522%253A%2522cm20a6r6v00013b6w1hndt8h0%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522FILE%2522%252C%2522filepath%2522%253A%2522%252FREADME.md%2522%252C%2522state%2522%253A%2522IDLE%2522%257D%255D%252C%2522id%2522%253A%2522cm20a6r6v00023b6wqv6kbi1q%2522%252C%2522activeTabId%2522%253A%2522cm20a6r6v00013b6w1hndt8h0%2522%257D%252C%2522cm20a6r6w00053b6wmeeae01k%2522%253A%257B%2522id%2522%253A%2522cm20a6r6w00053b6wmeeae01k%2522%252C%2522activeTabId%2522%253A%2522cm20a84p6008x3b6w9sbw3g1u%2522%252C%2522tabs%2522%253A%255B%257B%2522type%2522%253A%2522UNASSIGNED_PORT%2522%252C%2522port%2522%253A3000%252C%2522id%2522%253A%2522cm20a84p6008x3b6w9sbw3g1u%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522path%2522%253A%2522%252Fus%2522%257D%255D%257D%252C%2522cm20a6r6v00043b6wab58upw0%2522%253A%257B%2522tabs%2522%253A%255B%257B%2522id%2522%253A%2522cm20a6r6v00033b6wuq4qm7xh%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522dev%2522%257D%255D%252C%2522id%2522%253A%2522cm20a6r6v00043b6wab58upw0%2522%252C%2522activeTabId%2522%253A%2522cm20a6r6v00033b6wuq4qm7xh%2522%257D%257D%252C%2522showDevtools%2522%253Atrue%252C%2522showShells%2522%253Atrue%252C%2522showSidebar%2522%253Atrue%252C%2522sidebarPanelSize%2522%253A15%257D
### To Reproduce
- Using app router
- Have a catch-all segment immediately follow a dynamic segment (`app/[dynamic]/[...catchAll]/`)
- page.tsx inside dynamic segment
- page.tsx inside catch-all segment
- Console log something in both components
- Note console logs in both files get fired when navigating to the dynamic segment's page.tsx.
It seems to work in the sandbox but not locally. Please could this be checked out?
### Current vs. Expected behavior
`app/[dynamic]/[...catchAll]/`
When navigating to path `/us`, for example, `/app/[dynamic]/page.tsx` should be the relevant page triggered, and yes it is the one that gets used, but the console logs inside `/app/[dynamic]/[...catchAll]/page.tsx` are being triggered also. This is not the expected behaviour according to docs:
https://nextjs.org/docs/app/building-your-application/routing/dynamic-routes#catch-all-segments
See the 'optional catch-all' section: making the catch-all optional is the only way to have it also match the route without the parameter. Therefore the (non-optional) catch-all should not be triggered for the `/us` path as mentioned above.
> For example, app/shop/[[...slug]]/page.js will also match /shop, in addition to /shop/clothes, /shop/clothes/tops, /shop/clothes/tops/t-shirts.
> The difference between catch-all and optional catch-all segments is that with optional, the route without the parameter is also matched (/shop in the example above).
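The documented matching semantics can be sketched as a tiny matcher (an illustration in Python, not Next.js internals; the helper name and segment encoding are made up for this example):

```python
def matches(pattern_segments, path):
    """Return True if a URL path matches a route pattern.

    Pattern segments: plain strings, "[name]" (dynamic, one segment),
    "[...name]" (catch-all, needs >= 1 remaining segment),
    "[[...name]]" (optional catch-all, needs >= 0 remaining segments).
    """
    parts = [p for p in path.split("/") if p]
    for i, seg in enumerate(pattern_segments):
        if seg.startswith("[[..."):
            return True  # optional catch-all matches the rest, even nothing
        if seg.startswith("[..."):
            return len(parts) > i  # catch-all needs at least one segment left
        if i >= len(parts):
            return False
        # "[name]" matches any single segment; literals must match exactly
        if not (seg.startswith("[") and seg.endswith("]")) and seg != parts[i]:
            return False
    return len(parts) == len(pattern_segments)

# /us should be served only by app/[dynamic]/page.tsx:
assert matches(["[dynamic]"], "/us")
assert not matches(["[dynamic]", "[...catchAll]"], "/us")   # non-optional
assert matches(["[dynamic]", "[[...catchAll]]"], "/us")     # optional
```

Under these rules, the non-optional `[...catchAll]` route should never fire for `/us`, which is what the docs quoted below describe and what the reported behavior contradicts.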
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 21.6.0: Mon Jun 24 00:56:10 PDT 2024; root:xnu-8020.240.18.709.2~1/RELEASE_X86_64
Binaries:
Node: 20.11.0
npm: 10.2.4
Yarn: 1.22.19
pnpm: 9.10.0
Relevant Packages:
next: 14.1.0
eslint-config-next: 14.1.0
react: 18.2.0
react-dom: 18.2.0
typescript: 4.9.5
Next.js Config:
output: N/A
warn - Latest canary version not detected, detected: "14.1.0", newest: "15.0.0-canary.179".
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Not sure, Developer Experience, Documentation, Module Resolution, Navigation, Parallel & Intercepting Routes, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Other (Deployed)
### Additional context
_No response_ | Navigation,Runtime,Module Resolution,Parallel & Intercepting Routes | low | Minor |
2,572,767,663 | pytorch | [feature request] [ux] Introduce special value `SDPBackend.ALL` (or similar functionality) to mean "all available backends" for `torch.nn.attention.sdpa_kernel(...)` (some special value `SDPBackend.ERROR` already exists) | ### ๐ The feature, motivation and pitch
This would simplify code using `torch.nn.attention.sdpa_kernel` as the list of backends may evolve (e.g. Flex Attention might become a backend for SDPA?) and hardcoding the list of all available backends in end-user code is brittle.
Otherwise hacks are required, such as supporting two code paths (one with the `sdpa_kernel` context manager, another without), and this complicates the code / graph.
Alternative solutions might be:
- supporting `with torch.nn.attention.sdpa_kernel([]):` to mean all backends
- supporting SDPBackend.all_backends or something to provide a list of all backends (should also be supported in torch.compile) - so that the user does not need to hardcode this list or iterate the members of enum and subtracting special value ERROR
Another minor inconvenience is that `torch.nn.attention.sdpa_kernel(...)` accepts only a `list[SDPBackend]`, while in configs a list of strings is often simpler to specify, so preprocessing with `getattr` is currently needed.
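A sketch of what the workaround looks like today, and what a hypothetical `SDPBackend.ALL` plus string preprocessing could replace (the enum here is a stand-in so the snippet is self-contained; real code would import `SDPBackend` from `torch.nn.attention`, and the member values are illustrative):

```python
from enum import Enum

class SDPBackend(Enum):  # stand-in for torch.nn.attention.SDPBackend
    ERROR = -1
    MATH = 0
    FLASH_ATTENTION = 1
    EFFICIENT_ATTENTION = 2
    CUDNN_ATTENTION = 3

def all_backends():
    """What a hypothetical SDPBackend.ALL could expand to: every member
    except the special ERROR value, so user code need not hardcode the list."""
    return [b for b in SDPBackend if b is not SDPBackend.ERROR]

def backends_from_config(names):
    """The getattr preprocessing the issue mentions: turn config-friendly
    strings into enum members."""
    return [getattr(SDPBackend, name) for name in names]

print(all_backends())
print(backends_from_config(["MATH", "FLASH_ATTENTION"]))
```

The point of the request is that `all_backends()` (or an equivalent `SDPBackend.ALL` sentinel) would live in PyTorch itself and stay correct as backends are added, instead of every user re-deriving it by subtracting `ERROR` from the enum.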
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | triaged,enhancement,module: sdpa | low | Critical |
2,572,818,233 | ollama | GPU VRAM Usage Timeout Warnings on Embeddings Model Load | ### What is the issue?
Description:
We are experiencing repeated GPU VRAM recovery timeouts while running multiple models on the ollama platform. The GPU in use is 2x NVIDIA RTX A5000. The system logs show that the VRAM usage does not recover within the expected timeout (5+ seconds), which affects performance and stability.
The issue occurs when loading and running embedding models, particularly when switching between different models. Below is an excerpt of the log showing the repeated warnings and the affected models:
```
Okt 08 12:26:37 Aerion3 ollama[104243]: llama_model_loader: - type f32: 243 tensors
Okt 08 12:26:37 Aerion3 ollama[104243]: llama_model_loader: - type f16: 146 tensors
Okt 08 12:31:41 Aerion3 ollama[104243]: time=2024-10-08T12:31:41.710+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.167422277 model=/usr/share/ollama/.ollama/models/blobs/sha256-03aeef8493ea9a2b8da023e8d21ce77a97e83de66a692417579aa27b717cdaf3
Okt 08 12:31:41 Aerion3 ollama[104243]: time=2024-10-08T12:31:41.959+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.417004589 model=/usr/share/ollama/.ollama/models/blobs/sha256-03aeef8493ea9a2b8da023e8d21ce77a97e83de66a692417579aa27b717cdaf3
Okt 08 12:31:46 Aerion3 ollama[104243]: time=2024-10-08T12:31:46.768+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.057837537 model=/usr/share/ollama/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
```
Possible Causes under consideration:
- Insufficient VRAM: The GPU may not have enough VRAM to efficiently load and unload multiple models, leading to delays in VRAM recovery. **This seems unlikely because the `nvtop` never shows GPU consumption above 4% when the warning appears**
- Memory Fragmentation: Fragmented memory in the VRAM might be causing issues when trying to allocate new contiguous memory.
- GPU Overload: The workload may be too heavy for the GPU, especially if multiple models are loaded simultaneously.
- CUDA Memory Management: Inefficient management of CUDA memory offloading may be causing this issue.
System Information:
- GPU: 2x NVIDIA RTX A5000
- ollama Version: 0.3.12
- Model in Use: `jina-embeddings-v2-base-en:latest`, `mxbai-embed-large-v1` and other models
- VRAM Available: ~24 GiB x2
Steps to Reproduce:
- Load and run multiple models in parallel or sequentially.
- Monitor system logs for VRAM recovery warnings as models are switched or loaded.
Expected Behavior:
The system should manage VRAM more efficiently, releasing it within the timeout to avoid warnings and improve overall performance.
Request:
Please investigate possible improvements to VRAM memory management or provide guidance on how to better configure the system to avoid these timeouts.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.12 | bug,memory | low | Major |
2,572,908,443 | godot | Compatibility renderer 3D MSAA suspected memory leak (Android) | ### Tested versions
v4.4.dev3.official [f4af8201b]
### System information
OpenGL API OpenGL ES 3.2 v1.r32p1-01eac0.394145956bc7cd8e697b330aba11e3d3 - Compatibility - Using Device: ARM - Mali-G57 MC2
### Issue description
When enabling MSAA 3D on Android, the 'Native' memory use in the Android Studio profiler continuously rises. I initially didn't know whether this was the cause of my random crashing after some minutes of idling; after spending more time idling it turned out to be: with MSAA disabled, ~10 minutes to crash became 2+ hours stable.
MSAA disabled:

Enabled:

### Steps to reproduce
See Description.
### Minimal reproduction project (MRP)
TBD | bug,platform:android,topic:rendering,topic:3d | low | Critical |
2,572,917,916 | ollama | Getting Error with OpenAI compatibility | ### What is the issue?
```js
import { NextApiRequest } from 'next';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import OpenAI from 'openai';
const openai = new OpenAI({
baseURL: 'http://localhost:11434/v1',
apiKey: 'ollama', // required but unused
});
export async function POST(req: NextApiRequest) {
const body = await req.json();
console.log("messages", body);
try {
const response = await openai.chat.completions.create({
model: 'llama3',
messages: body.messages,
});
const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);
} catch (error) {
console.error("error", error);
}
}
```
Log before Error:
```
messages { messages: [ { role: 'user', content: "What is today's date?" } ] }
```
Getting Error
```
error APIConnectionError: Connection error.
at OpenAI.makeRequest (webpack-internal:///(rsc)/./node_modules/openai/core.mjs:321:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async POST (webpack-internal:///(rsc)/./src/app/api/chat/route.ts:20:26)
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.0 | bug,api | low | Critical |
2,572,978,280 | godot | Close button not fully in the corner (Godot not truly full screen) | ### Tested versions
Godot 4.3 form steam and Godot 4.3 stable mono form the website.
### System information
Windows 11 1920x1080 screen
### Issue description
When closing Godot I throw my mouse into the corner of the screen and click the close button. But Godot's maximized window is not truly fullscreen, so the click lands on the window underneath Godot (like my browser, IDE, or Discord) and closes that instead.
### Steps to reproduce
* Open Godot and another window.
* Make both fullscreen and have Godot on top.
* Put your mouse in the far corner of the close button.
* If that doesn't work, go to the far right of the screen and then up to the corner.
* Then accidentally close the other window instead of Godot.
Here you can also see that Discord is bigger when fullscreen than Godot. Look at the left of the server highlights.


### Minimal reproduction project (MRP)
N/A | discussion,needs testing,topic:gui | low | Minor |
2,573,016,042 | godot | "Theme Override" for "Font" does not exist for AcceptDialog-node. | ### Tested versions
N/A
### System information
Godot v4.3.stable - Windows 10.0.18363 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 970 (NVIDIA; 32.0.15.5599) - Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz (8 Threads)
### Issue description
The AcceptDialog node does not provide an option to override the font theme for its text labels; it only works for subcomponents that support the override feature.
I'm not sure if this limitation can be bypassed by using a parent node that allows some kind of overriding, but I haven't found a solution that works. If not, I could open it as a Godot Improvement Proposal.
Example (title and OK-button font can't be changed):

### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
[mrp_issue.zip](https://github.com/user-attachments/files/17292813/mrp_issue.zip)
| enhancement,discussion,topic:gui | low | Minor |
2,573,067,854 | PowerToys | Microsoft Mouse Without Borders | ### Description of the new feature / enhancement
Screenshot Ctrl+C / Ctrl+V across different computers
### Scenario when this would be used?
Two different computers on the same network connected by Microsoft Mouse Without Borders; the user copies an image (screenshot) from computer 1 and pastes it on computer 2.
### Supporting information
Ctrl+C plus Ctrl+V works fine for text, but not for screenshots.
2,573,114,645 | go | net: TestDualStackTCPListener failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestDualStackTCPListener"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8734647041618517201)):
=== RUN TestDualStackTCPListener
listen_test.go:273: listen tcp :49159: bind: address already in use
listen_test.go:273: listen tcp :49164: bind: address already in use
--- FAIL: TestDualStackTCPListener (0.01s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,573,116,650 | godot | Error ( !is_inside_tree() ) when opening a scene file with CPUParticles3D | ### Tested versions
- Reproducible in 4.3 official (and debug)
### System information
Windows 10
### Issue description
I started getting log spam.
My scene contains another subscene with CPUParticleSystem as a root. It is enabled and it is in the global space. It only occurs if I have set "editable children" and it occurs during both the instantiate() call of the scene and editor load of the scene.
I traced it using the debug editor and it seems that instantiation sets the particle system `enable` to `true` before the node has settled in the tree, causing an update to run and in turn causing the global space call to fail. This is a hypothesis.
The corresponding stack trace:
```
godot.windows.editor.x86_64.exe!Node3D::get_global_transform() Line 345
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\scene\3d\node_3d.cpp(345)
godot.windows.editor.x86_64.exe!CPUParticles3D::_particles_process(double p_delta) Line 679
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\scene\3d\cpu_particles_3d.cpp(679)
godot.windows.editor.x86_64.exe!CPUParticles3D::_update_internal() Line 653
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\scene\3d\cpu_particles_3d.cpp(653)
[Inline Frame] godot.windows.editor.x86_64.exe!call_with_variant_args_helper(CPUParticles3D *) Line 304
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\core\variant\binder_common.h(304)
[Inline Frame] godot.windows.editor.x86_64.exe!call_with_variant_args_dv(CPUParticles3D *) Line 451
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\core\variant\binder_common.h(451)
godot.windows.editor.x86_64.exe!MethodBindT<CPUParticles3D,bool>::call(Object * p_object, const Variant * * p_args, int p_arg_count, Callable::CallError & r_error) Line 343
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\core\object\method_bind.h(343)
godot.windows.editor.x86_64.exe!ClassDB::set_property(Object * p_object, const StringName & p_property, const Variant & p_value, bool * r_valid) Line 1516
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\core\object\class_db.cpp(1516)
godot.windows.editor.x86_64.exe!Object::set(const StringName & p_name, const Variant & p_value, bool * r_valid) Line 249
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\core\object\object.cpp(249)
godot.windows.editor.x86_64.exe!SceneState::instantiate(SceneState::GenEditState p_edit_state) Line 420
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\scene\resources\packed_scene.cpp(420)
godot.windows.editor.x86_64.exe!PackedScene::instantiate(PackedScene::GenEditState p_edit_state) Line 2093
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\scene\resources\packed_scene.cpp(2093)
godot.windows.editor.x86_64.exe!EditorNode::load_scene(const String & p_scene, bool p_ignore_broken_deps, bool p_set_inherited, bool p_clear_errors, bool p_force_open_imported, bool p_silent_change_tab) Line 4080
at C:\Users\hugol\AppData\Local\Temp\godot-debug-build-editor\editor\editor_node.cpp(4080)
```
Note that the "set" is the `enable` to `true` and likely we should skip _update_internal() if we're not in the tree:
https://github.com/godotengine/godot/blob/77dcf97d82cbfe4e4615475fa52ca03da645dbd8/scene/3d/cpu_particles_3d.cpp#L679
Possibly related issue on Godot 3.x
https://github.com/godotengine/godot/issues/47020
### Steps to reproduce
1. open the attached project. navigate to assets/rocket/rockets/rocket_t4_cyan.tscn
2. Observe the error message in the output
Note that I just copied the relevant bits of the project that exhibit the problem since I am unable to produce a file that has it manually. It might be some .tscn file corruption if the above is the intended behavior.
### Minimal reproduction project (MRP)
[cpuparticleproblem.zip](https://github.com/user-attachments/files/17293485/cpuparticleproblem.zip)
| bug,topic:editor,needs testing,topic:particles | low | Critical |
2,573,164,956 | rust | Decide on name for `Freeze` | We still need to pick a name for `Freeze` (which may still be `Freeze`) so that we can proceed with:
- https://github.com/rust-lang/rfcs/pull/3633
Thoughts?
cc @rust-lang/lang
@rustbot labels +T-lang +I-lang-nominated
| T-lang,proposed-final-comment-period,disposition-merge,C-discussion,I-lang-radar,I-lang-bikeshed | medium | Major |
2,573,201,249 | godot | AnimationNode.blend_input time parameter is ignored when not seeking. | ### Tested versions
- Reproducible in `4.3.dev6`, `4.3.beta1`, `4.3.beta3`, `4.3.rc1`, `4.3.stable`, `4.4.dev3`
- Even more broken in `4.1.4**`, `4.2.1.stable**`, `4.2.2**`
- Not reproducible in `4.2.stable`, `4.3-dev4`, `4.3-dev5`
- Not reproducible in `4.1.3.stable*`, `4.1.4-rc1*`, `4.1.4-rc2*`, `4.1.4-rc3*`
`*`: Mesh is broken in this version.
`**`: This version is even more broken, animation reverses when not seeking. Mesh is also broken
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - X11 - Vulkan (Forward+)
### Issue description
AnimationNode.blend_input has a time parameter which advance the connected input node when passing seek == true.
When connected to an AnimationNodeAnimation and passing seek == false, the time parameter is ignored: the connected node does not advance, and the time parameter has no effect at all.
https://github.com/user-attachments/assets/ba88b9b3-cc3a-42c1-9396-f1cb8a32b13c
### Steps to reproduce
In the MRP:
- Open Main.tscn - Select the AnimationTree - Tab into the AnimationTree editor - Select AnimationNodePass
- In the inspector, uncheck `force_seek`, and modify `additional_blend_input_time` to observe that the `time` parameter has no effect.
### Minimal reproduction project (MRP)
[mrp-non-seeking-blend_input.zip](https://github.com/user-attachments/files/17293916/mrp-non-seeking-blend_input.zip)
| enhancement,discussion,topic:animation | low | Critical |
2,573,225,193 | yt-dlp | [Hotstar] HTTP Error 503: Service Unavailable with cookies | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Earlier I could download videos from Hotstar using my login cookies, but after the recent yt-dlp update I can't download videos even with login cookies. I was also able to download yesterday on the same system.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
Command-line config: ['-c', '-o', '\\\\VBOXSVR\\v2\\Hotstar\\ishti-kutum\\video\\0142-%(title)s.%(ext)s', '-v', '--cookies', 'cookies.txt', '-N', '50', '-S', 'vcodec:h264,res:480,ext:mp4:m4a', 'https://www.hotstar.com/in/shows/ishti-kutum/1271269632/ignore_me/1000288197']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.10.07 from yt-dlp/yt-dlp [1a176d874] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-92795-gcdbf8847ea, ffprobe 2024-01-01-git-e1c1dc8347-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
ERROR: [hotstar] 1000288197: Unable to download JSON metadata: HTTP Error 503: Service Unavailable (caused by <HTTPError 503: Service Unavailable>)
File "yt_dlp\extractor\common.py", line 741, in extract
File "yt_dlp\extractor\hotstar.py", line 250, in _real_extract
File "yt_dlp\extractor\hotstar.py", line 65, in _call_api_v2
File "yt_dlp\extractor\hotstar.py", line 50, in _call_api_impl
File "yt_dlp\extractor\common.py", line 1151, in download_content
File "yt_dlp\extractor\common.py", line 1111, in download_handle
File "yt_dlp\extractor\common.py", line 961, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 910, in _request_webpage
File "yt_dlp\extractor\common.py", line 897, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4172, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 503: Service Unavailable
```
| account-needed,geo-blocked,site-bug,triage | medium | Critical |
2,573,248,826 | excalidraw | When selecting a rectangle, ellipse, diamond, image, linear or arrow tool, clicking on the canvas should create a default sided shape | Create a 100x100 shape on pointer up in the canvas if the pointer was not dragged if a rectangle, ellipse, diamond, image, linear or arrow tool is selected. | enhancement | low | Major |
2,573,320,834 | deno | `deno jupyter` doesn't support `Variables` table in VSCode | <img width="827" alt="Screenshot 2024-10-08 at 16 01 53" src="https://github.com/user-attachments/assets/bc652848-9e18-4786-be64-766a58bd9070">
| feat,deno jupyter | low | Minor |
2,573,329,301 | godot | `GPUParticles3D` is_emitting is incorrectly toggled | ### Tested versions
- Reproducible in 4.4 dev3
### System information
Godot v4.4.dev (0a9ad8f9d) - Artix Linux #1 SMP PREEMPT_DYNAMIC Wed, 02 Oct 2024 15:03:06 +0000 on Tty - X11 display driver, Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce GTX 1050 Ti (nvidia; 560.35.03) - AMD Ryzen 5 2600 Six-Core Processor (12 threads)
### Issue description
I have a `GPUParticles3D` with the `one_shot` property enabled, let's say with a 2-second lifetime. There is the scene of the `GPUParticles3D` itself and a main scene where it is placed multiple times.
The problem is that if emitting stops after 2 seconds in its own scene, it also stops in the scenes it is placed in. That is a chore but not a big issue, because I can manually start it in `_enter_tree()`. And there is a [proposal open for an autoplay option](https://github.com/godotengine/godot-proposals/issues/10936).
But the real bug is that if it is emitting in its own scene, and I close that scene, then start the main scene, it emits (as expected) but once the emitting ends for all particles in the main scene, it also ends in its own scene (dirty flag?)
And if you then open the particle scene itself, it has `is_emitting` disabled/false, but it IS emitting in the editor.
### Steps to reproduce
1. Create a `GPUParticles3D` with one-shot property, lifetime 10 seconds, and save it as its own scene.
2. Place the above scene in your main scene, any amount of times.
3. On the particle scene itself, enable `is_emitting` and close the scene.
4. Start/Play the main scene
5. Await for lifetime to end
6. Close the game, and open the `GPUParticles3D` scene
Or just open the MRP below, open `smoke_scene.tscn`, enable `is_emitting` and reduce the lifetime (so you dont wait 60 seconds), close the scene, run the main scene, await lifetime to end, then open `smoke_scene.tscn`
### Minimal reproduction project (MRP)
[smoke-grenade-room.zip](https://github.com/user-attachments/files/17277806/smoke-grenade-room.zip)
| bug,topic:editor,needs testing,topic:particles | low | Critical |
2,573,346,413 | pytorch | Add indication in symbols or FakeTensor whether ranges is set via mark_dynamic | ### ๐ The feature, motivation and pitch
Currently, when a tensor is marked dynamic via the mark_dynamic API, the range is reflected in the graph via ShapeEnv. However, there is no indication of whether it is a user-provided range or a default range. The implication is that it is hard to differentiate user-provided ranges from the default range [2, INT_MAX]. Different backends have different implementations of dynamic ranges, and it would be beneficial if a distinction were available between user-set ranges and default ranges. For instance, a backend may choose to use the user ranges and discard or manipulate the default ranges.
A tensor marked dynamic has an attribute `_dynamo_dynamic_range` set, which can be checked to confirm that the tensor's range was set by the user through the mark_dynamic API. However, this information is available during launch (`__call__`) and not `__init__` (although the graph module has ranges in its ShapeEnv). The attribute also has no relation to the symbols being used for that tensor and cannot easily be correlated with other inputs using the same symbol.
The ask here is to set an indication in Symbol whether the range is set externally or internally.
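A minimal sketch of what a provenance flag on symbol ranges could enable for a backend (the `ValueRange` class and `user_specified` field are hypothetical illustrations of the ask, not existing PyTorch APIs):

```python
from dataclasses import dataclass

INT_MAX = 2**63 - 1

@dataclass(frozen=True)
class ValueRange:
    """Hypothetical symbol range carrying provenance, so a backend can tell
    a user-provided bound (set via mark_dynamic) from the default [2, INT_MAX]."""
    lower: int
    upper: int
    user_specified: bool = False  # the indication this issue asks for

def backend_pick_range(rng: ValueRange):
    """Example backend policy: honor only user-set ranges; treat anything
    else as the unconstrained default."""
    return (rng.lower, rng.upper) if rng.user_specified else (2, INT_MAX)

print(backend_pick_range(ValueRange(1, 512, user_specified=True)))  # kept as-is
print(backend_pick_range(ValueRange(2, INT_MAX)))                   # default
```

With such a flag attached to the symbol itself (rather than to the input tensor), two inputs sharing the same symbol would automatically agree on whether their range is user-specified.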
### Alternatives
If a symbol could be queried for whether its range was set by the mark_dynamic API or is the default, backends could implement their logic efficiently.
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @bobrenjc93 | triaged,enhancement,oncall: pt2,module: dynamic shapes | low | Major |
2,573,405,762 | ollama | Raw mode in `/api/generate` should return eos tokens | ### What is the issue?
Currently setting `"raw": true` does not return end of sequence tokens such as EOS, EOM, EOT, etc.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug,api | low | Minor |
2,573,407,724 | three.js | Add Flag to Disable Rendering Object in Transmission Pass | ### Description
We render an object with the `MeshPhysicalMaterial` with `transmission` set to `1`. We allow users to apply clipping planes to that object. Because we want to render the "inside" of the object, we set the `side` to `DoubleSide`. However, we also want to render the object with a specific color.
Displaying the object with a certain color is tricky with this configuration though, because when `side` is set to `DoubleSide` the object will be rendered in the transmission pass with `side` set to `BackSide` and consequently transmit through itself, as it were. This is often desirable, I guess, but in our case it will distort the color in a way we do not want. We want to use double side rendering just to show the backsides of the object when clipping is applied.
Rendered with `FrontSide`:
<img alt="transmission_issue_w_front-side" src="https://github.com/user-attachments/assets/36a80bf2-86a3-472d-acd8-f5d504e24fc4" width="255">
Rendered with `DoubleSide`:
<img alt="transmission_issue_w_double-side" src="https://github.com/user-attachments/assets/5d89a006-2d86-4e6e-b9a3-f99b5b969b26" width="255">
You can also see the effect here: https://codesandbox.io/p/sandbox/2jpp8k (Modify line 32 to switch between front side and double side rendering)
An entry to the relevant code can be found here: https://github.com/mrdoob/three.js/blob/dev/src/renderers/WebGLRenderer.js#L1542
### Solution
Introduce a local (material-level) or a global flag to control whether objects with `transmission !== 0` and `side === DoubleSide` will be rendered in the transmission pass. The current behavior could be preserved by setting the flag to `true` by default.
If it is agreed upon that this is a desirable feature, I'd be willing to contribute a PR.
### Alternatives
We can work around the problem by setting the `drawRange.count` to 0 via the `onBeforeRender` for just the transmission pass, effectively hiding the object. This, however, has the shortcoming of still issuing a draw call during the transmission pass for the object.
### Additional context
_No response_ | Suggestion | low | Minor |
2,573,427,923 | excalidraw | Text outside of shape when rotating and using text align effects | https://github.com/user-attachments/assets/355a1462-6084-47a4-ad4c-9dea8465b3df
| bug | low | Major |
2,573,438,737 | PowerToys | New+ freezing on Context Menu (by Keyboard Key) | ### Microsoft PowerToys version
0.85.1
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
New+
### Steps to reproduce
In some situations, when I open the context menu via the keyboard (my keyboard has a key to activate this menu), focus is not available for keyboard navigation (up and down arrows); that is, the menu opens but remains stuck. I can only use it when I position the mouse on it so that focus is re-established, and with the mouse I can finish, but with the keyboard I can't.
I think the trigger is when you open the window and activate the menu without clicking on the container (where the files and folders are located).
### โ๏ธ Expected Behavior
The context menu opens with keyboard focus for keyboard use, as it always has in Windows.
### โ Actual Behavior
An unwanted freeze that can only be bypassed by mouse access while the menu is open!
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,573,444,374 | angular | Vite dev server crashes when HttpClient receives non-200 code | ### Command
serve
### Is this a regression?
- [x] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
v17
### Description
Basically, it's a duplicate of https://github.com/angular/angular-cli/issues/26192
But now it's here again with v19, zoneless and outputMode: 'server'.
Is this behaviour expected and not an issue on the new setup?
### Minimal Reproduction
ng serve
### Exception or Error
_No response_
### Your Environment
HttpErrorResponse
### Anything else relevant?
_No response_ | area: server,core: zoneless | low | Critical |
2,573,472,075 | ollama | use the macOS electron app for Windows and Linux | I don't understand why the electron app is only for macOS when electron is perfectly capable of running on Windows and Linux.
Features like #7097 could easily be adopted for all platforms if Electron were used.
2,573,477,188 | rust | ICE: `invalid immediate for given destination place: scalar value has wrong size` | <!--
[31mICE[0m: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_const_eval/src/interpret/operand.rs:119:17: 'assertion `left == right` failed: invalid immediate for given destination place: scalar value has wrong size'', 'thread 'rustc' panicked at compiler/rustc_const_eval/src/interpret/operand.rs:119:17: 'assertion `left == right` failed: invalid immediate for given destination place: scalar value has wrong size''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
trait Owner {
const C<const N: u32>: u32 = N;
}
fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
fn main() {
take0::<128>(());
}
impl Owner for () {
;
}
````
original:
````rust
trait Owner {
const C<const N: u32>: u32 = N;
}
fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
fn main() {
take0::<128>(());
}
impl Owner for () {
const C<const N: u32>: u32 = N;
}
fn take0<const N: u64>(_: impl <128>) {}
fn main(_: impl Owner<C<N> = { N }>) {
take0::<128>(());
}
````
Version information
````
rustc 1.83.0-nightly (6a3c45e1c 2024-10-08)
binary: rustc
commit-hash: 6a3c45e1c65e61b298fd6eaceac6d8ef4d973b66
commit-date: 2024-10-08
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
error: non-item in item list
--> /tmp/icemaker_global_tempdir.2w6assLgpoXM/rustc_testrunner_tmpdir_reporting.CKpee9Fi7WqK/mvce.rs:12:5
|
11 | impl Owner for () {
| - item list starts here
12 | ;
| ^ non-item starts here
13 | }
| - item list ends here
error[E0658]: associated const equality is incomplete
--> /tmp/icemaker_global_tempdir.2w6assLgpoXM/rustc_testrunner_tmpdir_reporting.CKpee9Fi7WqK/mvce.rs:5:38
|
5 | fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
| ^^^^^^^^^^^^
|
= note: see issue #92827 <https://github.com/rust-lang/rust/issues/92827> for more information
= help: add `#![feature(associated_const_equality)]` to the crate attributes to enable
= note: this compiler was built on 2024-10-08; consider upgrading it if it is out of date
error[E0658]: generic const items are experimental
--> /tmp/icemaker_global_tempdir.2w6assLgpoXM/rustc_testrunner_tmpdir_reporting.CKpee9Fi7WqK/mvce.rs:2:12
|
2 | const C<const N: u32>: u32 = N;
| ^^^^^^^^^^^^^^
|
= note: see issue #113521 <https://github.com/rust-lang/rust/issues/113521> for more information
= help: add `#![feature(generic_const_items)]` to the crate attributes to enable
= note: this compiler was built on 2024-10-08; consider upgrading it if it is out of date
error: the constant `N` is not of type `u32`
--> /tmp/icemaker_global_tempdir.2w6assLgpoXM/rustc_testrunner_tmpdir_reporting.CKpee9Fi7WqK/mvce.rs:5:38
|
5 | fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
| ^^^^^^^^^^^^ expected `u32`, found `u64`
|
note: required by a const generic parameter in `Owner::C`
--> /tmp/icemaker_global_tempdir.2w6assLgpoXM/rustc_testrunner_tmpdir_reporting.CKpee9Fi7WqK/mvce.rs:2:13
|
2 | const C<const N: u32>: u32 = N;
| ^^^^^^^^^^^^ required by this const generic parameter in `Owner::C`
thread 'rustc' panicked at compiler/rustc_const_eval/src/interpret/operand.rs:119:17:
assertion `left == right` failed: invalid immediate for given destination place: scalar value has wrong size
left: Size(8 bytes)
right: Size(4 bytes)
stack backtrace:
0: 0x7e41bbb3791a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h3a851883c61e7981
1: 0x7e41bc203466 - core::fmt::write::h71e264ea650e601d
2: 0x7e41bd422b51 - std::io::Write::write_fmt::h627c59d250fe240c
3: 0x7e41bbb37772 - std::sys::backtrace::BacktraceLock::print::h44906e04748e3e02
4: 0x7e41bbb39c46 - std::panicking::default_hook::{{closure}}::he278d98c128c4a9d
5: 0x7e41bbb39a90 - std::panicking::default_hook::h9cfc75667ebedffd
6: 0x7e41babea8df - std[584ae1ac58580d89]::panicking::update_hook::<alloc[ea383159db73a253]::boxed::Box<rustc_driver_impl[6a0c09598ae9c08a]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7e41bbb3a358 - std::panicking::rust_panic_with_hook::hb2fe5d026efb466f
8: 0x7e41bbb3a12a - std::panicking::begin_panic_handler::{{closure}}::h559274bc7def08cd
9: 0x7e41bbb37dc9 - std::sys::backtrace::__rust_end_short_backtrace::hd9444e85474e39c0
10: 0x7e41bbb39dec - rust_begin_unwind
11: 0x7e41b94dbe70 - core::panicking::panic_fmt::h7d898ac73934d0c0
12: 0x7e41ba5ed256 - core::panicking::assert_failed_inner::h3fd5e0c9c8a24468
13: 0x7e41bab45cde - core[12ec0f185bbc53d2]::panicking::assert_failed::<rustc_abi[81e5ba8ae6f0eb2a]::Size, rustc_abi[81e5ba8ae6f0eb2a]::Size>
14: 0x7e41bc998f01 - <rustc_const_eval[bcbc4c0dac6d4403]::interpret::eval_context::InterpCx<rustc_const_eval[bcbc4c0dac6d4403]::const_eval::machine::CompileTimeMachine>>::write_immediate_no_validate::<rustc_const_eval[bcbc4c0dac6d4403]::interpret::place::MPlaceTy>
15: 0x7e41bc992a14 - <rustc_const_eval[bcbc4c0dac6d4403]::interpret::eval_context::InterpCx<rustc_const_eval[bcbc4c0dac6d4403]::const_eval::machine::CompileTimeMachine>>::return_from_current_stack_frame
16: 0x7e41b9bdd4c9 - rustc_const_eval[bcbc4c0dac6d4403]::const_eval::eval_queries::eval_to_allocation_raw_provider
17: 0x7e41bc9a1236 - rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::eval_to_allocation_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 24usize]>>
18: 0x7e41bc9a0a5a - rustc_query_system[847c6ad2b9dd3ac1]::query::plumbing::try_execute_query::<rustc_query_impl[3daeaa2f12c0dbc5]::DynamicConfig<rustc_query_system[847c6ad2b9dd3ac1]::query::caches::DefaultCache<rustc_middle[6c2a42b363ada80b]::ty::ParamEnvAnd<rustc_middle[6c2a42b363ada80b]::mir::interpret::GlobalId>, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::QueryCtxt, false>
19: 0x7e41bc9a062f - rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::eval_to_allocation_raw::get_query_non_incr::__rust_end_short_backtrace
20: 0x7e41bc9ba6ab - rustc_const_eval[bcbc4c0dac6d4403]::const_eval::valtrees::eval_to_valtree
21: 0x7e41bc9ba4bf - <rustc_const_eval[bcbc4c0dac6d4403]::provide::{closure#0} as core[12ec0f185bbc53d2]::ops::function::FnOnce<(rustc_middle[6c2a42b363ada80b]::ty::context::TyCtxt, rustc_middle[6c2a42b363ada80b]::ty::ParamEnvAnd<rustc_middle[6c2a42b363ada80b]::mir::interpret::GlobalId>)>>::call_once
22: 0x7e41bc9ba476 - rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::eval_to_valtree::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 24usize]>>
23: 0x7e41bc9ba435 - <rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::eval_to_valtree::dynamic_query::{closure#2} as core[12ec0f185bbc53d2]::ops::function::FnOnce<(rustc_middle[6c2a42b363ada80b]::ty::context::TyCtxt, rustc_middle[6c2a42b363ada80b]::ty::ParamEnvAnd<rustc_middle[6c2a42b363ada80b]::mir::interpret::GlobalId>)>>::call_once
24: 0x7e41bc9a0b2e - rustc_query_system[847c6ad2b9dd3ac1]::query::plumbing::try_execute_query::<rustc_query_impl[3daeaa2f12c0dbc5]::DynamicConfig<rustc_query_system[847c6ad2b9dd3ac1]::query::caches::DefaultCache<rustc_middle[6c2a42b363ada80b]::ty::ParamEnvAnd<rustc_middle[6c2a42b363ada80b]::mir::interpret::GlobalId>, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::QueryCtxt, false>
25: 0x7e41bc9a044a - rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::eval_to_valtree::get_query_non_incr::__rust_end_short_backtrace
26: 0x7e41bcc18437 - rustc_middle[6c2a42b363ada80b]::query::plumbing::query_get_at::<rustc_query_system[847c6ad2b9dd3ac1]::query::caches::DefaultCache<rustc_middle[6c2a42b363ada80b]::ty::ParamEnvAnd<rustc_middle[6c2a42b363ada80b]::mir::interpret::GlobalId>, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 24usize]>>>
27: 0x7e41bcc17eaa - <rustc_middle[6c2a42b363ada80b]::ty::context::TyCtxt>::const_eval_global_id_for_typeck
28: 0x7e41bcc16dbe - <rustc_middle[6c2a42b363ada80b]::ty::context::TyCtxt>::const_eval_resolve_for_typeck
29: 0x7e41bcc16a05 - <rustc_middle[6c2a42b363ada80b]::ty::consts::Const>::normalize
30: 0x7e41bcab6d26 - rustc_trait_selection[1c2265ed8fe6d14c]::traits::project::opt_normalize_projection_term
31: 0x7e41bca9cfdc - rustc_trait_selection[1c2265ed8fe6d14c]::traits::project::poly_project_and_unify_term
32: 0x7e41bc2662a8 - <rustc_trait_selection[1c2265ed8fe6d14c]::traits::select::SelectionContext>::evaluate_root_obligation
33: 0x7e41bc264292 - rustc_traits[e9b239c6de819adf]::evaluate_obligation::evaluate_obligation
34: 0x7e41bc263d29 - rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::evaluate_obligation::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 2usize]>>
35: 0x7e41bc2632d3 - rustc_query_system[847c6ad2b9dd3ac1]::query::plumbing::try_execute_query::<rustc_query_impl[3daeaa2f12c0dbc5]::DynamicConfig<rustc_query_system[847c6ad2b9dd3ac1]::query::caches::DefaultCache<rustc_type_ir[c34a23cd41056f95]::canonical::Canonical<rustc_middle[6c2a42b363ada80b]::ty::context::TyCtxt, rustc_middle[6c2a42b363ada80b]::ty::ParamEnvAnd<rustc_middle[6c2a42b363ada80b]::ty::predicate::Predicate>>, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::QueryCtxt, false>
36: 0x7e41bc262f24 - rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::evaluate_obligation::get_query_non_incr::__rust_end_short_backtrace
37: 0x7e41b8a3c01e - <rustc_infer[a418d8929ad58d0a]::infer::InferCtxt as rustc_trait_selection[1c2265ed8fe6d14c]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation_no_overflow
38: 0x7e41b88c32b9 - <rustc_trait_selection[1c2265ed8fe6d14c]::traits::fulfill::FulfillProcessor>::process_projection_obligation
39: 0x7e41b88ab544 - <rustc_trait_selection[1c2265ed8fe6d14c]::traits::fulfill::FulfillProcessor as rustc_data_structures[71eb2d5d27abd3bf]::obligation_forest::ObligationProcessor>::process_obligation
40: 0x7e41bc20f112 - <rustc_data_structures[71eb2d5d27abd3bf]::obligation_forest::ObligationForest<rustc_trait_selection[1c2265ed8fe6d14c]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[1c2265ed8fe6d14c]::traits::fulfill::FulfillProcessor>
41: 0x7e41b89d1939 - <rustc_hir_typeck[5f24e1131155eb7]::fn_ctxt::FnCtxt>::confirm_builtin_call
42: 0x7e41bcea85cf - <rustc_hir_typeck[5f24e1131155eb7]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
43: 0x7e41bcea1a3e - <rustc_hir_typeck[5f24e1131155eb7]::fn_ctxt::FnCtxt>::check_block_with_expected
44: 0x7e41bcea92e9 - <rustc_hir_typeck[5f24e1131155eb7]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
45: 0x7e41bc549ae1 - rustc_hir_typeck[5f24e1131155eb7]::check::check_fn
46: 0x7e41bc53e861 - rustc_hir_typeck[5f24e1131155eb7]::typeck
47: 0x7e41bc53e1cf - rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 8usize]>>
48: 0x7e41bc64dfba - rustc_query_system[847c6ad2b9dd3ac1]::query::plumbing::try_execute_query::<rustc_query_impl[3daeaa2f12c0dbc5]::DynamicConfig<rustc_query_system[847c6ad2b9dd3ac1]::query::caches::VecCache<rustc_span[ff2bb122df6010f7]::def_id::LocalDefId, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::QueryCtxt, false>
49: 0x7e41bc64cd1b - rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
50: 0x7e41bc64c9a1 - <rustc_middle[6c2a42b363ada80b]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[944f64ea10dfd453]::check_crate::{closure#4}>::{closure#0}
51: 0x7e41bc64a8db - rustc_hir_analysis[944f64ea10dfd453]::check_crate
52: 0x7e41bc647317 - rustc_interface[6c180d597a38bbca]::passes::run_required_analyses
53: 0x7e41bcf4b41e - rustc_interface[6c180d597a38bbca]::passes::analysis
54: 0x7e41bcf4b3f1 - rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 1usize]>>
55: 0x7e41bd1041ee - rustc_query_system[847c6ad2b9dd3ac1]::query::plumbing::try_execute_query::<rustc_query_impl[3daeaa2f12c0dbc5]::DynamicConfig<rustc_query_system[847c6ad2b9dd3ac1]::query::caches::SingleCache<rustc_middle[6c2a42b363ada80b]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3daeaa2f12c0dbc5]::plumbing::QueryCtxt, false>
56: 0x7e41bd103ecf - rustc_query_impl[3daeaa2f12c0dbc5]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
57: 0x7e41bcf610de - rustc_interface[6c180d597a38bbca]::interface::run_compiler::<core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>, rustc_driver_impl[6a0c09598ae9c08a]::run_compiler::{closure#0}>::{closure#1}
58: 0x7e41bd00b790 - std[584ae1ac58580d89]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[6c180d597a38bbca]::util::run_in_thread_with_globals<rustc_interface[6c180d597a38bbca]::util::run_in_thread_pool_with_globals<rustc_interface[6c180d597a38bbca]::interface::run_compiler<core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>, rustc_driver_impl[6a0c09598ae9c08a]::run_compiler::{closure#0}>::{closure#1}, core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>>::{closure#0}, core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>>
59: 0x7e41bd00be57 - <<std[584ae1ac58580d89]::thread::Builder>::spawn_unchecked_<rustc_interface[6c180d597a38bbca]::util::run_in_thread_with_globals<rustc_interface[6c180d597a38bbca]::util::run_in_thread_pool_with_globals<rustc_interface[6c180d597a38bbca]::interface::run_compiler<core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>, rustc_driver_impl[6a0c09598ae9c08a]::run_compiler::{closure#0}>::{closure#1}, core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>>::{closure#0}, core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[12ec0f185bbc53d2]::result::Result<(), rustc_span[ff2bb122df6010f7]::ErrorGuaranteed>>::{closure#1} as core[12ec0f185bbc53d2]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
60: 0x7e41bd00cd41 - std::sys::pal::unix::thread::Thread::new::thread_start::h54d489bd9073b86f
61: 0x7e41be75139d - <unknown>
62: 0x7e41be7d649c - <unknown>
63: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.83.0-nightly (6a3c45e1c 2024-10-08) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [eval_to_allocation_raw] const-evaluating + checking `Owner::C`
#1 [eval_to_valtree] evaluating type-level constant
end of query stack
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0658`.
```
</p>
</details>
<!--
query stack:
#0 [eval_to_allocation_raw] const-evaluating + checking `Owner::C`
#1 [eval_to_valtree] evaluating type-level constant
-->
| I-ICE,T-compiler,C-bug,requires-nightly,S-bug-has-test,F-associated_const_equality,F-generic_const_items | low | Critical |
2,573,480,671 | flutter | Compact Visual Density is wrongfully applied to Checkboxes with `MaterialTapTargetSize.padded` on desktop platforms according to Material 3 Guidelines | ### Steps to reproduce
Create a simple widget tree with a checkbox that has a `materialTapTargetSize: MaterialTapTargetSize.padded`.
Run it on web.
Inspect its size.
### Expected results
According to M3 guidelines, even if visual density is compact:
```
It's important to keep accessibility in mind when you're applying density to your UI. No matter the density, all touch targets should be at least `48px` in size.
```
https://m3.material.io/blog/material-density-web
So a Checkbox should always have a 48x48 touch target. Which does not happen if we use the default VisualDensity for desktop platforms.
### Actual results
Because a compact Visual Density is applied by default on desktop platforms, components' touch targets (such as the Checkbox's) are shrunk to sizes that are not accessible according to M3 (from 48 to 40, for example).
Additionally, this default compact visual density assumes that all UI in a desktop app (or desktop web app) is going to be very visually dense, which is not always the case.
Is this an implementation mistake by the Flutter team? Are the M3 docs outdated?
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Checkbox(
materialTapTargetSize: MaterialTapTargetSize.padded,
value: true,
onChanged: (_) {},
),
),
);
}
}
```
</details>
### Screenshots or Video
Checkbox size without overriding default Visual Density (which is `VisualDensity.compact`):

Checkbox size by overriding default Visual Density with `VisualDensity.standard`:

### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
~/repositories/checkbox_visual_density$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel [user-branch], 3.22.0, on macOS 14.6.1 23G93 darwin-arm64, locale en-PT)
! Flutter version 3.22.0 on channel [user-branch] at /opt/homebrew/Caskroom/flutter/3.0.4/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/docs/get-started/install/macos#android-setup for more details.
[!] Xcode - develop for iOS and macOS (Xcode 16.0)
! iOS 18.0 Simulator not installed; this may be necessary for iOS and macOS development.
To download and install the platform, open Xcode, select Xcode > Settings > Platforms,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
[✓] Chrome - develop for the web
[✓] Android Studio (version 2021.2)
[✓] IntelliJ IDEA Ultimate Edition (version 2022.1.3)
[✓] VS Code (version 1.92.2)
[✓] Connected device (5 available)
[✓] Network resources
! Doctor found issues in 3 categories.
```
</details>
| framework,f: material design,a: desktop,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.26 | low | Critical |
2,573,482,429 | tensorflow | fatal error: 'NEON_2_SSE.h' file not found - macOS x86_64 build tensorflowlite_c library | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
macOS
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
AppleClang 15.0.0.15000309
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When trying to build the tensorflowlite_c lib according to [this](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/build_cmake.md) guide, the build fails on the macOS x86_64 platform with the following error code:
```output
In file included from /Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/delegates/xnnpack/quantization_util.cc:21:
In file included from /Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/kernels/internal/optimized/optimized_ops.h:32:
In file included from /Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/kernels/internal/common.h:35:
/Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/kernels/internal/optimized/neon_check.h:25:10: fatal error: 'NEON_2_SSE.h' file not found
#include "NEON_2_SSE.h" // IWYU pragma: export
^~~~~~~~~~~~~~
1 error generated.
```
### Standalone code to reproduce the issue
The error happens when I build using GitHub workflows. [This](https://github.com/faressc/tflite-c-lib/actions/runs/11238016239/job/31241774171) is the action run.
### Relevant log output
```shell
In file included from /Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/delegates/xnnpack/quantization_util.cc:21:
In file included from /Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/kernels/internal/optimized/optimized_ops.h:32:
In file included from /Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/kernels/internal/common.h:35:
/Users/runner/work/tflite-c-lib/tflite-c-lib/tensorflow/tensorflow/lite/kernels/internal/optimized/neon_check.h:25:10: fatal error: 'NEON_2_SSE.h' file not found
#include "NEON_2_SSE.h" // IWYU pragma: export
^~~~~~~~~~~~~~
1 error generated.
```
| stat:awaiting tensorflower,type:build/install,comp:lite,subtype:macOS,2.17 | low | Critical |
2,573,599,383 | go | internal/trace: TestTraceStress/Stress failures | ```
#!watchflakes
default <- pkg == "internal/trace" && test == "TestTraceStress/Stress"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8734639118266472945)):
=== RUN TestTraceStress/Stress
exec.go:213: test timed out while running command: /home/swarming/.swarming/w/ir/x/w/goroot/bin/go run testdata/testprog/stress.go
trace_test.go:610: signal: killed
--- FAIL: TestTraceStress/Stress (828.23s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,573,628,107 | PowerToys | V0.85.1 does not install properly | ### Microsoft PowerToys version
V0.85.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
When I opened General -> Check for Updates it did NOT show that V0.85.1 was available.
Yet, I had received email alerts that this new version - bug fixes - was available.
You need to check that this "Check for Updates" feature works before releasing it to the public!!
### ✔️ Expected Behavior
Isn't this obvious to you??
When I "Check for Updates" I expect to see one of two responses:
1: Update available
2: You are already on latest release.
### ❌ Actual Behavior
Nothing happened!!
No responses whatsoever when I clicked on "Check for Updates"
BTW: why do you think I am reporting this?? Duh!!
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,573,633,828 | electron | [Bug]: Unfocusable panel does not respond to hover if it has a parent | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
macOS
### Operating System Version
15.1 Beta
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
1. create a BrowserWindow with `type: "panel"`, `focusable: false`, and `parent: someOtherWindow`.
2. Hover over elements that have hover styles.
3. Hover styles should be displayed
### Actual Behavior
1. create a BrowserWindow with `type: "panel"`, `focusable: false`, and `parent: someOtherWindow`.
2. Hover over elements that have hover styles.
3. Hover styles are not displayed
### Testcase Gist URL
https://gist.github.com/e303ffd23323d240700cb12b681963e9
### Additional Information
_No response_ | platform/macOS,bug :beetle:,status/confirmed,component/BrowserWindow,has-repro-gist,32-x-y,33-x-y | low | Critical |
2,573,634,383 | vscode | Signature verification failed with 'PackageIntegrityCheckFailed' error. |
Type: <b>Bug</b>
Ever since upgrading to the most recent release of vscode I cannot install or upgrade any vscode extensions.
Cannot install ... extension because Visual Studio Code cannot verify the extension signature
Signature verification failed with 'PackageIntegrityCheckFailed' error.
VS Code version: Code 1.94.0 (Universal) (d78a74bcdfad14d5d3b1b782f87255d802b57511, 2024-10-02T13:08:12.626Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|32.00GB (1.83GB free)|
|Process Argv|--crash-reporter-id dfc65e10-fbb4-4b6c-919e-dce0edfa49eb|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (23)</summary>
Extension|Author (truncated)|Version
---|---|---
unique-lines|bib|1.0.0
ruff|cha|2024.50.0
gitlens|eam|15.5.1
terraform|has|2.32.3
vscode-duplicate|mrm|1.2.1
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.10.0
isort|ms-|2023.10.1
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.2
datawrangler|ms-|1.10.0
jupyter|ms-|2024.8.1
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
remote-ssh|ms-|0.114.3
remote-ssh-edit|ms-|0.87.0
remote-explorer|ms-|0.4.3
vscode-thunder-client|ran|2.27.0
vscode-xml|red|0.27.1
vscode-yaml|red|1.15.0
stardog-rdf-grammars|sta|0.2.1
even-better-toml|tam|0.19.2
gistfs|vsl|0.6.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
nativeloc2:31134642
wkspc-ranged-t:31151552
cf971741:31144450
defaultse:31146405
iacca2:31150323
notype1:31151523
cc771715:31146322
```
</details>
<!-- generated by issue reporter --> | bug,extension-signature | medium | Critical |
2,573,661,090 | excalidraw | Word search highlighting is flipped in RTL languages | The highlighting is in the wrong direction in RTL langs:

| bug | low | Major |
2,573,678,651 | PowerToys | Please remove the artificial smooth scrolling from the Powertoys user interface | ### Description of the new feature / enhancement
When scrolling in the PowerToys user interface, there is an annoying smooth scrolling effect that prevents fast and exact scrolling.
Power users usually hate it when smooth scrolling is forced upon them, at least if it's not configurable or can't be switched off.
### Scenario when this would be used?
Just use the type of scrolling that the user has set up in their Windows performance settings, and don't override it with something you consider more appropriate. If reading the Windows settings is not possible, just don't use smooth scrolling at all, which is the standard behavior of everything anyway.

### Supporting information
https://simonscodes.blogspot.com/2014/12/hello.html
https://github.com/simonzack/rich_edit_scroll
https://stackoverflow.com/questions/29683527/laggy-slow-mouse-wheel-scrolling-in-rich-edit-control-how-to-fix-this
https://www.pcreview.co.uk/threads/reward-how-the-can-i-disable-smooth-scrolling-in-wordpad.2817099
| Needs-Triage,Needs-Team-Response | low | Major |
2,573,688,407 | deno | Deno 2.0 RC Fails to Compile to Binary on Windows for Non-Administrator Users | Version: deno 2.0.0-rc.10
v8 12.9.202.13-rusty
typescript 5.6.2
When trying to compile a simple Deno script (`console.log("deno 2.0");`) into a binary on Windows, non-administrator users encounter the error below, which prevents successful compilation.
<pre>
deno compile --output test.exe consolelog.ts
Check file:///C:/Users/User/workspace/work2024/consolelog.ts
Compile file:///C:/Users/User/workspace/work2024/consolelog.ts to test.exe
error: Writing temporary file 'test.exe.tmp-0468b123c82c0498'
Caused by:
0: Failed reading: C:\Users\User\Application Data
1: Access is denied. (os error 5)
</pre> | needs info | low | Critical |
2,573,690,063 | deno | `deno serve` should support `onListen` callback | Currently with `Deno.serve` you're able to know on which address you're listening to with `onListen`, but `deno serve` does not implement it:
```ts
export default {
onListen:() => { /* call me maybe */ },
fetch:() => new Response()
}
```
Use-case:
- knowing which port was chosen (useful when `--port 0`)
- dynamically set `--location` to allow `fetch()` on self
- post-listening setup
- etc | feat,serve | low | Minor |
2,573,722,007 | PowerToys | New+ enhanced feature request - multiple folders at once. | ### Description of the new feature / enhancement
I have to create multiple folders

(e.g. S00, S01, S02, etc.) It would be nice to be able to use New+ to create those folders automatically. The way New+ seems to work now I'd have to put these folders inside a folder, then I'm just cutting and pasting those out of there manually (which I do already). It would be even better being able to highlight/select which folders to have New+ create. Because you may need 1 thru 4 or 0 thru 9, etc.
### Scenario when this would be used?
For things like TV shows when there are multiple seasons.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,573,756,435 | rust | High memory usage with cargo build in match statement | <!--
Thank you for finding an Internal Compiler Error! ๐ง If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
The memory usage gets high really quickly, which causes my PC to crash.
### Code
Full git repo: https://github.com/Joshix-1/cargo-memory-leak
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=03d0fd2c2ceeb588e5297389a01f9fa9
```Rust
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
#[repr(u8)]
pub enum FieldType {
Air,
Wood,
SandSource,
BlackHole,
SandC0, SandC1, SandC2, SandC3, SandC4, SandC5, SandC6, SandC7,
WaterC0, WaterC1, WaterC2, WaterC3, WaterC4, WaterC5, WaterC6, WaterC7,
}
macro_rules! sand { () => {
FieldType::SandC0 | FieldType::SandC1 | FieldType::SandC2
| FieldType::SandC3 | FieldType::SandC4 | FieldType::SandC5
| FieldType::SandC6 | FieldType::SandC7
}; }
macro_rules! water { () => {
FieldType::WaterC0 | FieldType::WaterC1 | FieldType::WaterC2
| FieldType::WaterC3 | FieldType::WaterC4 | FieldType::WaterC5
| FieldType::WaterC6 | FieldType::WaterC7
}; }
macro_rules! falls {() => { sand!() | water!() };}
macro_rules! not_solid {
() => { FieldType::Air | water!() };
}
impl FieldType {
pub const fn is_sand(self) -> bool { matches!(self, sand!()) }
pub const fn is_water(self) -> bool { matches!(self, water!()) }
}
fn main() {
let arr = &mut [FieldType::Air; 4];
let [ref mut a, ref mut b, ref mut c, ref mut d] = arr;
let cell: (
(&mut FieldType, &mut FieldType),
(&mut FieldType, &mut FieldType),
) = ((a, b), (c, d));
match cell {
(
(sand0 @ falls!(), sand1 @ falls!()),
(unsolid0 @ not_solid!(), unsolid1 @ not_solid!()),
// compiles without the if in the next line
) if unsolid0.is_water() != sand0.is_water() || unsolid1.is_water() != sand1.is_water() => {
if unsolid0.is_water() != sand0.is_water() {
(*sand0, *unsolid0) = (*unsolid0, *sand0);
}
if unsolid1.is_water() != sand1.is_water() {
(*sand1, *unsolid1) = (*unsolid1, *sand1);
}
}
_ => {}
}
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc --version --verbose
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
also on playground, nightly and stable
### Error output
```sh
$ env RUST_BACKTRACE=1 cargo build 2> build.err > build.out
# crash of xserver
$ du -bs build.*
49 build.err
0 build.out
$ cat build.err
Compiling sandrs v0.1.1 (/tmp/tmp.aULMeIa66O)
```
| I-crash,T-compiler,I-compilemem,C-bug,A-patterns | low | Critical |
2,573,758,211 | Python | Request to review my Pull Request which was created a week ago (#11671) | ### What would you like to share?
I created a pull request for the Implementation of Density-based spatial clustering of applications with noise (DBSCAN) ML Algorithm (#11671). It's been a week and I still haven't received any approval or response, so I can't tell whether any changes to the code are needed. Hence, I kindly request the maintainers to look into my PR and give your valuable feedback so I can make the necessary changes and then merge it into the main branch. Thanks for your time.
### Additional information
_No response_ | awaiting triage | medium | Minor |
2,573,776,637 | kubernetes | Replace admission policy EscalationAllowed use with more targeted operation | For ValidatingAdmissionPolicy and MutatingAdmissionPolicy, we misuse `EscalationAllowed` slightly to perform what is logically an `isSystemAdmin` check. We should make it more explicit.
xref: https://github.com/kubernetes/kubernetes/pull/127134#pullrequestreview-2351878776 | sig/api-machinery,sig/auth,triage/accepted | low | Major |
2,573,783,219 | godot | Editor Window width can't be resized down when using a RTL language. | ### Tested versions
4.4.dev
### System information
Windows 10
### Issue description
https://github.com/user-attachments/assets/4628dd62-5e3f-41f6-9c4b-76462521dffe
### Steps to reproduce
Choose any RTL language then restart the editor.
Disabling the root `Window` `wrap_controls` fixes it.
### Minimal reproduction project (MRP)
N/A. | bug,needs testing,topic:gui | low | Minor |
2,573,814,563 | TypeScript | Language Service quickinfo always shows destructured properties as having an `any` type | ### ๐ Search Terms
property rename destructured destructuring any quickinfo
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about n/a
### โฏ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.2#code/MYewdgzgLgBA3gDwFwwJ4F8YF57JgRnQG4AoAejJiuppoD0B+EoA
### ๐ป Code
```ts
const {x} = {x: 1};
// ^? (property) x: any
```
### ๐ Actual behavior
Hovering over a property that's destructured shows `(property) x: any` in quickinfo.
### ๐ Expected behavior
I'd expect hovering over the `x` in the object (`{x: 1}`) to show `(property) x: number` in quickinfo.
Hovering over the variable that the destructuring creates (the `{x}` in my example) shows `const x: number`, which makes sense. If you assign to an object instead of destructuring, this is exactly what you see:
```ts
const obj = {x: 1};
// ^? (property) x: number
```
### Additional information about the issue
There are some _really_ old closed issues that seem related: #1845 and #2024.
This may be related to #56980.
This is pretty inconsequential in ordinary usage, but it becomes quite annoying with my [any-xray extension](https://github.com/danvk/any-xray/):
<img width="449" alt="image" src="https://github.com/user-attachments/assets/68fc9952-a99a-43de-930b-17bc55570bee">
| Bug,Help Wanted | low | Minor |
2,573,852,298 | pytorch | RuntimeError: Backend nccl does not support allgather_into_tensor_coalesced | ### ๐ Describe the bug
I am using torchtune and receive the error in the title whenever it goes to save the model. I created an issue in their repo (https://github.com/pytorch/torchtune/issues/1762), but it seems to me to be a PyTorch issue. I've seen this with both 2.4.1+cu124 and the nightly version:
```
python -c "import torch; print(torch.__version__); print(torch.cuda.nccl.version())"
2.6.0.dev20241008+cu124
(2, 21, 5)
```
The following is the command I'm running and the traceback:
```sh
TORCH_CPP_LOG_LEVEL=INFO TORCH_DISTRIBUTED_DEBUG=DETAIL NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=INIT,ENV,GRAPH,COLL NCCL_SOCKET_IFNAME="eth0,en,eth,em,bond" tune run --nnodes 1 --nproc_per_node 3 lora_finetune_distributed --config ./recipes/mm_llama2_7B_lora.yaml
```
```sh
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/recipes/lora_finetune_distributed.py", line 862, in <module>
[rank0]: sys.exit(recipe_main())
[rank0]: ^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torchtune/config/_parse.py", line 99, in wrapper
[rank0]: sys.exit(recipe_main(conf))
[rank0]: ^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/recipes/lora_finetune_distributed.py", line 857, in recipe_main
[rank0]: recipe.train()
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/recipes/lora_finetune_distributed.py", line 823, in train
[rank0]: self.save_checkpoint(epoch=curr_epoch)
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/recipes/lora_finetune_distributed.py", line 618, in save_checkpoint
[rank0]: cpu_state_dict = training.get_full_model_state_dict(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torchtune/training/_distributed.py", line 424, in get_full_model_state_dict
[rank0]: full_param = sharded_param.full_tensor()
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/_tensor/api.py", line 511, in full_tensor
[rank0]: redist_res = self.redistribute(
[rank0]: ^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/_tensor/api.py", line 483, in redistribute
[rank0]: return Redistribute.apply(self, device_mesh, placements, async_op)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/autograd/function.py", line 574, in apply
[rank0]: return super().apply(*args, **kwargs) # type: ignore[misc]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/_tensor/_redistribute.py", line 282, in forward
[rank0]: output = redistribute_local_tensor(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/_tensor/_redistribute.py", line 188, in redistribute_local_tensor
[rank0]: new_local_tensor = current_placement._to_replicate_tensor(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/_tensor/placement_types.py", line 234, in _to_replicate_tensor
[rank0]: result = funcol.all_gather_tensor(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py", line 203, in all_gather_tensor
[rank0]: tensor = torch.ops._c10d_functional.all_gather_into_tensor(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zak_jost/lib/python3.11/site-packages/torch/_ops.py", line 1061, in __call__
[rank0]: return self_._op(*args, **(kwargs or {}))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: Backend nccl does not support allgather_into_tensor_coalesced
1|2|Loss: 1.743589997291565: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [13:55<00:00, 417.70s/it]
[I1007 22:53:00.305572040 TCPStoreLibUvBackend.cpp:115] [c10d - debug] Read callback failed. code:-4095 name:EOF desc:end of file
[I1007 22:53:00.335351548 TCPStoreLibUvBackend.cpp:115] [c10d - debug] Read callback failed. code:-4095 name:EOF desc:end of file
[I1007 22:53:00.370470289 TCPStoreLibUvBackend.cpp:115] [c10d - debug] Read callback failed. code:-4095 name:EOF desc:end of file
W1007 22:53:00.790000 139972489500480 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 8908 closing signal SIGTERM
E1007 22:53:00.904000 139972489500480 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 8907) of binary: /home/zak_jost/bin/python3.11
Traceback (most recent call last):
File "/home/zak_jost/bin/tune", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/zak_jost/lib/python3.11/site-packages/torchtune/_cli/tune.py", line 49, in main
parser.run(args)
File "/home/zak_jost/lib/python3.11/site-packages/torchtune/_cli/tune.py", line 43, in run
args.func(args)
File "/home/zak_jost/lib/python3.11/site-packages/torchtune/_cli/run.py", line 194, in _run_cmd
self._run_distributed(args, is_builtin=is_builtin)
File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/zak_jost/lib/python3.11/site-packages/torchtune/_cli/run.py", line 95, in _run_distributed
run(args)
File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zak_jost/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/home/zak_jost/lib/python3.11/site-packages/recipes/lora_finetune_distributed.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-10-07_22:53:00
host : zak-jost-ray-training
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 8909)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-10-07_22:53:00
host : zak-jost-ray-training
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 8907)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
[I1007 22:53:00.811083325 TCPStoreLibUvBackend.cpp:115] [c10d - debug] Read callback failed. code:-4095 name:EOF desc:end of file
[I1007 22:53:00.811169886 TCPStoreLibUvBackend.cpp:1002] [c10d - debug] Store exit requested
[I1007 22:53:00.811187377 TCPStoreLibUvBackend.cpp:1070] [c10d - debug] UV main loop done: res:1
[I1007 22:53:00.811200287 TCPStoreLibUvBackend.cpp:1076] [c10d - debug] Walking live handles prior to closing clients
[I1007 22:53:00.811211947 TCPStoreLibUvBackend.cpp:1059] [c10d - debug] UV live handle type 12 active:1 is-closing:0
[I1007 22:53:00.811221437 TCPStoreLibUvBackend.cpp:1086] [c10d - debug] Walking live handles after closing clients
[I1007 22:53:00.811232467 TCPStoreLibUvBackend.cpp:1059] [c10d - debug] UV live handle type 12 active:0 is-closing:1
[I1007 22:53:00.811243977 TCPStoreLibUvBackend.cpp:1095] [c10d] uv_loop_close failed with:-16 errn:EBUSY desc:resource busy or locked
[I1007 22:53:00.811275138 TCPStoreLibUvBackend.cpp:1105] [c10d] uv_loop cleanup finished.
```
### Versions
```sh
Collecting environment information...
PyTorch version: 2.6.0.dev20241008+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 13.3.0-1) 13.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Sep 30 2024, 18:08:57) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.149-99.162.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R32
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
BogoMIPS: 5599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 96 MiB (6 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] galore-torch==1.0
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241008+cu124
[pip3] torchao==0.5.0
[pip3] torchaudio==2.5.0.dev20241008+cu124
[pip3] torchmetrics==1.4.2
[pip3] torchtune==0.4.0.dev20241008+cpu
[pip3] torchvision==0.20.0.dev20241008+cu124
[pip3] triton==3.0.0
[conda] galore-torch 1.0 pyhd8ed1ab_1 conda-forge
[conda] libmagma 2.7.2 h173bb3b_2 conda-forge
[conda] libmagma_sparse 2.7.2 h173bb3b_3 conda-forge
[conda] libopenvino-pytorch-frontend 2024.4.0 h5888daf_0 conda-forge
[conda] libtorch 2.3.1 cuda120_h2b0da52_300 conda-forge
[conda] mkl 2023.2.0 h84fe81f_50496 conda-forge
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241008+cu124 pypi_0 pypi
[conda] torchao 0.5.0 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241008+cu124 pypi_0 pypi
[conda] torchmetrics 1.4.2 pyhd8ed1ab_0 conda-forge
[conda] torchtune 0.4.0.dev20241008+cpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20241008+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,573,858,543 | godot | WorldEnvironment not rendering | ### Tested versions
- Reproducible in 4.3.stable
- Not reproducible in 4.2.1.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (NVIDIA; 32.0.15.6109) - 13th Gen Intel(R) Core(TM) i9-13900K (32 Threads)
### Issue description
I have a project with a WorldEnvironment node in the (3D) scene tree.
In Godot 4.2.1, the result looks as follows:

In 4.3, loading the same project results in this:

I have tried:
- using the new D3D12 renderer
- creating a new scene with just a camera and a WorldEnvironment node
- using different HDRI files, both `.hdr` and `.exr`
- making sure that none of the cameras had their own dedicated WorldEnvironment that could override the node
- using a `ProceduralSkyMaterial` instead
None of which fixed the problem - the sky never renders.
Interestingly, the lighting does seem to be affected by the `Energy Multiplier` (although I attribute that to the `Ambient Light` - the `Sky Contribution` slider does nothing).
Creating a new project and adding a WorldEnvironment node works fine - it's only when I load my 4.2.1 project with Godot 4.3
Note: I am running the latest NVIDIA Studio Driver.
### Steps to reproduce
Open the MRP in Godot 4.3 and run it - you should see a black screen, despite the scene having a single (current) camera and a configured WorldEnvironment node.
### Minimal reproduction project (MRP)
[krisp_dev_minimal.zip](https://github.com/user-attachments/files/17297387/krisp_dev_minimal.zip)
| enhancement,discussion,topic:editor | low | Minor |
2,573,865,394 | rust | Compiling `cranelift-codegen` with `-Z next-solver` is very slow | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code: https://github.com/bytecodealliance/wasmtime/ at commit https://github.com/bytecodealliance/wasmtime/commit/201b7b4ce4c5946cce8d5cd90ad2b64440864d8b.
```console
git clone git@github.com:bytecodealliance/wasmtime.git
cd wasmtime
git checkout 201b7b4ce4c5946cce8d5cd90ad2b64440864d8b
cd cranelift/codegen
RUSTFLAGS="-Z next-solver" cargo build
```
I expected to see this happen: `cargo build` should succeed in a couple of seconds (which it does without `-Z next-solver`).
Instead, this happened: A clean rebuild with`-Z next-solver` takes 10x longer than without.
These are the hottest functions according to `perf top`:
```
6,25% librustc_driver-57fe6e1841a504ec.so [.] <rustc_middle::ty::context::TyCtxt>::mk_args
5,49% librustc_driver-57fe6e1841a504ec.so [.] <rustc_next_trait_solver::solve::eval_ctxt::EvalCtxt<rustc_trait_selection::solve::delegate::SolverDelegate, rustc_middle::ty::context::TyCtxt>>::evaluate_goal_raw
5,08% librustc_driver-57fe6e1841a504ec.so [.] <rustc_middle::ty::context::CtxtInterners>::intern_ty
4,16% librustc_driver-57fe6e1841a504ec.so [.] <rustc_middle::ty::context::CtxtInterners>::intern_predicate
3,80% librustc_driver-57fe6e1841a504ec.so [.] <rustc_next_trait_solver::canonicalizer::Canonicalizer<rustc_trait_selection::solve::delegate::SolverDelegate, rustc_middle::ty::context::TyCtxt> as rustc_type_ir::fold::TypeFolder<rustc_middle::ty::context::TyCtxt>>::fold_ty
2,74% librustc_driver-57fe6e1841a504ec.so [.] <rustc_next_trait_solver::resolve::EagerResolver<rustc_trait_selection::solve::delegate::SolverDelegate, rustc_middle::ty::context::TyCtxt> as rustc_type_ir::fold::FallibleTypeFolder<rustc_middle::ty::context::TyCtxt>>::try_fold_predicate
2,39% librustc_driver-57fe6e1841a504ec.so [.] <rustc_next_trait_solver::canonicalizer::Canonicalizer<rustc_trait_selection::solve::delegate::SolverDelegate, rustc_middle::ty::context::TyCtxt>>::finalize
2,08% librustc_driver-57fe6e1841a504ec.so [.] <rustc_next_trait_solver::canonicalizer::Canonicalizer<rustc_trait_selection::solve::delegate::SolverDelegate, rustc_middle::ty::context::TyCtxt> as rustc_type_ir::fold::FallibleTypeFolder<rustc_middle::ty::context::TyCtxt>>::try_fold_predicate
1,94% librustc_driver-57fe6e1841a504ec.so [.] <rustc_trait_selection::solve::fulfill::FulfillmentCtxt<rustc_trait_selection::traits::FulfillmentError> as rustc_infer::traits::engine::TraitEngine<rustc_trait_selection::traits::FulfillmentError>>::select_where_possible
1,83% librustc_driver-57fe6e1841a504ec.so [.] <hashbrown::raw::RawTable<((rustc_type_ir::DebruijnIndex, rustc_middle::ty::Ty), rustc_middle::ty::Ty)>>::reserve_rehash::<hashbrown::map::make_hasher<(rustc_type_ir::DebruijnIndex, rustc_middle::ty::Ty), rustc_middle::ty::Ty, core::hash::BuildHasherDefault<rustc_hash::FxHasher>>::{closure#0}>
1,81% librustc_driver-57fe6e1841a504ec.so [.] <rustc_middle::ty::context::TyCtxt>::mk_canonical_var_infos
1,52% librustc_driver-57fe6e1841a504ec.so [.] <rustc_middle::ty::context::TyCtxt>::mk_predefined_opaques_in_body
1,37% librustc_driver-57fe6e1841a504ec.so [.] <hashbrown::map::HashMap<(rustc_type_ir::DebruijnIndex, rustc_middle::ty::Ty), rustc_middle::ty::Ty, core::hash::BuildHasherDefault<rustc_hash::FxHasher>>>::insert
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```console
$ rustc --version --verbose
rustc 1.83.0-nightly (3ae715c8c 2024-10-07)
binary: rustc
commit-hash: 3ae715c8c63f9aeac47cbf7d8d9dadb3fa32c638
commit-date: 2024-10-07
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
``` | I-compiletime,T-compiler,C-bug,T-types,WG-trait-system-refactor | low | Critical |
2,573,877,504 | godot | GDScript symbol lookup prefers enum constants over actual lookup target | ### Tested versions
v4.4.dev.custom_build [4c4e67334]
v4.3.stable.flathub [77dcf97d8]
### System information
Fedora 40
### Issue description
The GDScript code lookup will look for the symbol as a class constant of the current script before doing anything with the base, regardless of its context. In some situations this will lead to a wrong lookup result.
Example:
```gdscript
extends Node
class_name Test
const SIZE_EXPAND = 1
```
```gdscript
extends Control
func _init() -> void:
# Looking up this SIZE_EXPAND should lead to the other script. But it will direct to the enum constant of Control
Test.SIZE_EXPAND
```
### Steps to reproduce
- Open the MRP
- Lookup `SIZE_EXPAND`
### Minimal reproduction project (MRP)
[lookup-mrp.zip](https://github.com/user-attachments/files/17297487/lookup-mrp.zip)
| bug,topic:gdscript,topic:editor | low | Minor |
2,573,881,868 | PowerToys | Request that Workspace Editor auto-closes after launching a workspace | ### Description of the new feature / enhancement
-
### Scenario when this would be used?
-
### Supporting information
_No response_ | Needs-Triage,Product-Workspaces | low | Minor |
2,573,907,279 | godot | Windows Hard Reboots When Waking from Sleep Mode with Godot Running | ### Tested versions
Encountered in 4.3 beta2, 4.3 beta3, 4.3 rc1, 4.3 stable (have not tested other versions)
### System information
Windows 10 Pro (10.0.19045) - Godot 4.3 stable - Forward+ - Vulkan - Hardware: AMD Ryzen 3950X, RTX 3090, 128 GB Ram
### Issue description
When leaving Godot 4.3 running on Windows, the computer hard reboots upon waking from Sleep mode. This behaviour does not occur if Godot is closed before putting the system to sleep. The problem seems to be isolated to instances when Godot is actively running while the system enters Sleep mode.
### Steps to reproduce
Open Godot and run a project or leave it idle in the editor.
Put Windows into Sleep mode.
Wake the computer from Sleep mode.
Observe the system hard rebooting upon waking.
Expected Behaviour: The computer should wake up normally from Sleep mode with Godot running, without triggering a hard reboot.
### Minimal reproduction project (MRP)
I do not believe an MRP is necessary for this | bug,platform:windows,topic:editor | low | Minor |
2,573,916,351 | pytorch | [NCCL] Unordered destruction of `ProcessGroupNCCL` no longer supported | ### ๐ Describe the bug
The `unordered` pg destroy test introduced in https://github.com/pytorch/pytorch/pull/119045 seems to no longer be supported in recent versions of NCCL. When checking with the NCCL team, the feedback was that this behavior has not been supported for several releases, and in general should not be depended upon as the destroy operation is a communicating operation.
Can we safely remove this test (or change it to an expected failure) or is it a proxy for a critical use case somewhere?
CC @shuqiangzhang @kwen2501 @KaimingOuyang @nWEIdia
### Versions
current upstream PyTorch, NCCL 2.23+
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: nccl | low | Critical |
2,573,975,913 | rust | Tracking Issue for `const_copy_from_slice` | Feature gate: `#![feature(const_copy_from_slice)]`
This is a tracking issue for using `<[T]>::copy_from_slice` in `const`.
### Public API
```rust
impl<T> [T] {
pub const fn copy_from_slice(&mut self, src: &[T])
where
T: Copy;
}
```
### Steps / History
- [x] Implementation: https://github.com/rust-lang/rust/pull/131416
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,573,989,567 | kubernetes | bug: No 'time' added when server-side-applying the same yaml as a 2nd field manager | ### What happened?
We have a use case where two field managers co-own some of `.metadata.managedFields`. It is observed that the 'time' is missing after the 2nd field manager server-side-applied its configuration, when that applied configuration is the same as that of the 1st field manager.
An example of `.metadata.managedFields` which demonstrates what is missing:
```
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:creationTimestamp: {}
f:labels:
f:app: {}
f:spec:
f:containers:
k:{"name":"nginx"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: jun-apply-again
operation: Apply
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:creationTimestamp: {}
f:labels:
f:app: {}
f:spec:
f:containers:
k:{"name":"nginx"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: jun
operation: Apply
time: "2024-10-01T21:53:28Z"
```
Field manager 'jun' has `"2024-10-01T21:53:28Z"` as the applied 'time', but field manager 'jun-apply-again' doesn't have any 'time'.
### What did you expect to happen?
Each item in the list of managed fields should consistently have a 'time' associated.
### How can we reproduce it (as minimally and precisely as possible)?
To help reproduce the bug, I documented the exact command lines and needed manifests [here](https://github.com/waltforme/random/blob/main/kubernetes-managedfields/apply_HTTP-PATCH.md).
For a little broader background, one can optionally read this [README.md](https://github.com/waltforme/random/blob/main/kubernetes-managedfields/README.md).
### Anything else we need to know?
_No response_
### Kubernetes version
I built the kube-apiserver from 7ee17ce9b7c2a22e63e2bbd79d48d3fe349a9386.
<details>
```console
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:58:16Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"32", GitVersion:"v0.0.0-master+$Format:%H$", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.23.0", Compiler:"gc", Platform:"linux/arm64"}
error: could not parse pre-release/metadata (-master+$Format:%H$) in version "v0.0.0-master+$Format:%H$"
```
</details>
### Cloud provider
N/A
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,574,017,906 | neovim | complete: keep pum visible when pressing <BS> for `complete()` | ### Problem
When completing via `complete()` and then pressing backspace, the pum is closed. This is not consistent with how it works for omnifunc. It also seems like this has been discussed previously (see: https://github.com/neovim/neovim/pull/27339#discussion_r1583574488)
I am opening this issue as I couldn't find a specific issue for this (sorry if there is one and I just couldn't find it).
I spent some time to see if I could figure out why and where this is happening, and I seem to have found a patch that could potentially fix it. I don't really know yet whether this would break other things, or why this was added as a guard in the first place.
```diff
diff --git a/src/nvim/insexpand.c b/src/nvim/insexpand.c
index 84dd55fa7..bd83b376c 100644
--- a/src/nvim/insexpand.c
+++ b/src/nvim/insexpand.c
@@ -1759,7 +1759,6 @@ int ins_compl_bs(void)
// Respect the 'backspace' option.
if ((int)(p - line) - (int)compl_col < 0
|| ((int)(p - line) - (int)compl_col == 0 && !ctrl_x_mode_omni())
- || ctrl_x_mode_eval()
|| (!can_bs(BS_START) && (int)(p - line) - (int)compl_col
- compl_length < 0)) {
return K_BS;
```
I could possibly be interested in trying to tackle this problem if it is wanted, and if so I am also going to try and upstream the solution to vim of course
So I guess my question is; is this an acceptable solution? If not, any pointers on where to start?
### Expected behavior
I expect it to still show the pum after backspace when using `complete()` just like how it works for omnifunc.
Before the patch:
https://github.com/user-attachments/assets/0b1c8e8f-6e03-476d-aea6-997fd9ef5a8a
After the patch:
https://github.com/user-attachments/assets/55af39ae-9b25-4ddd-ab7a-bc3b6e173562
| bug-vim | low | Minor |
2,574,030,355 | pytorch | Support multiple ragged dims for NJT | Currently, NJT has a hard restriction that only a single dim can be ragged. It's useful to generalize this, allowing any non-batch dim to be ragged, for certain use cases:
* Images of ragged width / height (e.g. as in [SAM](https://github.com/pytorch-labs/segment-anything-fast))
* The math fallback of SDPA computes QK^T, resulting in an intermediate with two ragged dims
Following from #137512, this issue proposes that we relax this restriction and allow for multiple ragged non-batch dims per NJT, with each defined as ragged with respect to the batch dim. For example: an NJT of shape `(B, J1*, J2*, D)` with ragged `J1*` and `J2*` has both `J1*` and `J2*` ragged wrt the batch dim, which can be described by two `offsets` tensors of shape `B + 1`. This allows for use of preexisting kernels written to handle data in this format (e.g. the jagged <-> padded dense conversion kernels from fbgemm).
There are some open things to resolve:
* Should we allow mixed `offsets` / `lengths` metadata? e.g. `offsets` for one ragged dim and `lengths` for another
* Should we introduce new layouts for such NJTs? e.g. `torch.jagged2d`, `torch.jagged3d`, etc.
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: nestedtensor | low | Minor |
2,574,031,292 | kubernetes | Support validate=strict for all component configs | The Kubernetes API supports [strict validation](https://kubernetes.io/blog/2023/04/24/openapi-v3-field-validation-ga/#server-side-field-validation). `kubectl ... --validate='strict'` is also available.
~But there is no way to use this feature with component config files. Should there be? How would it be enabled? Could we define an annotation to enable it?~
EDIT: Turns out that many config files have strict validation enabled, but not all.
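For illustration, this is the kind of mistake strict decoding catches: a misspelled field in a component config file that lenient decoding silently ignores (hypothetical example; some components may already reject this):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Typo: should be "evictionHard". Without strict decoding this key is
# silently dropped and the defaults apply instead.
evctionHard:
  memory.available: "200Mi"
```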
/sig api-machinery | sig/api-machinery,triage/accepted | medium | Major |
2,574,040,871 | pytorch | comparing tensor sizes of different lengths should not guard symbolic dimensions | ```
@torch.compile(fullgraph=True, dynamic=True, backend="inductor")
def f(x):
    b = x[0]
    if x.size() == b.size():
        return x
    return x * 2
```
`len(x.size())` is 2 and `len(b.size())` is 1, so it sounds like we do not need to add the guards below in that case.
```
[0/1] [__guards] +- LAMBDA_GUARD: Ne(L['x'].size()[0], L['x'].size()[1]) # if a != b: # _dynamo/polyfills/__init__.py:61 in list_cmp (_dynamo/variables/tensor.py:1124 in evaluate_expr)
[0/0] [__guards] +- LAMBDA_GUARD: L['x'].size()[1] == L['x'].size()[0] # if a != b: # _dynamo/polyfills/__init__.py:61 in list_cmp (_dynamo/variables/tensor.py:1124 in evaluate_expr)
```
I also encountered this while working on something related to functionalization that had this check. The workaround for now is:
```
def f(x):
    b = x[0]
    if len(x.size()) == len(b.size()) and x.size() == b.size():
        return x
    return x * 2
```
cc @ezyang @chauhang @penguinwu @bobrenjc93 @zou3519 | triaged,oncall: pt2,module: dynamic shapes | low | Minor |
2,574,041,632 | next.js | Stale link navigations are updating the UI in local dev | ### Link to the code that reproduces this issue
https://github.com/samselikoff/stale-link-updates-in-dev
### To Reproduce
1. Start the application in development (`npm run dev`)
2. Quickly click Link 1, Link 2, then Link 3
3. You'll see the UI update with all three navigations: it will show "Post 1", "Post 2" and "Post 3"
### Current vs. Expected behavior
Current:
The UI updates with all link navigations, including stale ones.
Expected:
The UI should discard stale link navigations, and only render the latest one once it settles.
Note: This behavior only happens on dev. When I deploy to Vercel, the UI only shows the final navigation.
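The expected behavior amounts to a "latest navigation wins" policy, which can be sketched with a generation counter (illustrative only, not Next.js internals; all names are made up):

```javascript
// Minimal sketch of "latest navigation wins": each navigation gets a
// token, and only the most recent token is allowed to commit its result
// to the UI; stale results are discarded no matter when they settle.
function createNavigator(render) {
  let latest = 0;
  return async function navigate(load) {
    const token = ++latest;
    const result = await load();
    if (token === latest) {
      render(result);
    }
  };
}
```

Clicking Link 1, 2, and 3 quickly would then only ever commit the third result, regardless of which fetch resolves first.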
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 20.9.0
npm: 10.1.0
Yarn: N/A
pnpm: 9.7.1
Relevant Packages:
next: 15.0.0-canary.179 // Latest available version is detected (15.0.0-canary.179).
eslint-config-next: N/A
react: 19.0.0-rc-2d16326d-20240930
react-dom: 19.0.0-rc-2d16326d-20240930
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
The `/post/[id]/page.tsx` is an RSC that awaits a 1-second Promise. The stale updates only happen in dev.
Here's a video:
https://github.com/user-attachments/assets/cd5a2852-de0d-452c-a21e-2973f7337863
Here are some links to quickly see the two different behaviors:
- ❌ StackBlitz dev build: https://stackblitz.com/github/samselikoff/stale-link-updates-in-dev?file=app%2Flayout.tsx
- ✅ Vercel prod build: https://stale-link-updates-in-dev.vercel.app | bug,Navigation | low | Minor |
2,574,058,017 | node | V8 isolate race condition/use-after-free with --shared_string_table flag | Present in v22.x and main, happens here:
https://github.com/nodejs/node/blob/v22.9.0/deps/v8/src/execution/isolate.cc#L4600-L4610
`process_wide_shared_space_isolate_` is a static variable that is read and written without holding a mutex.
Worse, it's a pointer to a "toplevel" isolate (for want of a better word) that can go away before the current isolate is disposed, resulting in a use-after-free (most likely: a crash.)
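Independent of how V8 actually fixed it, the general pattern for such a process-wide pointer is to guard registration, lookup, and teardown with a mutex, and to clear the pointer before the isolate it refers to is disposed. A hypothetical sketch (invented names, not V8 code):

```cpp
#include <mutex>

// Hypothetical sketch, not V8 code: a process-wide "shared space
// isolate" pointer that many isolates read and write needs a mutex,
// and must be reset before the isolate it points to is destroyed.
class SharedSpaceIsolateRegistry {
 public:
  void Register(void* isolate) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (shared_space_isolate_ == nullptr) {
      shared_space_isolate_ = isolate;  // first registration wins
    }
  }
  // Must run before `isolate` is destroyed, otherwise Get() can hand
  // out a dangling pointer (the use-after-free described above).
  void Unregister(void* isolate) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (shared_space_isolate_ == isolate) {
      shared_space_isolate_ = nullptr;
    }
  }
  void* Get() {
    std::lock_guard<std::mutex> lock(mutex_);
    return shared_space_isolate_;
  }
 private:
  std::mutex mutex_;
  void* shared_space_isolate_ = nullptr;
};
```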
It's hard to reliably demonstrate with node but it's pretty easy to reproduce with standalone V8, some threads, and patience. It shows up as a DCHECK in debug builds:
```
# Fatal error in ../deps/v8/src/heap/safepoint.cc, line 338
# Debug check failed: (clients_head_) == nullptr.
```
V8 fixed it last month in v8/v8@7710cb8ee75501b3ac3b72b424a1b75d54d119c0 and, caveat emptor, it removes the problematic static variable but I'm not 100% sure it fixes the lifecycle bug.
I don't have good suggestions to offer except maybe remove the flag (and `--harmony_struct` because it has the same issue.) | v8 engine | low | Critical |
2,574,106,241 | node | [rfc] V8 debug build CI? | Correct me if I'm wrong but I believe there's currently no CI job that builds V8 with all debug checks enabled?
I'm doing some local light testing in that mode and so many bugs fall out, it's not even funny (ex. #55325.) Testing that regularly would be a Very Good Thing indeed.
On a very related subject: can I suggest mirroring V8's `v8_enable_debugging_features=true` feature set? Right now, the configure flags one needs to use to get a build that's similar to an upstream debug build is... the words "haphazard" and "scattered" come to mind.
I know naming is hard but... `./configure --v8-non-optimized-debug`? For real?!
cc @targos | discuss,v8 engine | low | Critical |
2,574,127,080 | angular | Css Intellisense for inline styles ([style] binding) | ### Which @angular/* package(s) are the source of the bug?
elements
### Is this a regression?
Yes
### Description
Is there no way to get intellisense in [style] binding?

### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
_No response_
### Anything else?
_No response_ | area: language-service | low | Critical |
2,574,205,294 | pytorch | xpu: support triton against clang with nightly wheels | Using the XPU nightly wheel works for me to run PyTorch in eager mode on an Intel GPU (I used PVC). For that there seems to be no need to install oneAPI following https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-5.html. This makes sense considering that the torch wheel includes the libsycl-preview.so library, which is a key dependency for the XPU backend path.
However, when I try `torch.compile()`, i.e. Triton, it does not work for me unless I install oneAPI. Can the torch XPU wheel and maybe the XPU Triton build be modified so that a oneAPI installation won't be required, similar to PyTorch eager mode?
There are at least these things to take care of:
- [ ] SYCL header files are missing installing torch xpu wheel
- [ ] SYCL runtime is missing (`libsycl.so`) or might need a fix on triton xpu side to switch to `libsycl-preview.so`
- [ ] Triton XPU might need fixes to support specific clang versions or requirement called out explicitly (that's based on my attempt to feed the build with SYCL headers - it still fails with not supporting `std::is_signed_v` and few other C++ features)
Tried:
```
$ cat test.py
import torch
def foo(x, y):
    a = torch.sin(x)
    b = torch.cos(y)
    return a + b
foocomp = torch.compile(foo)
t = foocomp(torch.randn(10, 10).to('xpu'), torch.randn(10, 10).to('xpu'))
print('torch compile sin+cos:', t)
```
Result:
```
$ python3 test.py
In file included from /tmp/tmp3tpt71ga/main.cpp:17:
/usr/local/lib/python3.10/dist-packages/triton/backends/intel/include/sycl_functions.h:15:10: fatal error: 'sycl/sycl.hpp' file not found
#include <sycl/sycl.hpp>
^~~~~~~~~~~~~~~
1 error generated.
<...>
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CalledProcessError: Command '['/usr/bin/clang++', '/tmp/tmp3tpt71ga/main.cpp', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmp3tpt71ga/spirv_utils.cpython-310-x86_64-linux-gnu.so', '-lze_loader', '-lsycl', '-L/usr/local/lib/python3.10/dist-packages/triton/backends/intel/lib', '-I/usr/local/include', '-I/usr/local/lib/python3.10/dist-packages/triton/backends/intel/include', '-I/tmp/tmp3tpt71ga', '-I/usr/include/python3.10', '-I/usr/local/lib/python3.10/dist-packages/numpy/_core/include']' returned non-zero exit status 1.
```
Tried on docker image built with the following Dockerfile:
```
ARG IMAGE=ubuntu:22.04
FROM $IMAGE
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
gpg \
python3 \
python3-pip \
wget && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/xpu/
RUN pip3 install numpy
ARG DGPU_KEY_FILE=/usr/share/keyrings/intel-graphics.gpg
RUN wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
gpg --yes --dearmor --output $DGPU_KEY_FILE
RUN echo "deb [arch=amd64 signed-by=$DGPU_KEY_FILE] https://repositories.intel.com/gpu/ubuntu jammy/lts/2350 unified" | \
tee /etc/apt/sources.list.d/intel-gpu.list
# Installing packages to handle the following errors.
#
# Importing torch:
# "ImportError: libze_loader.so.1: cannot open shared object file: No such file or directory" - needs level-zero
#
# Running eager mode operations on XPU:
# * "RuntimeError: No XPU devices are available." - needs intel-level-zero-gpu
#
# Running convolution on XPU:
# * "RuntimeError: could not create an engine" - needs intel-opencl-icd
RUN apt-get update && apt-get install -y --no-install-recommends \
intel-level-zero-gpu \
intel-opencl-icd \
level-zero && \
rm -rf /var/lib/apt/lists/*
# Installing packages for torch.compile to work (as far as we can get as of now):
RUN apt-get update && apt-get install -y --no-install-recommends \
clang \
level-zero-dev && \
rm -rf /var/lib/apt/lists/*
```
Versions in the build I've tried:
```
# pip3 list
Package Version
------------------ ---------------------
filelock 3.13.1
fsspec 2024.6.1
Jinja2 3.1.4
MarkupSafe 2.1.5
mpmath 1.3.0
networkx 3.3
numpy 2.1.2
packaging 22.0
pip 22.0.2
Pygments 2.11.2
pytorch-triton-xpu 3.1.0+91b14bf559
PyYAML 5.4.1
setuptools 59.6.0
sympy 1.13.1
torch 2.6.0.dev20241007+xpu
typing_extensions 4.12.2
wheel 0.37.1
```
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @vlad-penkin
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,574,158,441 | pytorch | Support torch.cond with differing (but same dimensionality) sizes on true/false branches | ### 🐛 Describe the bug
https://fb.workplace.com/groups/6829516587176185/permalink/8109696255824872/
Today, we require a torch.cond to return the same size in both branches. It is possible for us to support returns for tensors with differing sizes. When this occurs, we must create a new unbacked symint, whose size is determined from the branch output itself.
Here's a simple repro:
```
import torch
torch._dynamo.config.capture_scalar_outputs = True
def true_fn(x):
    return torch.randn(10)

def false_fn(x):
    return torch.randn(x[1].item())

@torch.compile(fullgraph=True)
def f(x):
    u0, u1 = x.tolist()
    return torch.cond(u0 == 20, true_fn, false_fn, (x,))
f(torch.tensor([20, 21]))
```
### Versions
main
cc @chauhang @penguinwu @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,574,172,692 | rust | SIGSEGV: rustc crashed on valid code | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
#[no_mangle]
pub static mut a: i32 = 0;
pub static mut g: [i32; 1] = [0; 1];

fn myfunc() {
    unsafe {
        while a != 0 {
            let mut c = &mut a;
            let mut b = 7;
            while b != 0 {
                let mut f = 1;
                while f <= 9 {
                    let mut d = 0;
                    while d <= 9 {
                        if !(*g.as_mut_ptr() != 0) {
                            *c = 0;
                        }
                        d += 1;
                    }
                    *g.as_mut_ptr() = 2;
                    f += 1;
                }
                b -= 1;
            }
            *g.as_mut_ptr() = b;
        }
    }
}

pub fn main() {
    myfunc();
}
I expected to see this happen: *rustc compiles it*
Instead, this happened: **When compiling with `rustc -Copt-level=1`, rustc crashes with a segmentation fault.**
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (3ae715c8c 2024-10-07)
binary: rustc
commit-hash: 3ae715c8c63f9aeac47cbf7d8d9dadb3fa32c638
commit-date: 2024-10-07
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```console
% rustc -Awarnings test.rs -Zmir-opt-level=0 -Copt-level=1
error: rustc interrupted by SIGSEGV, printing backtrace
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-57fe6e1841a504ec.so(+0x3609be3)[0x7f0887209be3]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f0883842520]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm19simplifyInstructionEPNS_11InstructionERKNS_13SimplifyQueryE+0x42)[0x7f0881ac6102]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution10createSCEVEPNS_5ValueE+0x364)[0x7f0881ca7464]
### cycle encountered after 4 frames with period 6
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution14createSCEVIterEPNS_5ValueE+0x41e)[0x7f0881ca5b74]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution27createNodeFromSelectLikePHIEPNS_7PHINodeE+0x2c5)[0x7f0881c9a88d]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution10createSCEVEPNS_5ValueE+0x378)[0x7f0881ca7478]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution14createSCEVIterEPNS_5ValueE+0x41e)[0x7f0881ca5b74]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution27createNodeFromSelectLikePHIEPNS_7PHINodeE+0x2c5)[0x7f0881c9a88d]
/home/shaohua/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libLLVM.so.19.1-rust-1.83.0-nightly(_ZN4llvm15ScalarEvolution10createSCEVEPNS_5ValueE+0x378)[0x7f0881ca7478]
### recursed 42 times
note: rustc unexpectedly overflowed its stack! this is a bug
note: maximum backtrace depth reached, frames may have been lost
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
[1] 1426002 segmentation fault rustc -Awarnings test.rs -Copt-level=1
```
</p>
</details>
| I-crash,A-LLVM,T-compiler,C-bug | low | Critical |
2,574,181,966 | vscode | 'Open Editors' panel has unnecessary padding | Hello VSCode friends,
The 'Open Editors' panel seems to have some extra unneeded padding on the left-hand side:

It looks like some part of this is to provide a 'close' `x` button or an "unsaved changes" dot indicator, and some of the padding serves no purpose that I can see. I have a pretty small laptop screen, so this extra space could be put to better use rendering longer file names.
Would it make sense to remove the extra padding, and change the `x` button to only appear on hover on the right-hand-side, like the behaviour in the SCM commits panel?

If you think this change makes sense, I would be happy to attempt changing it myself. If you think this is too minor an issue to address, or shouldn't be fixed, by all means feel free to close. | bug,open-editors,workbench-auxsidebar | low | Minor |
2,574,205,294 | react | Bug: Input don't send "select" events when type is "email" | <!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
React version: reproduced on 17 and 18
## Steps To Reproduce
1. Create an input like this: `<input text="move the selection around" onSelect={console.log} type="email" />`
2. Move the selection around and see that onSelect is emitted only on focus and blur. Notice that you get normal behavior if you remove the type
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
Link to code example:
https://codesandbox.io/p/sandbox/recursing-kilby-z6h9dz?file=%2Fsrc%2FApp.js%3A15%2C1
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
## The current behavior
Does not emit "select" events
## The expected behavior
Should emit the "select" events | Status: Unconfirmed,Resolution: Stale | low | Critical |
2,574,213,211 | pytorch | FSDP-2 doesn't do overlapping when composed with TP | ### 🐛 Describe the bug
with TP

without TP

It seems like with TP it calls `torch.ops.fsdp.split_with_sizes_copy`, which leads to synchronization.
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241008+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.3 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-362.18.1.el9_3.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6448Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241008+cu124
[pip3] triton==3.0.0
[conda] Could not collect
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | needs reproduction,oncall: distributed,triaged | low | Critical |
2,574,230,541 | pytorch | Report WHY a symbol was created dynamically in symbolic_shapes logs | ### 🐛 Describe the bug
Mainly, want to know if it was due to (1) automatic dynamic, or (2) assume_static_by_default = False, or (3) mark_dynamic.
Internal xref: https://fb.workplace.com/groups/3095840833991792/permalink/3848647042044497/
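For example, the log line could carry the provenance explicitly (hypothetical format, not the current symbolic_shapes output):

```python
from enum import Enum, auto

# Hypothetical provenance tags for why a dim was allocated a dynamic
# symbol; not the actual symbolic_shapes implementation.
class DynamicSource(Enum):
    AUTOMATIC_DYNAMIC = auto()     # (1) a recompile promoted the dim
    ASSUME_STATIC_FALSE = auto()   # (2) assume_static_by_default = False
    MARK_DYNAMIC = auto()          # (3) user called torch._dynamo.mark_dynamic

def format_create_symbol(symbol, source):
    return f"create_symbol {symbol} [source={source.name}]"
```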
### Versions
main
cc @chauhang @penguinwu @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,574,304,285 | tensorflow | RuntimeError: failed to create XNNPACK runtimeNode number 2977 (TfLiteXNNPackDelegate) failed to prepare. | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Google Colab with Python 3.10.12
- TensorFlow installation (pip package or built from source):
pip package
- TensorFlow library (version, if pip package or github SHA, if built from source):
v2.17.0
### 2. Code
```
import tensorflow as tf
saved_model_dir = '/content/saved_model'
num_calibration_steps = 100
input = tf.cast(tf.random.normal((1, 640, 640, 3)), tf.float32)
dummy_input = tf.cast(tf.random.normal((1, 2)), tf.int64)
def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        yield [dummy_input, input]  # model has 2 input tensors

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
    tf.lite.OpsSet.SELECT_TF_OPS
]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_quant_model = converter.convert()

# Save the quantized model to a local file
with open('quantized_model.tflite', 'wb') as f:
    f.write(tflite_quant_model)
```
### 3. Failure after conversion
After converting the model from ONNX using onnx2tf, I got a saved_model, which I tried to convert to an int8 quantized model using the code above. When trying to run inference with the model, after loading it into the interpreter and calling `allocate_tensors()`
```
interpreter = tf.lite.Interpreter(model_path="/content/quantized_model.tflite")
interpreter.allocate_tensors()
```
I get the following error:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-7-b6b80a3bdf94>](https://localhost:8080/#) in <cell line: 6>()
4 interpreter = tf.lite.Interpreter(model_path="/content/quantized_model.tflite")
5 print(interpreter.get_input_details())
----> 6 interpreter.allocate_tensors()
[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/interpreter.py](https://localhost:8080/#) in allocate_tensors(self)
535 def allocate_tensors(self):
536 self._ensure_safe()
--> 537 return self._interpreter.AllocateTensors()
538
539 def _safe_to_run(self):
RuntimeError: failed to create XNNPACK runtimeNode number 2977 (TfLiteXNNPackDelegate) failed to prepare.
```
Could someone give me advice or suggestions on how to solve this error? I couldn't find anyone who has reported solving the same problem.
The closest match is this [issue](https://github.com/tensorflow/tensorflow/issues/61395), but its workaround is to convert the ONNX model to Keras, which is not feasible for my complex model.
| stat:awaiting tensorflower,type:bug,comp:lite,TFLiteConverter,2.17 | low | Critical |
2,574,334,208 | PowerToys | Chat GPT in advanced paste | ### Description of the new feature / enhancement
My new proposal is that, in the Advanced Paste feature, you could ask ChatGPT questions directly.
### Scenario when this would be used?
Suppose you are typing a text in German and you don't know how to say "open": you could ask ChatGPT through Advanced Paste, and it would translate the word and paste it automatically.
Another case: you are making a summary but don't have time, so you ask ChatGPT through Advanced Paste and it summarizes and pastes the result automatically.
### Supporting information
You could use all of ChatGPT's functions from within Advanced Paste, including generating images. | Needs-Triage | low | Minor |
2,574,354,063 | angular | Signal for TransferState data | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
When creating an app that uses Signals and TransferState, it was found that we had to do a fair amount of boilerplate for this to work with persisting the signal state to TransferState and then loading it again on the browser:
- Create a state key with `makeStateKey` that requires a new constant with duplicated type and string key
- Inject TransferState
- Create a signal property
- Set the signal on the server once data is loaded
- Set the TransferState data keys for each signal
- Set the signal on the browser from TransferState data
It would be handy if there is a signal that wraps TransferState and can remove some of this boilerplate to just:
- Create a `transferStateSignal` property, giving it the key and initial value
- Set the signal on the server once data is loaded
- (Signal is automatically loaded on the browser from TransferState data)
No state key, TransferState injection, or setting/loading TransferState is needed.
### Proposed solution
Here is a quick example of a prototype of what this could look like:
```ts
import { inject, makeStateKey, signal, TransferState, WritableSignal } from "@angular/core";
export function transferStateSignal<T>(key: string, initialValue: T): WritableSignal<T> {
  const stateKey = makeStateKey<T>(key);
  const transferState = inject(TransferState);

  const value = transferState.hasKey(stateKey)
    ? transferState.get(stateKey, initialValue)
    : initialValue;
  const signalValue = signal(value);

  const signalProxy = new Proxy(signalValue, {
    // override `set` method of signalValue to also write to TransferState
    get: (target, prop, receiver) => {
      if (prop === 'set') {
        return (newValue: T) => {
          transferState.set(stateKey, newValue);
          target.set(newValue);
        };
      }
      return Reflect.get(target, prop, receiver);
    }
  });

  return signalProxy;
}
```
Example usage:
```ts
import { isPlatformBrowser, isPlatformServer } from '@angular/common';
import { ChangeDetectionStrategy, Component, computed, Inject, PLATFORM_ID } from '@angular/core';
import { transferStateSignal } from '../transferStateSignal';
@Component({
  selector: 'app-transfer-state-signal',
  standalone: true,
  imports: [],
  template: `
    <h1>TransferStateSignal Example</h1>
    <p>String: {{dataString()}}</p>
    <p>Number: {{dataNumber()}}</p>
    <p>Boolean: {{dataBoolean()}}</p>
    <p>Object.key: {{dataObject().key}}</p>
    <p>Computed: {{computedNumber()}}</p>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class TransferStateSignalComponent {
  readonly dataString = transferStateSignal<string>('dataString', 'Hello, World!');
  readonly dataNumber = transferStateSignal<number>('dataNumber', 42);
  readonly dataBoolean = transferStateSignal<boolean>('dataBoolean', true);
  readonly dataObject = transferStateSignal<{ key: string }>('dataObject', { key: 'value' });
  readonly computedNumber = computed<number>(() => this.dataNumber() * 100);

  constructor(
    @Inject(PLATFORM_ID) private readonly platformId: Object
  ) { }

  ngOnInit(): void {
    if (isPlatformServer(this.platformId)) {
      // pretend we load data on the server here, then set in signals for SSR,
      // and no need to set on TransferState as it's done in transferStateSignal
      this.dataString.set('Hello, Server!');
      this.dataNumber.set(24);
      this.dataBoolean.set(false);
      this.dataObject.set({ key: 'server' });
    } else if (isPlatformBrowser(this.platformId)) {
      setInterval(() => {
        this.dataNumber.set(this.dataNumber() + 1);
      }, 1000);
    }
  }
}
```
Compare this to what is required to do this without `transferStateSignal`:
```ts
import { isPlatformBrowser, isPlatformServer } from '@angular/common';
import { ChangeDetectionStrategy, Component, computed, Inject, makeStateKey, PLATFORM_ID, signal, TransferState } from '@angular/core';
// duplicated effort of having to make a state key for each and ensure data types match
const dataStringKey = makeStateKey<string>('dataStringCurrent');
const dataNumberKey = makeStateKey<number>('dataNumberCurrent');
const dataBooleanKey = makeStateKey<boolean>('dataBooleanCurrent');
const dataObjectKey = makeStateKey<{ key: string }>('dataObjectCurrent');
@Component({
  selector: 'app-current',
  standalone: true,
  imports: [],
  template: `
    <h1>Current Example</h1>
    <p>String: {{dataString()}}</p>
    <p>Number: {{dataNumber()}}</p>
    <p>Boolean: {{dataBoolean()}}</p>
    <p>Object.key: {{dataObject().key}}</p>
    <p>Computed: {{computedNumber()}}</p>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class CurrentComponent {
  readonly dataString = signal<string>('Hello, World!');
  readonly dataNumber = signal<number>(42);
  readonly dataBoolean = signal<boolean>(true);
  readonly dataObject = signal<{ key: string }>({ key: 'value' });
  readonly computedNumber = computed<number>(() => this.dataNumber() * 100);

  constructor(
    @Inject(PLATFORM_ID) private readonly platformId: Object,
    private readonly transferState: TransferState, // extra import
  ) { }

  ngOnInit(): void {
    if (isPlatformServer(this.platformId)) {
      // pretend we load data on the server here, then set in signals for SSR
      this.dataString.set('Hello, Server!');
      this.dataNumber.set(24);
      this.dataBoolean.set(false);
      this.dataObject.set({ key: 'server' });

      // duplicated effort of setting on TransferState
      this.transferState.set(dataStringKey, this.dataString());
      this.transferState.set(dataNumberKey, this.dataNumber());
      this.transferState.set(dataBooleanKey, this.dataBoolean());
      this.transferState.set(dataObjectKey, this.dataObject());
    } else if (isPlatformBrowser(this.platformId)) {
      // duplicated effort of getting from TransferState
      this.dataString.set(this.transferState.get(dataStringKey, this.dataString()));
      this.dataNumber.set(this.transferState.get(dataNumberKey, this.dataNumber()));
      this.dataBoolean.set(this.transferState.get(dataBooleanKey, this.dataBoolean()));
      this.dataObject.set(this.transferState.get(dataObjectKey, this.dataObject()));

      setInterval(() => {
        this.dataNumber.set(this.dataNumber() + 1);
      }, 1000);
    }
  }
}
```
### Alternatives considered
Will consider creating this as a third-party OSS library if the Angular team is not interested in adding something like this to the framework. | feature,area: core,area: server,core: reactivity,cross-cutting: signals | low | Major |
2,574,392,394 | flutter | [Impeller] RenderPass::SetPipeline should accept a pipeline future and not the actual pipeline | Context is https://github.com/flutter/engine/pull/55694
> So I think this is a problem in the HAL design. We expect to be able to have a pipeline synchronously for encoding but
> 1) we can only get the pipeline on a particular thread for GLES
> 2) we don't actually need the pipeline until we render - which always happens on the raster thread.
I think it would be something to punt to a new PR, but I would consider changing the HAL design so that SetPipeline accepts the pipeline future. The Vulkan/Metal implementations would call WaitAndGet immediately, while the GLES implementation would defer that until encoding.
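The deferred-resolution idea can be illustrated with a small sketch. This is not the Impeller C++ code, just a Python analogue using `concurrent.futures`; the class and method names (`MetalRenderPass`, `GLESRenderPass`, `set_pipeline`, `encode`) are hypothetical stand-ins for the HAL types discussed above.

```python
from concurrent.futures import Future

class MetalRenderPass:
    """Backends that can bind a pipeline on any thread resolve eagerly."""
    def __init__(self):
        self.pipeline = None

    def set_pipeline(self, pipeline_future: Future):
        # Equivalent to calling WaitAndGet immediately.
        self.pipeline = pipeline_future.result()

class GLESRenderPass:
    """GLES can only touch the pipeline on one thread, so it defers."""
    def __init__(self):
        self._pending = None
        self.pipeline = None

    def set_pipeline(self, pipeline_future: Future):
        # Only record the future; no GL work happens here.
        self._pending = pipeline_future

    def encode(self):
        # Runs later, on the thread where waiting is legal.
        self.pipeline = self._pending.result()

f = Future()
gles = GLESRenderPass()
gles.set_pipeline(f)      # safe anywhere: nothing is resolved yet
f.set_result("pipeline")  # compilation finishes later
gles.encode()             # resolution deferred to encode time
```

The point of the sketch is that both backends share one `set_pipeline(future)` signature, and only the backend decides when the future is actually waited on.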
2,574,417,312 | go | x/text/unicode/bidi: nested isolates don't produce correct visual order | Consider the following bit of HTML:
```
<p dir="ltr">
The title is <span dir="rtl">ืืื <span dir="ltr">C++</span> ืืื</span> in Hebrew.
</p>
```
This is a Latin paragraph containing a (faux) Hebrew book title that itself contains the Latin name "C++". The title as a whole should render right-to-left, with C++ rendering left-to-right. That is, it should render like this:

Without the spans, i.e.
```
<p dir="ltr">
The title is ืืื C++ ืืื in Hebrew.
</p>
```
this would render the title as 3 independent runs, resulting in the incorrect

The spans map directly to Right-to-Left Isolate (RLI, U+2067), Left-to-Right Isolate (LRI, U+2066), and Pop Directional Isolate (PDI, U+2069). As a Go string, this is
```
"The title is \u2067ืืื \u2066C++\u2069 ืืื\u2069 in Hebrew."
```
which I call the "annotated" version of the plain string
```
"The title is ืืื C++ ืืื in Hebrew."
```
However, when I run the following code that uses the `bidi` package, both the plain and the annotated string result in the same, incorrect visual order:
```
package main
import (
"fmt"
"log"
"golang.org/x/text/unicode/bidi"
)
func main() {
plain := "The title is ืืื C++ ืืื in Hebrew."
// This uses RLI, LRI, and PDI to achieve the equivalent to
// The title is <span dir="rtl">ืืื <span dir="ltr">C++</span> ืืื</span> in Hebrew.
annotated := "The title is \u2067ืืื \u2066C++\u2069 ืืื\u2069 in Hebrew."
for _, s := range []string{plain, annotated} {
var p bidi.Paragraph
p.SetString(s, bidi.DefaultDirection(bidi.LeftToRight))
ord, err := p.Order()
if err != nil {
log.Fatal(err)
}
for i := range ord.NumRuns() {
run := ord.Run(i)
fmt.Printf("%d %d %q\n", i, run.Direction(), run.String())
}
fmt.Println()
}
}
```
```
0 0 "The title is "
1 1 "ืืื"
2 0 " C++ "
3 1 "ืืื"
4 0 " in Hebrew."
0 0 "The title is \u2067"
1 1 "ืืื \u2066"
2 0 "C++"
3 1 "\u2069 ืืื"
4 0 "\u2069 in Hebrew."
```
`bidi.go` has the following comment:
```
// This API tries to avoid dealing with embedding levels for now. Under the hood
// these will be computed, but the question is to which extent the user should
// know they exist. We should at some point allow the user to specify an
// embedding hierarchy, though.
```
but I'd still expect the computed visual order to be correct with respect to the embedding levels, even if the levels themselves aren't exposed to the user.
I've confirmed with Firefox and Chrome that my use of RLI/LRI/PDI produces the expected rendering that is identical to the one using spans.
(Take special care when reading this issue in a browser that handles right-to-left text, the strings in the code samples and output will be displayed in visual order, not logical order. I've attached all code as an archive to avoid confusion. For Emacs users, `(setq bidi-display-reordering nil)` is a handy way of disabling reordering to be able to inspect file contents in logical order.)
[bidi.tar.gz](https://github.com/user-attachments/files/17300136/bidi.tar.gz)
| NeedsInvestigation | low | Minor |
2,574,429,086 | pytorch | Sparse BSR matrix returns `False` for `.is_sparse` | ### 🐛 Describe the bug
A sparse BSR tensor is like a CSR tensor, but it stores dense blocks instead of individual elements. Checking `is_sparse` on a BSR tensor returns `False`, when it should return `True`, since BSR is also a sparse layout.
Here is some code to reproduce this issue:
```python
import torch

a = torch.randn((10, 10), dtype=torch.float)
s = a.to_sparse_bsr((2, 2))
print(s.is_sparse)  # prints False, but True is expected
```
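Until `is_sparse` accounts for the block layouts, the tensor's layout can be checked directly. The sketch below assumes a PyTorch version that exposes the `torch.sparse_bsr`/`torch.sparse_bsc` layouts; `is_any_sparse` is a hypothetical helper, not an existing PyTorch API.

```python
import torch

SPARSE_LAYOUTS = {
    torch.sparse_coo, torch.sparse_csr, torch.sparse_csc,
    torch.sparse_bsr, torch.sparse_bsc,
}

def is_any_sparse(t: torch.Tensor) -> bool:
    # `t.is_sparse` is True only for COO tensors; compare layouts instead.
    return t.layout in SPARSE_LAYOUTS

a = torch.randn((10, 10), dtype=torch.float)
s = a.to_sparse_bsr((2, 2))
print(s.is_sparse)       # False - the behavior this issue reports
print(is_any_sparse(s))  # True
```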
### Versions
(.pytorch_new) sameer@rx2540m6-2022-4:~/gitrepos/llama-survey/scratch-scripts/spmm_perf/smat/src/pytorch_wrapper$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git0d1701f
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.8
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.103
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3200.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0a0+git0d1701f
[conda] Could not collect
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged | low | Critical |