id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,520,710,756 | puppeteer | [Feature]: Include units in https://pptr.dev/api/puppeteer.page.metrics and https://pptr.dev/api/puppeteer.metrics | ### Feature description
While the comments in https://pptr.dev/api/puppeteer.page.metrics and https://pptr.dev/api/puppeteer.metrics are at least present, they lack one crucial detail: they do not describe the units of measurement for each property.
For example:
* `LayoutDuration` → seconds, milliseconds, etc?
* `JSHeapUsedSize` → bytes, mb, etc?
Most of them can be figured out, but the documentation would be greatly improved by including the units. | feature,documentation,P3 | low | Minor |
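A small sketch of why the units matter downstream: any consumer of `page.metrics()` has to hard-code a guess. The unit mapping below is my assumption, inferred from Chrome DevTools Protocol conventions, not documented behavior:

```python
# The units here are ASSUMPTIONS inferred from Chrome DevTools Protocol
# conventions; the point of the issue is that the docs do not state them.
ASSUMED_UNITS = {
    "LayoutDuration": "seconds",
    "ScriptDuration": "seconds",
    "TaskDuration": "seconds",
    "JSHeapUsedSize": "bytes",
    "JSHeapTotalSize": "bytes",
}

def annotate_metrics(metrics: dict) -> dict:
    """Attach a (guessed) unit label to each raw metric value."""
    return {
        name: f"{value} {ASSUMED_UNITS.get(name, '<unit undocumented>')}"
        for name, value in metrics.items()
    }
```

For example, `annotate_metrics({"LayoutDuration": 0.137})` yields `{"LayoutDuration": "0.137 seconds"}` only if the guess is right, which is exactly what the documentation should confirm.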
2,520,713,607 | vscode | vscode.dev type acquisition fails if project has a `GitHub` dependency | 1. For the project https://github.com/mjbvz/test-ts-ata-web, open on GitHub.dev/vscode.dev: https://github.dev/mjbvz/test-ts-ata-web
1. Open `index.ts`
**Bug**
Loading never finishes
The root cause is that our type acquisition fails:
```
panicked at 'Git dependencies are not enabled. (While trying to process git+ssh://git@github.com/markedjs/marked.git#b47358cb1711b29237e074737cb4e5f9a98e3914)', crates/nassun/src/client.rs:289:24
Stack:
Error:
at T.A.wbg.__wbg_new_abda76e883ba8a5f (
...
```
cc @zkat | bug,web,papercut :drop_of_blood: | low | Critical |
2,520,759,456 | flutter | Split platform and render threads on Linux | Currently Linux uses the same thread for the platform and render tasks, see `fl_engine.cc`:
```c
FlutterCustomTaskRunners custom_task_runners = {};
custom_task_runners.struct_size = sizeof(FlutterCustomTaskRunners);
custom_task_runners.platform_task_runner = &platform_task_runner;
custom_task_runners.render_task_runner = &platform_task_runner;
```
Other platforms use separate threads. Consider doing this for Linux.
The side effect is that `FlutterCompositor.present_view_callback` will be called from a different thread and will require synchronization. Resizing will also need careful management, since a present may arrive at an unexpected size. | platform-linux,P3,team-linux,triaged-linux | low | Minor |
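To illustrate the synchronization concern above in a language-neutral way, here is a hedged Python threading sketch; `PresentGate` and its methods are illustrative names, not Flutter embedder API:

```python
import threading

class PresentGate:
    """Illustrative only: reject presents whose frame size is stale after a resize."""

    def __init__(self, width: int, height: int):
        self._lock = threading.Lock()
        self._expected = (width, height)

    def resize(self, width: int, height: int) -> None:
        # Platform thread: record the new expected size.
        with self._lock:
            self._expected = (width, height)

    def try_present(self, width: int, height: int) -> bool:
        # Render thread: only present frames that match the expected size.
        with self._lock:
            return (width, height) == self._expected
```

The platform thread would call `resize` while the render thread calls `try_present`, dropping frames whose size no longer matches.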
2,520,762,014 | PowerToys | Screen Ruler | ### Description of the new feature / enhancement
Adding ability to do conversion between pixels and a unit of measure (e.g. instead of showing 110 pixels on screen showing 24 inches)
### Scenario when this would be used?
When looking at drawing or PDF or image instead of manually writing out and converting between a known dimension and pixel count, having the tool do the conversion automatically in the tool.
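The conversion described above boils down to calibrating a units-per-pixel scale from one known dimension; a minimal sketch (function names are hypothetical):

```python
def scale_from_reference(known_pixels: float, known_length: float) -> float:
    """Derive units-per-pixel from a dimension the user already knows."""
    if known_pixels <= 0:
        raise ValueError("reference must span at least one pixel")
    return known_length / known_pixels

def pixels_to_units(pixels: float, scale: float) -> float:
    """Convert a measured pixel span to physical units."""
    return pixels * scale
```

For the example in the request: if a 110 px span is known to be 24 inches, `scale_from_reference(110, 24.0)` gives the scale, and every other pixel measurement converts with `pixels_to_units`.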
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,520,800,474 | flutter | Add Linux fixture tests | Add tests that have both Dart and native code, as supported on other platforms (e.g. Windows). | platform-linux,P3,team-linux,triaged-linux | low | Minor |
2,520,801,085 | flutter | All animations disabled by transition animation scale | ### Steps to reproduce
1. Go to your device's developer options
2. Set transition animation scale to 0x (off)
3. Run any Flutter app (that has been built with v3.24.2)
4. Observe and endure
### Expected results
Typically only the animator duration scale will affect ripple effects, so changing the transition animation scale should have no effect
### Actual results
The animator duration scale has no effect, and the transition animation scale affects all/most Flutter animations when set to 0x, with no in-between
I should note this issue was not present the last time I tested a Flutter app (a month or so ago), which leads me to believe it's not entirely related to [this issue](https://github.com/flutter/flutter/issues/153910)
### Code sample
<details open><summary>Code sample</summary>
Simple app to show the lack of ripple
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData(useMaterial3: true),
      home: Scaffold(
        body: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              ListTile(title: const Text("Tap"), onTap: () => ()),
            ],
          ),
        ),
        floatingActionButton: FloatingActionButton(
            onPressed: () => (), child: const Icon(Icons.add)),
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open><summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/f0fb53cc-7527-489f-b3c0-c840f7c51bb1
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on Fedora Linux 40 (KDE Plasma) 6.10.7-200.fc40.x86_64, locale en_US.UTF-8)
• Flutter version 3.24.2 on channel stable at /home/fedora/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (8 days ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /home/fedora/Android/Sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /home/fedora/.local/share/JetBrains/Toolbox/apps/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• clang version 18.1.6 (Fedora 18.1.6-3.fc40)
• cmake version 3.28.2
• ninja version 1.12.1
• pkg-config version 2.1.1
[✓] Android Studio (version 2024.1)
• Android Studio at /home/fedora/.local/share/JetBrains/Toolbox/apps/android-studio
• Flutter plugin version 81.0.2
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Fedora Linux 40 (KDE Plasma) 6.10.7-200.fc40.x86_64
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,a: animation,has reproducible steps,P3,team-android,triaged-android,found in release: 3.24,found in release: 3.26 | low | Major |
2,520,806,947 | vscode | No way to detect/display added/deleted Notebook Metadata in Notebook Diff View | See here for more details https://github.com/microsoft/vscode/issues/228280 | bug,notebook-diff | low | Minor |
2,520,808,719 | godot | Edge case behavior in Project settings search | ### Tested versions
4.3stable
### System information
Windows 10
### Issue description
I'm not sure if this is a bug, or all this behavior is intended.
When using the Project Settings filter search box to search for an advanced setting, and the `Advanced Settings` toggle is `off`:
- The setting is found. **This is good.**
- The text box with the full name of the setting, and the tools to set overrides, is not shown. **I don't know if this is intended.**
- If you select a page and the setting itself, then clear out the search box, the setting disappears. **This seems bad.** My use case was to search for a setting, then check its page for related settings that might matter; when I cleared the search box, the setting I had found disappeared.
I don't know what the right behavior for this is, though. The current behavior seems mostly intentional. It's good to surface advanced settings if searched for, even if the toggle is off. It's obviously good to only show advanced settings if the advanced settings toggle is on. But these behaviors interact to produce a confusing edge case here.
I suppose we could switch on the advanced settings toggle for the user, especially if one of the advanced settings is *changed* during the search. Or we could display any changed advanced settings, even if the toggle is off. But that would create another case of "disappearing settings" if the setting is reverted.
Maybe none of this really matters, since turning on advanced settings is just correct for anyone seriously using the engine.
### Steps to reproduce
Turn off the `Advanced settings` toggle.
Search for an advanced setting, like `Anisotropic Filtering Level`. Navigate to the `Textures` page.

Observe that the setting is found. Observe that the panel with the full text name of the setting and overrides is not shown. Select the setting. Clear the `Filter Settings` search box. Observe that the Textures page is still selected, but now the setting we are looking for is gone.

Turn on the `Advanced settings` toggle. Observe that the panel with the full text name of the setting and overrides is shown:

### Minimal reproduction project (MRP)
N/A | discussion,topic:editor | low | Critical |
2,520,839,384 | vscode | Disabling "Use Tab Stops" should be respected in more indentation contexts | Type: <b>Bug</b>
I work a lot in C / C++ / Fortran code that uses different indentation amounts in different constructs. For example, the body of a function / subroutine might be indented by 2 spaces for the top-level indentation, but other blocks (bodies of conditionals, loops, etc.) are indented by an additional 4 spaces. I'd like to have indentation set at 4 spaces so it works right in most places, and then have VSCode always indent by an additional 4 spaces relative to the previous line, no matter what column the previous line was on, both with the automatic indentation when hitting enter after starting a block, and with use of the tab key. Instead, VSCode seems to always want to indent to a "tab stop" - i.e., a multiple of 4 spaces in an absolute sense from the left margin - in these two situations.
It seems like the "Use Tab Stops" setting is designed to control this behavior. And indeed, disabling this setting leads to correct / expected behavior when running the "Indent Line" command ("Cmd-]"), leading this command to always insert 4 spaces (or whatever the tab size is set to). However, disabling this setting seems to have no impact on automatic indentation or the behavior of the tab key. **It seems that the behavior of both automatic indentation and the tab key should honor "Use Tab Stops" similarly to the "Indent Line" command - i.e., indenting by a fixed amount relative to the previous line (or, stating that differently: the tab key should insert a fixed number of spaces), not based on tab stops.**
I have reproduced this issue in a fresh installation of VSCode Insiders with no extensions (adding the C/C++ extension pack does not change this behavior), where I have changed only two settings: Disabling "Editor: Use Tab Stops" and disabling "Editor: Detect Indentation".
To reproduce this:
(1) From a fresh installation of VSCode, change two settings: Disable "Editor: Use Tab Stops" and disable "Editor: Detect Indentation".
(2) Create a C file with the following contents (note that indentation of the existing content uses 2 spaces):
```C
#include <stdio.h>

int main()
{
  int x = 0;

  printf("%d\n", x);

  if (x == 0) {
  }

  printf("%d\n", x);
}
```
(3) Change indentation to use spaces and change the tab display size to 4 if not already set that way.
(4) Position the cursor after the `{` at the end of line 9.
(5) Hit "enter". Notice that the cursor is now at column 5; I would expect it to be at column 7 based on these settings of "Use Tab Stops" disabled and a tab display size of 4.
An alternative reproducer is to do the same steps 1-4 but with "Editor: Auto Indent" set to "keep", and then:
(5) Hit "enter". The cursor should be at column 3 (as expected).
(6) Hit "tab". Notice that the cursor is now at column 5; I would expect it to be at column 7 based on these settings of "Use Tab Stops" disabled and a tab display size of 4.
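The two indentation strategies contrasted in this report can be written out directly (a sketch using 0-based columns; add 1 to get the editor columns quoted above):

```python
def next_tab_stop(col: int, tab_size: int) -> int:
    """Tab-stop behavior: advance to the next multiple of tab_size."""
    return (col // tab_size + 1) * tab_size

def fixed_indent(col: int, tab_size: int) -> int:
    """'Use Tab Stops' disabled: indent a fixed amount past the current column."""
    return col + tab_size
```

With the previous line indented 2 spaces and a tab size of 4, `next_tab_stop(2, 4)` gives 4 spaces of indentation (editor column 5), while `fixed_indent(2, 4)` gives 6 (editor column 7), matching the discrepancy in the reproducer.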
Here are screencasts demonstrating this issue:
First, here is a screencast where "Editor: Auto Indent" is set to "keep". I positioned the cursor after the `{` at the end of line 9, then hit "enter" and then "Cmd-]" to run "Indent Line". This exhibits what I would consider the correct / expected behavior:
https://github.com/user-attachments/assets/d7e9dc97-50be-415f-bf98-18f216003710
Second, here is a screencast where "Editor: Auto Indent" is still set to "keep". I positioned the cursor after the `{` at the end of line 9, then hit "enter" and then "tab". This incorrectly indents to column 5 instead of column 7 (the second version of the reproducer above):
https://github.com/user-attachments/assets/16df9403-e5fb-4423-9d69-e9ac7f466289
Third, here is a screencast where "Editor: Auto Indent" is reverted to its default ("full"). I positioned the cursor after the `{` at the end of line 9, then hit "enter". This again incorrectly indents to column 5 instead of column 7 (the first version of the reproducer above):
https://github.com/user-attachments/assets/b30bd07f-f5bc-4a71-8eb6-b35711f9c6b9
Note that the indentation will later be fixed if a formatter is enabled (e.g., after typing the semicolon at the end of the line if "Format on type" is enabled), but the behavior of auto-indentation and the tab key should honor "Use Tab Stops" without relying on a formatter.
VS Code version: Code - Insiders 1.94.0-insider (Universal) (8b7eb51f54d7e1492d9baf70454ab6547a4ff9df, 2024-09-10T05:04:21.546Z)
OS version: Darwin arm64 23.5.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|5, 7, 7|
|Memory (System)|32.00GB (0.05GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
bdiig495:31013172
a69g1124:31018687
dvdeprecation:31040973
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nb_pri_only:31057983
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31119334
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-c:31125598
3ad50483:31111987
jh802675:31132134
e80f6927:31120813
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | bug,editor-autoindent | low | Critical |
2,520,844,502 | pytorch | Torch compile disables denormal floating-point values | I am experimenting with floating-point operations and encountered an issue where, after compiling a function with torch.compile, my denormal values disappear. As seen in the provided code below, some_variable prints 0.0 after using torch.compile.
```python
import torch

some_variable = 1e-310
some_tensor = torch.rand((5, 5))

def func(x: torch.Tensor):
    return x**2

print("Before compile:", some_variable)
func_compiled = torch.compile(func)
func_compiled(some_tensor)
print("After compile:", some_variable)
```
Prints:
Before compile: 1e-310
After compile: 0.0
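A quick torch-free way to check whether the current process is flushing subnormals (a diagnostic sketch; the FTZ/DAZ explanation is my assumption about the mechanism):

```python
import sys

def subnormals_flushed() -> bool:
    """True if the FPU flushes subnormal doubles to zero.

    Halving the smallest *normal* double should give a subnormal; if
    flush-to-zero / denormals-are-zero flags have been set in this
    process, the result collapses to 0.0 instead.
    """
    subnormal = sys.float_info.min / 2.0
    return subnormal == 0.0
```

Calling this before and after `torch.compile(func)` runs would localize where the floating-point mode flips.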
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.1.74
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 470.256.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 5 5600X 6-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3700,0000
CPU min MHz: 2200,0000
BogoMIPS: 7399.28
Virtualization: AMD-V
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 3 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 2.1.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @rec | oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,520,845,868 | pytorch | core_aten_decompositions table have non-functional or CIA entries. | ### 🐛 Describe the bug
The core_aten_decompositions table is meant to be used in the export context, where we can assume every key is a functional op, since the table is only applied after functionalization has run. As a result, any non-functional entries are effectively no-ops. We should remove them and implement tighter checks around core_aten_decompositions().
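One possible shape for a tighter check is a name-based screen for obviously non-functional entries. This is a heuristic sketch only; a real check should consult op schemas (alias/mutation info) rather than names:

```python
def non_functional_suspects(op_names):
    """Heuristic: flag ATen-style names whose base op ends with '_'
    (in-place convention) or whose overload is 'out' (out-variant).
    Name-based only; schema-based checks are the real fix."""
    flagged = []
    for name in op_names:
        parts = name.replace("::", ".").split(".")
        base = parts[1] if len(parts) > 1 else parts[0]
        overload = parts[2] if len(parts) > 2 else ""
        if base.endswith("_") or overload == "out":
            flagged.append(name)
    return flagged
```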
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4 @zou3519 @bdhirsh @yf225 @ezyang | triaged,oncall: pt2,export-triaged,oncall: export,module: pt2-dispatcher | low | Critical |
2,520,918,793 | next.js | Link Component does not prefetch on network recovery. | ### Link to the code that reproduces this issue
https://github.com/refirst11/reproduction-app
### To Reproduce
Sometimes unnecessary mounts are triggered.
Also, the canary release e2e testing is a special case.
Let's say the start of the route is "/" and it goes offline in this state.
"/a", "/b", and "/c" are in the viewport at this point, so there will be a smooth transition when transitioning to them.
At this point it is not possible to transition to the links "/d", "/e", and "/f" on pages a, b, and c. But let's say the network is restored, for example after a subway or train tunnel. In that case, when transitioning to pages d, e, and f, a full network request is sent and the page is reloaded, as with a plain a tag.
I call this mounting on recovery in Next.js.
However, since the prefetch only needs to fire when the network comes back online,
I thought it would be fixed by adding the following to the dependencies of React.useEffect.
In my environment, I ran the Next.js e2e tests with
pnpm test-dev test/e2e/app-dir/app-prefetch/prefetching.test.ts
It's possible that I overlooked something, but all of the test code in the describe block of prefetching.test.ts passes (even though this case fails in practice), so I decided to file this as an issue here.
### Current vs. Expected behavior
When the network becomes online, link components in the viewport are prefetched.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Sat Jul 13 00:56:26 PDT 2024; root:xnu-11215.0.165.0.4~50/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.8.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.6.0
Relevant Packages:
next: 14.2.5 // There is a newer version (14.2.9) available, upgrade recommended!
eslint-config-next: 13.5.6
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Performance
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
Deploy is with vercel and next start.
I've been using Next.js since version 11, but even back then there was no prefetching during network restarts.
## I want to improve
```ts
// use-network.ts
import { useEffect, useState } from 'react'

export function useNetwork() {
  const isClient =
    typeof window !== 'undefined' && typeof navigator.onLine === 'boolean'
  const readOnlineStatus = isClient && window.navigator.onLine
  const [isOnline, setIsOnline] = useState(readOnlineStatus)

  useEffect(() => {
    function updateOnlineStatus() {
      setIsOnline(window.navigator.onLine)
    }
    window.addEventListener('online', updateOnlineStatus)
    window.addEventListener('offline', updateOnlineStatus)
    return () => {
      window.removeEventListener('online', updateOnlineStatus)
      window.removeEventListener('offline', updateOnlineStatus)
    }
  }, [])

  return isOnline
}
```
```ts
// link.tsx
...
const [setIntersectionRef, isVisible, resetVisible] = useIntersection({
  rootMargin: '200px',
})
const isOnline = useNetwork()

React.useEffect(() => {
  // in dev, we only prefetch on hover to avoid wasting resources as the prefetch will trigger compiling the page.
  if (process.env.NODE_ENV !== 'production') {
    return
  }
  if (!router) {
    return
  }
  // If we don't need to prefetch the URL, don't do prefetch.
  if (!isVisible || !isOnline || !prefetchEnabled) {
    return
  }
  // Prefetch the URL.
  prefetch(
    router,
    href,
    as,
    { locale },
    {
      kind: appPrefetchKind,
    },
    isAppRouter
  )
}, [
  as,
  href,
  isVisible,
  locale,
  prefetchEnabled,
  pagesRouter?.locale,
  router,
  isAppRouter,
  appPrefetchKind,
  isOnline,
])
```
## e2e test case
```ts
describe('online/offline transitions', () => {
  it('should handle transition from offline to online correctly', async () => {
    const browser = await next.browser('/static-page')
    let requests = []
    browser.on('request', (req) => {
      requests.push(new URL(req.url()).pathname)
    })
    // prefetch wait time
    await waitFor(1000)
    await browser.eval('navigator.onLine = false')
    // Link component "/" click
    // and request dashboard.
    await browser
      .elementByCss('#to-home')
      .click()
      .waitForElementByCss('#to-dashboard')
    await browser.eval('navigator.onLine = true')
    // prefetch wait time
    await waitFor(1000)
    expect(requests.filter((req) => req.includes('/dashboard')).length).toBe(0)
    await waitFor(1000)
    const before = Date.now()
    await browser
      .elementByCss('#to-dashboard')
      .click()
      .waitForElementByCss('#dashboard-layout')
    const after = Date.now()
    const timeToComplete = after - before
    // Ensure the dashboard page is prefetched
    expect(timeToComplete).toBeLessThan(20)
  })
})
```
| create-next-app,bug,Performance | low | Major |
2,520,927,826 | realworld | [Bug]: articles api is broken | ### Relevant scope
Backend specs
### Description
Not sure if the specs changed.
open: https://demo.realworld.how/
500 internal server error:
"\nInvalid `prisma.article.count()` invocation:\n\n\nError occurred during query execution:\nConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(PostgresError { code: \"26000\", message: \"prepared statement \\\"s1555\\\" does not exist\", severity: \"ERROR\", detail: None, column: None, hint: None }), transient: false })"
| bug | low | Critical |
2,520,950,477 | material-ui | [TextareaAutosize] Support max height | It should be possible to set a max height on a TextareaAutosize and then scroll after it reaches that height. But the component hides overflow in the style tag, so it’s impossible.
**Search keywords**: | bug 🐛 | low | Minor |
2,520,950,812 | material-ui | [TextareaAutosize] Deprecate TextareaAutosize | ### What's the problem? 🤔
Today's implementation of TextareaAutosize depends, more or less, on option 1 of https://stackoverflow.com/questions/454202/creating-a-textarea-with-auto-resize; it's more complex because it also supports `maxRows` and `minRows`.
Looking at https://github.com/tkent-google/explainers/blob/main/form-sizing.md, it looks like a native solution for this component is coming to the platform. Also see https://twitter.com/jh3yy/status/1710398436917321799.
It looks like we will no longer need this component in a year or two. Time to think of its deprecation path?
**Search keywords**: | on hold,deprecation,component: TextareaAutosize | low | Major |
2,520,950,975 | material-ui | [TextareaAutosize] Unstable height when rendered in a Next.js RSC page | ### Steps to reproduce 🕹
Live example: https://codesandbox.io/p/sandbox/https-github-com-mui-material-ui-issues-38607-5h5ndr?file=%2Fsrc%2Fapp%2Fpage.tsx%3A22%2C1
The multiline TextField has an unstable height on the Next.js app router when I reload the page.
The same thing happens on first page load.
For the bug reproduction, I used material-ui-nextjs-ts from the example folder.
Video below.
https://github.com/mui/material-ui/assets/57659794/a37ddb07-246a-478e-b360-73e6518836b9
### Current behavior 😯
_No response_
### Expected behavior 🤔
_No response_
### Context 🔦
_No response_
### Your environment 🌎
<details>
<summary>Expand</summary>
System:
OS: Windows 10 10.0.19044
Binaries:
Node: 16.16.0 - C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm: 8.11.0 - C:\Program Files\nodejs\npm.CMD
Browsers:
Chrome: 116.0.5845.97
Edge: Spartan (44.19041.1266.0), Chromium (116.0.1938.54)
npmPackages:
@emotion/react: latest => 11.11.1
@emotion/styled: latest => 11.11.0
@mui/base: 5.0.0-beta.11
@mui/core-downloads-tracker: 5.14.5
@mui/icons-material: latest => 5.14.3
@mui/material: latest => 5.14.5
@mui/private-theming: 5.14.5
@mui/styled-engine: 5.13.2
@mui/system: 5.14.5
@mui/types: 7.2.4
@mui/utils: 5.14.5
@types/react: latest => 18.2.21
react: 18.x => 18.2.0
react-dom: 18.x => 18.2.0
typescript: latest => 5.1.6
</details>
**Search keywords**: | bug 🐛,component: TextareaAutosize,regression | low | Critical |
2,520,979,819 | go | net/http: Unexpected "define Request.GetBody to avoid this error" error after setting Request.GetBody | ### Go version
go version go1.23.0 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/xxx/Library/Caches/go-build'
GOENV='/Users/xxx/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
```
### What did you do?
I have not been able to make a stand alone example, but will walk through the exact code path that is triggered.
The offending code is as follows:
```go
go func(client *http.Client) {
	dataToPost := []byte("somedata")
	// if we don't set the body here, the first attempt seems to send with an empty body even though GetBody is set
	req, err := http.NewRequest(http.MethodPost, "http://127.0.0.1/someapi", bytes.NewBuffer(dataToPost))
	if err != nil {
		log.Fatalln("failed to create request!")
	}
	req.GetBody = func() (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewBuffer(dataToPost)), nil
	}
	res, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
}(someSharedClient)
```
Which under very particular circumstances, the call to `client.Do` returns the error `http2: Transport: cannot retry err [http2: Transport received Server's graceful shutdown GOAWAY] after Request.Body was written; define Request.GetBody to avoid this error`, even though `GetBody` has been set.
To get here, the request must be redirected followed by a [retry-able](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/h2_bundle.go#L7821) HTTP/2 error such as GOAWAY. (I believe it also may require a cached HTTP/2 connection from a previous call).
How I managed this was the following setup:
`Load generating go program (the one getting the error) with at least two go routines sending on the same client` <-> `nginx minikube ingress` <-> `some kubernetes service`
The code path is as follows:
1. The first attempt to send the request gets redirected to https (with support for HTTP/2), entering the second iteration of [this](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/client.go#L636) loop. I did not determine if a redirect from an HTTP/2 endpoint to another HTTP/2 endpoint would cause the same errors.
2. Since the first loop added the request to the reqs slice [here](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/client.go#L722) we enter [this](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/client.go#L639) if statement.
3. Notice the `GetBody` function does not get set when the original request is copied [here](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/client.go#L661C1-L678C5).
4. The new request enters the http2 round tripper [here](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/transport.go#L563).
5. The second attempt to send the request is met by a retry-able error [here](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/h2_bundle.go#L7739) (in my case it appears to be a GOAWAY sent by nginx after some number of requests as [described](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests)), and `http2shouldRetryRequest` is [called](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/h2_bundle.go#L7742).
6. Since `GetBody` was not copied on the redirect, [this](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/h2_bundle.go#L7799C1-L7809C3) section fails to copy the body, and we reach the resulting error [here](https://github.com/golang/go/blob/a74951c5af5498db5d4be0c14dcaa45fb452e23a/src/net/http/h2_bundle.go#L7818).
### What did you see happen?
I received an error telling me to set `GetBody` to avoid said error, even though `GetBody` had in fact been set.
### What did you expect to see?
I would expect either the `GetBody` function pointer to be copied on redirect, or the error returned to better reflect the actual error condition. | NeedsInvestigation | low | Critical |
2,520,997,311 | go | proposal: html/template: add a formatter | ### Proposal Details
Go was a trailblazer of standardized, no-knobs code formatting. Yet one of the major components of the standard library--one in which I've personally written tens of thousands of lines of code--has no usable formatter. (There's one abandoned open source one that has showstopper bugs. And prettier is oblivious to the Go templating code, with predictable results.)
The surface area would be:
* a tool, similar to go fmt, that formats an entire file
* a single API that accepts a string and returns a string and an error, for use e.g. by gopls and linters (e.g. for checking templates stored in literal strings)
I propose that the details of the formatting be left up to the Go team (or whoever steps up to do the implementation), maybe after a preference-gathering round. I care about the details...but I care WAY more that there be a single unified vision behind it and that it be standardized.
| Proposal | medium | Critical |
2,521,026,631 | go | build: build failure on x_tools-go1.23-linux-amd64_c2s16-perf_vs_gopls_0_11 [consistent failure] | ```
#!watchflakes
default <- builder == "x_tools-go1.23-linux-amd64_c2s16-perf_vs_gopls_0_11" && repo == "tools" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737164931658588881)):
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
2024/09/10 16:54:16 Load average: 0.76 0.16 0.05 1/399 2513451
2024/09/10 16:54:16 Waiting for load average to drop below 0.20...
2024/09/10 16:54:46 Load average: 0.53 0.16 0.06 1/399 2513451
2024/09/10 16:54:46 Waiting for load average to drop below 0.20...
2024/09/10 16:55:16 Load average: 0.32 0.15 0.05 1/399 2513451
2024/09/10 16:55:16 Waiting for load average to drop below 0.20...
2024/09/10 16:55:46 Load average: 0.19 0.13 0.05 1/401 2513451
2024/09/10 16:55:46 Running sub-repo benchmarks for tools
...
go: downloading golang.org/x/sync v0.8.0
go: downloading golang.org/x/vuln v1.0.4
go: downloading honnef.co/go/tools v0.4.7
go: downloading mvdan.cc/gofumpt v0.6.0
go: downloading mvdan.cc/xurls/v2 v2.5.0
go: downloading golang.org/x/text v0.18.0
go: downloading golang.org/x/exp/typeparams v0.0.0-20221212164502-fae10dda9338
2024/09/10 18:53:05 Error running subrepo tests: error running sub-repo tools benchmark "baseline" with toolchain baseline in dir /home/swarming/.swarming/w/ir/x/w/tools/gopls/internal/test/integration/bench: exit status 1
2024/09/10 18:53:05 FAIL
exit status 1
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,521,040,781 | material-ui | [pigment] Exception when importing xxxClasses from @mui/material barrel index | ### Steps to reproduce
Link to live example: https://codesandbox.io/p/devbox/lingering-shape-s9cvl7?file=%2Fsrc%2Fapp%2Fpage.tsx
Steps:
1. Use `styled` from `@mui/material-pigment-css`
2. Try styling component that can be styled with imported MUI classes
```tsx
import { styled } from "@mui/material-pigment-css";
import { Button, buttonClasses } from "@mui/material";
export const StyledButton = styled(Button)(({ theme }) => ({
[theme.breakpoints.up("md")]: {
[`& > .${buttonClasses.startIcon}`]: {
display: "none",
},
},
}));
```
### Current behavior
NextJS fails to build and re-throws an exception from pigment
```
⨯ unhandledRejection: TypeError: /project/sandbox/src/app/page.tsx: Cannot read properties of undefined (reading 'startIcon')
at /project/sandbox/src/app/page.tsx:16:153
at StyledProcessor.processCss (/project/sandbox/node_modules/@pigment-css/react/processors/styled.js:373:59)
at StyledProcessor.processStyle (/project/sandbox/node_modules/@pigment-css/react/processors/styled.js:306:31)
at /project/sandbox/node_modules/@pigment-css/react/processors/styled.js:209:12
at Array.forEach (<anonymous>)
at StyledProcessor.build (/project/sandbox/node_modules/@pigment-css/react/processors/styled.js:208:20)
at /project/sandbox/node_modules/@wyw-in-js/transform/lib/plugins/collector.js:26:17
at /project/sandbox/node_modules/@wyw-in-js/transform/lib/utils/getTagProcessor.js:384:5
at Array.forEach (<anonymous>)
at applyProcessors (/project/sandbox/node_modules/@wyw-in-js/transform/lib/utils/getTagProcessor.js:375:10)
```
### Expected behavior
Pigment should be able to successfully build styles using class names just like it does with colors for instance
```tsx
import { red } from "@mui/material/colors";
export const StyledButton = styled(Button)(({ theme }) => ({
color: red[500], // no error
}));
```
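Until the barrel import is handled, one direction that may work (an untested sketch; it assumes `buttonClasses` evaluates statically when imported from the component subpath rather than the barrel index) is:

```tsx
import { styled } from "@mui/material-pigment-css";
import Button, { buttonClasses } from "@mui/material/Button";

export const StyledButton = styled(Button)(({ theme }) => ({
  [theme.breakpoints.up("md")]: {
    [`& > .${buttonClasses.startIcon}`]: {
      display: "none",
    },
  },
}));
```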
### Context
Migrating to V6 + pigment
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Linux 6.1 Ubuntu 20.04.6 LTS (Focal Fossa)
Binaries:
Node: 20.12.1 - /home/codespace/nvm/current/bin/node
npm: 10.5.0 - /home/codespace/nvm/current/bin/npm
pnpm: 8.15.6 - /home/codespace/nvm/current/bin/pnpm
Browsers:
Chrome: Not Found
npmPackages:
@emotion/react: 11.13.3
@emotion/styled: 11.13.0
@mui/core-downloads-tracker: 6.0.0
@mui/icons-material: 6.1.0 => 6.1.0
@mui/material: latest => 6.0.0
@mui/material-pigment-css: latest => 6.0.0
@mui/private-theming: 6.0.0
@mui/styled-engine: 6.0.0
@mui/system: 6.0.0
@mui/types: 7.2.16
@mui/utils: 6.0.0
@pigment-css/nextjs-plugin: latest => 0.0.20
@pigment-css/react: 0.0.20
@pigment-css/unplugin: 0.0.20
@types/react: latest => 18.3.4
react: latest => 18.3.1
react-dom: latest => 18.3.1
typescript: latest => 5.5.4
```
</details>
__
**Search keywords**: pigment, pigment-css, buttonClasses | regression,package: pigment-css,v6.x migration | low | Critical |
2,521,074,994 | ollama | Model update history on ollama.com | ### Would be nice if we can see what has been updated for a model on the ollama.com

| feature request | low | Major |
2,521,119,824 | stable-diffusion-webui | [Bug]: Can not get result images to keep input image name/preview img bugged | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
So, I am not sure if I am just missing a setting or what, but while batch upscaling in the "Extras" menu, the output images have a different name than the input images. This means hours of renaming files... If this feature DOES NOT exist, is it something that could maybe be added? I would be surprised such a small feature wouldn't have been incorporated, which is why I think I'm doing something wrong... or that there is a bug?
Then the second issue, which is definitely a bug: if you reuse the seed from a prior generation, the preview image won't update.
So definitely 1 bug, if not 2. I do texture packs, so it is very important that the file names I put INTO Forge come back out unchanged. Even with a prefix or suffix added, I could bulk rename to remove those.
### Steps to reproduce the problem
1. img2img, set seed to -1.
2. Generate.
3. Copy the seed from the generated file, and use it in the UI for the next images.
4. Every image you generate with this seed, until you close and restart, will show the same image in the preview on the web UI, but in the folder the image looks right.
### What should have happened?
I should see the newly generated image.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-09-12-01-26.json](https://github.com/user-attachments/files/16972063/sysinfo-2024-09-12-01-26.json)
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-421-g59dd981f
Commit hash: 59dd981fa78b767a9973a8cd1d555e3cb851c62b
Launching Web UI with arguments:
Total VRAM 8192 MB, total RAM 32530 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2070 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
F:\Raw Data Set Imgs\webui_forge_cu121_torch231\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: F:\Raw Data Set Imgs\webui_forge_cu121_torch231\webui\models\ControlNetPreprocessor
2024-09-11 19:41:49,362 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\Stable-diffusion\\cyberrealistic25D_v10.safetensors', 'hash': '2052b672'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 53.3s (prepare environment: 13.2s, import torch: 22.0s, initialize shared: 0.4s, other imports: 2.5s, list SD models: 0.1s, load scripts: 5.7s, create ui: 6.7s, gradio launch: 3.0s).
Environment vars changed: {'stream': False, 'inference_memory': 2259.0, 'pin_shared_memory': False}
[GPU Setting] You will use 72.42% GPU memory (5932.00 MB) to load weights, and use 27.58% GPU memory (2259.00 MB) to do matrix computation.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-wdn-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-wdn-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
Environment vars changed: {'stream': False, 'inference_memory': 2259.0, 'pin_shared_memory': False}
[GPU Setting] You will use 72.42% GPU memory (5932.00 MB) to load weights, and use 27.58% GPU memory (2259.00 MB) to do matrix computation.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
Loading Model: {'checkpoint_info': {'filename': 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\Stable-diffusion\\cyberrealistic25D_v10.safetensors', 'hash': '2052b672'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ...
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
F:\Raw Data Set Imgs\webui_forge_cu121_torch231\system\python\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 21.4s (unload existing model: 0.2s, forge model load: 21.1s).
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Unload] Trying to free 2673.85 MB for cuda:0 with 0 models keep loaded ...
[Memory Management] Current Free GPU Memory: 7091.00 MB
[Memory Management] Required Model Memory: 319.11 MB
[Memory Management] Required Inference Memory: 2259.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4512.89 MB
Moving model(s) has taken 0.11 seconds
To load target model JointTextEncoder
Begin to load 1 model
[Unload] Trying to free 2564.14 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 6780.59 MB ...
[Memory Management] Current Free GPU Memory: 6780.59 MB
[Memory Management] Required Model Memory: 234.72 MB
[Memory Management] Required Inference Memory: 2259.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4286.87 MB
Moving model(s) has taken 0.08 seconds
[Unload] Trying to free 2259.00 MB for cuda:0 with 1 models keep loaded ...
[Unload] Current free memory is 6436.31 MB ...
token_merging_ratio = 0.5
To load target model KModel
Begin to load 1 model
[Unload] Trying to free 4390.23 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 6436.02 MB ...
[Memory Management] Current Free GPU Memory: 6436.02 MB
[Memory Management] Required Model Memory: 1639.41 MB
[Memory Management] Required Inference Memory: 2259.00 MB
[Memory Management] Estimated Remaining GPU Memory: 2537.61 MB
Moving model(s) has taken 0.56 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.97it/s]
[Unload] Trying to free 2259.00 MB for cuda:0 with 1 models keep loaded ...████████████| 20/20 [00:04<00:00, 4.72it/s]
```
### Additional information
_No response_ | bug-report | low | Critical |
2,521,169,936 | ollama | OLLAMA_FLASH_ATTENTION regression on 0.3.10? | ### What is the issue?
After upgrading to the latest version `0.3.10`, with `OLLAMA_FLASH_ATTENTION=1` set in the environment, the tokens per second appear to have been halved: in my experiment, the same code used to reach around 23 tps and now it's only 11.
Wondering if there is any known regression with regard to FLASH_ATTENTION?
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.10 | bug | low | Minor |
2,521,213,330 | flutter | VoiceOver rotor actions limited to text fields with active keyboard focus | ### Steps to reproduce
As I mentioned in #151029 :
Suppose there are two text fields on an interface.
1. Double-tap to activate the first text field, and the keyboard pops up.
2. Then, use VoiceOver to focus on the second text field. Note: just focus on it, don't double-tap.
3. Finally, use the rotor to perform a paste operation.
Native behavior: The text will be pasted into the second text field.
Flutter behavior: Unable to perform the paste operation.
In other words, the native rotor operation targets the text field that VoiceOver is focused on.
However, Flutter rotor operation only targets the text field that has the keyboard activated.
Please use the latest master version for testing.
### Expected results
The text will be pasted into the second text field.
### Actual results
Unable to perform the paste operation.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: Text('Two Text Fields')),
body: Column(
children: [
TextField(),
TextField(),
],
),
),
);
}
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Flutter (Channel master, 3.26.0-1.0.pre.97
```
</details>
| a: text input,platform-ios,a: accessibility,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.26 | low | Major |
2,521,266,861 | pytorch | Can we remove @torch._dynamo_disable from torch.utils.checkpoint.checkpoint? | ### 🚀 The feature, motivation and pitch
`torch.utils.checkpoint.checkpoint` has a strong limitation: there must be no graph break inside the checkpointed region, and if that can't be satisfied, it falls back to eager because of this decorator. The resulting performance is then very poor.
### Alternatives
In my local test, I removed this decorator and enabled compiled_autograd (with a fix for a deadlock); the performance of llama-70b finetuning improved by about +35%.
### Additional context
llama-70b finetuning in optimum uses DeepSpeed ZeRO-3, which registers several hooks on the module; these hooks can't be traced by dynamo, and it's hard to fix these graph breaks in a short time.
It also calls the `checkpoint` function in `DecoderLayer.__call__`, so all 80 decoder layers fall back to eager.
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,needs design,oncall: pt2,module: dynamo | low | Major |
2,521,267,705 | next.js | Middleware: Cannot export config after declaration in export list format | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/nameless-hill-dt239f
### To Reproduce
Link to codesandbox is provided and it shows the issue as well.
### Current vs. Expected behavior
The config object from `middleware.ts` is loaded correctly in case 1 (exporting at declaration) while it is not loaded correctly in case 2 (exporting after declaration):
```
import { NextRequest, NextResponse } from 'next/server';
// case 1 -> works as expected
export const middleware = (request: NextRequest) => {
console.log(request.nextUrl.pathname);
return NextResponse.next();
};
export const config = {
matcher: '/about',
};
// case 2 -> does not work as expected
const middleware = (request: NextRequest) => {
console.log(request.nextUrl.pathname);
return NextResponse.next();
};
const config = {
matcher: '/about',
};
export { middleware, config };
```
When the config is loaded only paths with `/about` should be printed in the console. If not loaded, all paths are printed in the console. Also no error is reported (I am not sure if an error should be reported).
### Provide environment information
```bash
Sandbox is running on next.js version 15.0.0-canary-148. I encountered the same issue locally on 15.0.0-rc.0. Used app router in both instances.
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Exporting the config object while declaring it solves the issue for me, but the difference should only be one of syntax and not functionality. The `middleware` function is unaffected by this issue, only the `config` object is affected.
This issue might be related but I'm not completely sure: https://github.com/vercel/next.js/issues/67169
In the case that it is feel free to remove this issue/combine both issues. | bug,Middleware | low | Critical |
2,521,277,126 | rust | Write down where all the unwind/backtrace code is | ### Location
It's in at least three different locations, depending on how you count! That's the problem!
### Summary
Internal-facing documentation, mostly?
The problem is this, essentially: we have our internal unwinding/backtrace-related code in
- library/backtrace
- library/panic_abort
- library/panic_unwind
- library/std/src/sys/personality
- library/unwind
And the interrelationships are not so clean and obvious that you can simply follow imports, because sometimes the bridge is performed by raw compiler magic! Because, in fact, you do generally need compiler support to get a decent backtrace or to unwind correctly! So arguably I could also mention files like:
- rustc_codegen_ssa/src/mir/block.rs
Understanding all of these should not be the exclusive province of a few wizened sages of the dark arts. They should all backref to an index that tracks them. And if any can be eliminated and merged into others without causing horrid problems, they should be. | A-runtime,C-enhancement,T-compiler,A-docs,T-libs,E-tedious,A-backtrace | low | Minor |
2,521,284,204 | flutter | VoiceOver Rotor Operations Lack Feedback | ### Steps to reproduce
As I mentioned in #151029.
When using VoiceOver's rotor operations to perform actions such as paste, delete, etc., there is a lack of feedback after the operation is completed.
For example, with the native text fields, after pasting, it should read out the pasted text.
After selecting all, it should read out "XX has been selected."
However, in Flutter, there is no feedback information at all.
Please use the latest master version for testing.
### Expected results
There should be corresponding feedback information after each operation.
### Actual results
There is no feedback information at all.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('VoiceOver Test')),
body: const Center(
child: Padding(
padding: EdgeInsets.all(16.0),
child: TextField(
decoration: InputDecoration(
hintText: 'Enter text here',
),
),
),
),
),
);
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Flutter (Channel master, 3.26.0-1.0.pre.97
```
</details>
| platform-ios,a: accessibility,has reproducible steps,P2,team-accessibility,triaged-accessibility,found in release: 3.26 | low | Major |
2,521,314,037 | neovim | `nvim_buf_set_lines` doesn’t trigger fold update for injected TS parsers | ### Problem
`nvim_buf_set_lines` doesn’t trigger injected tree-sitter parsers to update the fold ranges.
This only happens for injected tree-sitter parsers (not the main one attached to the buffer) when `nvim_buf_set_lines` is called before `BufWinEnter` event.
### Steps to reproduce
**minimal.lua**
```lua
vim.o.foldmethod="expr"
vim.o.foldexpr="v:lua.vim.treesitter.foldexpr()"
local bufnr = vim.api.nvim_create_buf(false, false)
vim.bo[bufnr].filetype = "lua"
local lines = {
"vim.cmd([[",
[[function test()]],
[[ echom "hello"]],
[[endfunction]],
"]])",
}
-- set_lines from here doesn't update the injected parsers
vim.api.nvim_buf_set_lines(bufnr, 0, -1, false, lines)
vim.api.nvim_win_set_buf(0, bufnr)
-- set_lines from here updates the injected parsers
-- vim.api.nvim_buf_set_lines(bufnr, 0, -1, false, lines)
```
- `nvim --clean -u minimal.lua`
- See the `function … endfunction` is not folded
### Expected behavior
`nvim_buf_set_lines` should trigger fold updates for all injected parsers.
### Neovim version (nvim -v)
0.10.1
### Vim (not Nvim) behaves the same?
no, this is tree-sitter issue
### Operating system/version
ubuntu 22.04
### Terminal name/version
blink shell
### $TERM environment variable
xterm-256color
### Installation
build from repo | bug,api,treesitter | low | Minor |
2,521,337,993 | go | archive/zip: improve Zip64 compatibility with 7z | ### Go version
go1.23.1
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/sangmin5.lee/.cache/go-build'
GOENV='/home/sangmin5.lee/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/sangmin5.lee/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/sangmin5.lee/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/sangmin5.lee/dev/go/goroot'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/sangmin5.lee/dev/go/goroot/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/sangmin5.lee/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build4058537502=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
* How to reproduce this issue
** Prepare two files with each size of 5G, 1M
$ touch test_5G
$ shred -n 1 -s 5G test_5G
$ touch test_1M
$ shred -n 1 -s 1M test_1M
** Create zipfile to have 5G file
```go
// error handling and imports ("archive/zip", "io", "os") elided for brevity
zipfile, err := os.Create("s5G.zip")
zipWriter := zip.NewWriter(zipfile)
newfile, err := os.Open("test_5G")
fileInfo, err := newfile.Stat()
header, err := zip.FileInfoHeader(fileInfo)
header.Name = "test_5G"
header.Method = zip.Deflate
writer, err := zipWriter.CreateHeader(header)
_, err = io.Copy(writer, newfile)
// close the zip writer so the central directory / Zip64 records are written
err = zipWriter.Close()
err = zipfile.Close()
```
** Get 7z from https://sourceforge.net/projects/sevenzip/files/7-Zip/23.01/ or higher
and try to add 1M file to created zip
$ 7zz a s5G.zip test_1M
### What did you see happen?
7-Zip (z) 23.01 (x86) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20
32-bit ILP32 locale=en_US.utf8 Threads:96 OPEN_MAX:131072, ASM
Open archive: s5G.zip
WARNINGS:
Headers Error
--
Path = s5G.zip
Type = zip
WARNINGS:
Headers Error
Physical Size = 5370519708
64-bit = +
Characteristics = Zip64
Scanning the drive:
1 file, 1048576 bytes (1024 KiB)
Updating archive: s5G.zip
Keep old data in archive: 1 file, 5368709120 bytes (5120 MiB)
Add new data to archive: 1 file, 1048576 bytes (1024 KiB)
System ERROR:
E_NOTIMPL : Not implemented
### What did you expect to see?
Everything is OK without errs and the contents should be listed
$ unzip -l 5G.zip
Archive: 5G.zip
Length Date Time Name
--------- ---------- ----- ----
1048576 2024-09-11 07:47 test_1M
5368709120 2024-09-11 07:50 test_5G
--------- -------
5369757696 2 files
| NeedsInvestigation | low | Critical |
2,521,359,103 | godot | Functions passed as arguments cannot be called with `()` operator, but error message is insufficiently clear | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Windows 11
### Issue description
``` gdscript
# The following does not work. Error: Function "call_me()" not found in base self.
var fn1 = func(call_me): call_me()
# No error. It refers to the local variable "call_me" instead of self. That's correct.
var fn2 = func(call_me): print(call_me)
# Workaround: Use call or wrap it in Callable. Not very comfortable.
var fn3 = func(call_me): call_me.call()
var fn4 = func(call_me): Callable(call_me).call()
```
I expected the local variable (parameter) `call_me` to be used, but it refers to `self`. The local variable should be resolved first, then fall back to the class (`self`) and globals. This seems to be a bug, or at least something that could be changed.
### Steps to reproduce
Code is worth a thousand words. Please see above.
### Minimal reproduction project (MRP)
Please paste the code block above in your Godot editor. | enhancement,topic:gdscript | low | Critical |
2,521,391,687 | rust | Tracking issue for future-incompatibility lint `unsupported_fn_ptr_calling_conventions` | This is the **summary issue** for the `unsupported_fn_ptr_calling_conventions` future-compatibility warning. The goal of this page is describe why this change was made and how you can fix code that is affected by it. It also provides a place to ask questions or register a complaint if you feel the change should not be made. For more information on the policy around future-compatibility warnings, see our [breaking change policy guidelines](https://rustc-dev-guide.rust-lang.org/bug-fix-procedure.html#tracking-issue-template).
### What is the warning for?
The `unsupported_fn_ptr_calling_conventions` lint is output whenever there is a use of a target dependent calling convention on a target that does not support this calling convention on a function pointer.
For example `stdcall` does not make much sense for a x86_64 or, more apparently, powerpc code, because this calling convention was never specified for those targets.
### Example
```rust,ignore (needs specific targets)
fn stdcall_ptr(f: extern "stdcall" fn ()) {
f()
}
```
This will produce:
```text
warning: use of calling convention not supported on this target on function pointer
--> $DIR/unsupported.rs:34:15
|
LL | fn stdcall_ptr(f: extern "stdcall" fn()) {
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #130260 <https://github.com/rust-lang/rust/issues/130260>
= note: `#[warn(unsupported_fn_ptr_calling_conventions)]` on by default
```
### Explanation
On most of the targets the behavior of `stdcall` and similar calling conventions is not defined at all, but was previously accepted due to a bug in the implementation of the compiler.
### Recommendations
Use `#[cfg(…)]` annotations to ensure that the ABI identifiers are only used in combination with targets for which the requested ABI is well specified.
### When will this warning become a hard error?
At the beginning of each 6-week release cycle, the Rust compiler team will review the set of outstanding future compatibility warnings and nominate some of them for Final Comment Period. Toward the end of the cycle, we will review any comments and make a final determination whether to convert the warning into a hard error or remove it entirely.
Implemented in #128784
See also #87678 for the similar `unsupported_calling_conventions` lint. | A-lints,T-compiler,C-future-incompatibility,C-tracking-issue,A-ABI | low | Critical |
2,521,433,169 | ui | [bug]: vertical separator not rendering | ### Describe the bug
When using the separator with a "vertical" orientation it does not render properly. Looking at the browser dev tools it appears to have a size of 1x0 pixels.
Changing the default style here to use `min-h-full` instead of just `h-full` seems to fix this.
Seems to be the same as this [issue](https://github.com/huntabyte/shadcn-svelte/issues/1149) encountered in the shadcn-svelte port
### Affected component/components
Separator
### How to reproduce
1. Place a Separator in a flex parent with two other components and orientation "vertical"
```javascript
<div className="flex flex-row flex-nowrap gap-4 h-fit">
<p>Foo</p>
<Separator orientation="vertical" />
<p>Bar</p>
</div>
```
2. The separator will not appear
3. Change style within components/ui/separator.tsx
```javascript
<SeparatorPrimitive.Root
ref={ref}
decorative={decorative}
orientation={orientation}
className={cn(
'shrink-0 bg-border',
orientation === 'horizontal' ? 'h-[1px] w-full' : 'min-h-full w-[1px]',
className
)}
{...props}
/>
```
4. Separator will now render
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/stackblitz-starters-t7h9sc?file=components%2Fui%2Fseparator.tsx
### Logs
_No response_
### System Info
```bash
M2 Mac,
Firefox
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,521,433,539 | rust | Diagnostic on missing "?" suggests to ignore the error entirely, which can be problematic | ### Code
```Rust
fn fallible() -> Result<i32, ()> {
Ok(42)
}
pub fn caller() -> Result<(), ()> {
fallible();
Ok(())
}
```
### Current output
```Shell
warning: unused `Result` that must be used
--> src/lib.rs:6:5
|
6 | fallible();
| ^^^^^^^^^^
|
= note: this `Result` may be an `Err` variant, which should be handled
= note: `#[warn(unused_must_use)]` on by default
help: use `let _ = ...` to ignore the resulting value
|
6 | let _ = fallible();
| +++++++
```
### Desired output
```Shell
Rust should suggest adding the missing `?`, not ignoring the error entirely.
```
### Rationale and extra context
See https://github.com/rust-lang/miri/issues/3855 for an example where ignoring the error with `let _ =` can lead to critical bugs.
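A minimal sketch of the fix the diagnostic ought to suggest (illustrative only — one character, instead of silently discarding the `Result`):

```rust
fn fallible() -> Result<i32, ()> {
    Ok(42)
}

pub fn caller() -> Result<(), ()> {
    fallible()?; // was `fallible();`, which dropped a possible `Err`
    Ok(())
}

fn main() {
    assert_eq!(caller(), Ok(()));
}
```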
### Other cases
_No response_
### Rust Version
```Shell
1.81.0
```
### Anything else?
_No response_
| A-diagnostics,T-compiler | low | Critical |
2,521,442,269 | pytorch | (L)-BFGS with Hager-Zhang line-search | ### 🚀 The feature, motivation and pitch
I am working on PINNs (and variations of them) and it is often the case that BFGS performs better than its limited memory version. However, there is no official implementation of BFGS in Pytorch, only the limited memory version is available. Moreover, the Hager-Zhang line-search is often more effective than `strong-wolfe` when you get close to machine precision, as one would hope when solving PDEs.
I have a self-implemented version of BFGS and the Hager-Zhang line search. However, I am not good at writing well-factored code, tests, or documentation. Is anyone willing to help add this feature, starting from my code?
This is partially related to: #55279, #80553
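For readers unfamiliar with the distinction: full-matrix BFGS differs from L-BFGS only in storing the dense inverse-Hessian approximation explicitly. A minimal, dependency-free sketch (my own illustration — not the self-implemented code mentioned above, and using a plain Armijo backtracking line search rather than Hager-Zhang):

```python
import numpy as np

def bfgs(f, grad, x0, iters=500, tol=1e-8):
    """Full-matrix BFGS with a simple Armijo backtracking line search."""
    n = x0.size
    H = np.eye(n)                      # inverse Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # search direction (descent: H is SPD)
        t = 1.0
        for _ in range(60):            # Armijo backtracking
            if f(x + t * p) <= f(x) + 1e-4 * t * (g @ p):
                break
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = y @ s
        if sy > 1e-12:                 # curvature guard keeps H positive definite
            I = np.eye(n)
            rho = 1.0 / sy
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock, the classic non-quadratic test problem; minimum at (1, 1).
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
    200 * (x[1] - x[0] ** 2),
])
x_star = bfgs(f, grad, np.array([-1.2, 1.0]))
```

Swapping the Armijo loop for a Hager-Zhang line search is exactly the part that helps near machine precision, which is why it matters for PINNs.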
### Alternatives
I considered using [pytorch-minimize](https://github.com/rfeinman/pytorch-minimize), but BFGS is not efficiently implemented and Hager-Zhang line-search is not available.
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | module: optimizer,triaged | low | Minor |
2,521,472,037 | TypeScript | Fail to infer Indexed access to the intersection type when it includes an union derived from a generic input | ### 🔎 Search Terms
"indexed access on union", "generic index"
### 🕗 Version & Regression Information
5.6.2, and it seems never worked for every ts versions.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.2#code/C4TwDgpgBAKhDOwA8MoQB7AgOwCbykQCcBLbAcwD4oBeKAbwG0BpKM2AXQC4GBrCED2YBfYYxgcoAMgbAAhkXIRgPYmXLCAUJoDGAe2yIoAMz17aUFGkw58hYKQqUAFAvI84iFJQCUtavSaUMFQ+obAUABucgA2AK4QHgjIMNR0bgDcQSFhRtHxEABMSV6pjADk-CDlkumKFVU1WSGhBnmxCQDMJSmUFfKKyjUWbv1uQxwZUAD001AAokREekSawkA
### 💻 Code
```ts
type Test<T extends string> = {[K in T]: {key: K}}[T] & {target: string}
const foo = <T extends string>(arg: Test<T>) => {
const value: Test<T> = arg;
const value2: Test<T>['key'] = arg['key'];
const value3: Test<T>['target'] = arg['target']; // Error
}
```
### 🙁 Actual behavior
The declarations of `value` and `value2` work well, but TS emits an error for `value3` saying `Type 'string' is not assignable to type '{ [K in T]: { key: K; }; }[T]["target"] & string'`.
### 🙂 Expected behavior
`Test<T>['target']` should equal `string`, with the type of `'{ [K in T]: { key: K; }; }[T]["target"]'` treated as `any` before the intersection, as happens when I use a non-generic type in the same place. It seems even weirder because the type of `Test<T>['key']` is inferred correctly.
### Additional information about the issue
With an additional experiment, I found that this always happens when I generate a union type from a generic input. For instance,
```ts
type Test<T> = (T extends string ? {key: number} : {key: string}) & {target: string}
const foo = <T extends string>(arg: Test<T>) => {
const value: Test<T> = arg;
const value2: Test<T>['key'] = arg['key'];
const value3: Test<T>['target'] = arg['target']; // Error
}
```
It also does not work in the case above. | Help Wanted,Possible Improvement | low | Critical |
2,521,539,244 | pytorch | torch ones, zeros and empty operators with out= kwarg: SymIntArrayRef expected to contain only concrete integers | ### 🐛 Describe the bug
The torch `ones`, `zeros`, and `empty` operators, when used with the `out` argument in torch.compile mode, report the error: SymIntArrayRef expected to contain only concrete integers
### Error logs
```
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] failed while attempting to run meta for aten.ones.out
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] Traceback (most recent call last):
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 2013, in _dispatch_impl
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] r = func(*args, **kwargs)
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 716, in __call__
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] return self._op(*args, **kwargs)
E0912 06:51:36.997000 1244 torch/_subclasses/fake_tensor.py:2017] [0/1] RuntimeError: aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:3324: SymIntArrayRef expected to contain only concrete integers
tensor([[1., 1., 1.],
[1., 1., 1.]])
---------------------------------------------------------------------------
TorchRuntimeError Traceback (most recent call last)
[<ipython-input-3-8a1964a9495f>](https://localhost:8080/#) in <cell line: 13>()
11
12 out2 = torch.empty(3, 4)
---> 13 opt_model((3, 4), out2)
14 print(out2)
31 frames
[/usr/local/lib/python3.10/dist-packages/torch/_ops.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)
714 # that are named "self". This way, all the aten ops can be called by kwargs.
715 def __call__(self, /, *args, **kwargs):
--> 716 return self._op(*args, **kwargs)
717
718 # Use positional-only argument to avoid naming collision with aten ops arguments
TorchRuntimeError: Failed running call_function <built-in method ones of type object at 0x7914c7f93860>(*((s2, s3),), **{'out': FakeTensor(..., size=(s0, s1))}):
aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:3324: SymIntArrayRef expected to contain only concrete integers
from user code:
File "<ipython-input-3-8a1964a9495f>", line 4, in ones_fn
return torch.ones(size, out=out)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Minified repro
torch.ones
```
import torch
def ones_fn(size, out):
return torch.ones(size, out=out)
opt_model = torch.compile(ones_fn)
out1 = torch.empty(2, 3)
opt_model((2, 3), out1)
print(out1)
out2 = torch.empty(3, 4)
opt_model((3, 4), out2)
print(out2)
```
torch.zeros
```
import torch
def zeros_fn(size, out):
return torch.zeros(size, out=out)
opt_model = torch.compile(zeros_fn)
out1 = torch.empty(2, 3)
opt_model((2, 3), out1)
print(out1)
out2 = torch.empty(3, 4)
opt_model((3, 4), out2)
print(out2)
```
torch.empty
```
import torch
def empty_fn(size, out):
return torch.empty(size, out=out)
opt_model = torch.compile(empty_fn)
out1 = torch.empty(2, 3)
opt_model((2, 3), out1)
print(out1)
out2 = torch.empty(3, 4)
opt_model((3, 4), out2)
print(out2)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0.dev20240911+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.5.0.dev20240911+cpu
[pip3] torchaudio==2.4.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0+cu121
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,521,585,751 | PowerToys | Copy on selection of text | ### Description of the new feature / enhancement
An option to enable copy on selection as in X11/Linux systems. I.e. put text that is selected automatically on the clipboard.
I constantly find myself selecting text and pasting something different on Windows.
### Scenario when this would be used?
Any text handling application like Notepad, VSCode, in browsers, Word+++
### Supporting information
There are some posts about this feature around, but they seem to be pretty misunderstood by none Linux users. E.g.:
https://answers.microsoft.com/en-us/windows/forum/all/automatically-copy-selected-text/4d2b0fc3-207b-404c-9468-2156266cb85a | Idea-New PowerToy,Needs-Triage | low | Minor |
2,521,595,767 | kubernetes | [Flaking Test] [EventedPLEG] Containers Lifecycle should continue running liveness probes for restartable init containers and restart them while in preStop | ### Which jobs are flaking?
ci-crio-cgroupv1-evented-pleg
- https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-crio-cgroupv1-evented-pleg/1834117609461649408
### Which tests are flaking?
E2eNode Suite.[It] [sig-node] [NodeConformance] Containers Lifecycle when a pod is terminating because its liveness probe fails should continue running liveness probes for restartable init containers and restart them while in preStop [NodeConformance]
### Since when has it been flaking?
8/24
https://storage.googleapis.com/k8s-triage/index.html?date=2024-09-12&job=ci-crio-cgroupv1-evented-pleg&test=%20Containers%20Lifecycle%20when%20a%20pod%20is%20terminating%20because%20its%20liveness%20probe%20fails%20should%20continue%20running%20liveness%20probes%20for%20restartable%20init%20containers%20and%20restart%20them%20while%20in%20preStop%20
### Testgrid link
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-evented-pleg
### Reason for failure (if possible)
```
{ failed [FAILED] Expected an error to have occurred. Got:
<nil>: nil
In [It] at: k8s.io/kubernetes/test/e2e_node/container_lifecycle_test.go:903 @ 08/23/24 18:16:25.131
}
```
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig node | priority/backlog,sig/node,kind/flake,triage/accepted | low | Critical |
2,521,599,556 | opencv | Java bindings dlls files in jars not working for runnable jars in eclipse | ### System Information
opencv 4.10.0
win11 java openjdk 22
eclipse ide for java 2024-06
### Detailed description
There are no jar files that encapsulate the loading of the native DLL files, so jar-in-jar runnable jars built in Eclipse do not work.
Even if you put the DLL files inside the runnable jar, they are not loaded; if a DLL file sits next to the jar, it will load.
The goal is to have an independent fat runnable jar file that includes all jars and native DLLs.
Refer to LWJGL's example of encapsulating native DLLs and similar files inside jar files.
### Steps to reproduce
export a working running java opencv project in eclipse as runnable jar (modules or classpath).
try to make the runnable jar working by copying the native dlls into the jar root.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | feature,category: build/install,category: java bindings | low | Minor |
2,521,604,946 | godot | TileMapLayers are emitting `changed()` signal after get_tree().quit() is called. | ### Tested versions
v4.3.stable.arch_linux
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Tue, 10 Sep 2024 14:37:32 +0000 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3090 (nvidia; 560.35.03) - AMD Ryzen 7 5800X 8-Core Processor (16 Threads)
### Issue description
When `get_tree().quit()` is called, `TileMapLayer` nodes emit their `changed()` signal after their `TileSet` has been removed from the scene tree. This means that code connected to the changed signal that tries to access properties on the `TileSet` will error for trying to access properties on a null value.
This behaviour is not present on TileMaps in previous versions of Godot (confirmed in 4.2).
### Steps to reproduce
1. Create a `TileMapLayer` with a `TileSet`.
2. Connect a script to the `TileMapLayer` `changed()` signal that accesses `tile_set.tile_size`
3. Call `get_tree().quit()` or close game via the OS close button on the window.
4. Game hangs due to trying to access `tile_set` on a null object.
### Minimal reproduction project (MRP)
See above steps.
I have shared a minimal reproduction project for triggering this issue while using PhantomCamera here: https://github.com/Jack-023/phantom-camera-error-repro
This was set up before determining that this is a Godot issue so it is not technically minimal in the context of Godot but it does demonstrate the issue. | bug,topic:2d | low | Critical |
2,521,607,876 | tensorflow | An `aborted issue` could be raised in TensorFlow when I used API `math_ops.cast` and `array_ops.split` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
An `aborted issue` could be raised in TensorFlow when I used API `math_ops.cast` and `array_ops.split` .
### Standalone code to reproduce the issue
```shell
import numpy as np
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
with sess.as_default():
a = math_ops.cast([2], dtypes.int32)
b = math_ops.cast([1], dtypes.int32)
value = np.random.rand(11, 11)
array_ops.split(value, [a, b])
```
### Relevant log output
```shell
2024-09-12 15:49:03.711972: F tensorflow/core/framework/tensor_shape.cc:45] Check failed: NDIMS == dims() (1 vs. 2)Asking for tensor of 1 dimensions from a tensor of 2 dimensions
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,521,618,852 | tensorflow | Got one aborted issue when using `data_flow_ops.MapStagingArea` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
An `aborted issue` could be raised in TensorFlow when I used API `data_flow_ops.MapStagingArea`. The code is as follows:
### Standalone code to reproduce the issue
```shell
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
import tensorflow as tf
with ops.Graph().as_default() as g:
with ops.device('/cpu:0'):
x = array_ops.placeholder(dtypes.float32)
pi = array_ops.placeholder(dtypes.int64)
gi = array_ops.placeholder(dtypes.int64)
v = 2.0 * (array_ops.zeros([]) + x)
with ops.device(test.gpu_device_name()):
stager = data_flow_ops.MapStagingArea([dtypes.float32])
stage = stager.put(pi, [v], [0])
k, y = stager.get([])
y = math_ops.reduce_max(math_ops.matmul(y, y))
g.finalize()
with tf.compat.v1.Session(graph=g) as sess:
sess.run(stage, feed_dict={x: -1, pi: 0})
for i in range(10):
_, yval = sess.run([stage, y], feed_dict={x: i, pi: i + 1, gi: i})
```
### Relevant log output
```shell
2024-09-12 15:57:44.445189: F tensorflow/core/framework/tensor.cc:852] Check failed: 1 == NumElements() (1 vs. 0)Must have a one element tensor
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,521,630,173 | tensorflow | Using `gen_random_index_shuffle_ops.random_index_shuffle` with `rounds=-2` can cause a crash | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
A `Segmentation fault` could be raised in TensorFlow when I used API `gen_random_index_shuffle_ops.random_index_shuffle` with `rounds=-2`. The code is as follows:
### Standalone code to reproduce the issue
```shell
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import gen_random_index_shuffle_ops
from tensorflow.python.ops import math_ops
seed = (74, 117)
seed_dtype = dtypes.int32
max_index = 129
index_dtype = dtypes.int32
rounds = 4
seen = (max_index + 1) * [False]
seed = math_ops.cast([seed[0], seed[1], 42], seed_dtype)
for index in range(max_index + 1):
new_index = gen_random_index_shuffle_ops.random_index_shuffle(math_ops.cast(index, index_dtype), seed, max_index=math_ops.cast(max_index, index_dtype), rounds=rounds)
# rounds = -2 causes the segmentfault
new_index = gen_random_index_shuffle_ops.random_index_shuffle(math_ops.cast(index, index_dtype), seed, max_index=math_ops.cast(max_index, index_dtype), rounds=-2)
```
### Relevant log output
```shell
> Segmentation fault (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,521,634,452 | TypeScript | Generators can return undefined values | ## Summary
Should the return type of `yield* a` be `TReturn | undefined` when in strict mode as opposed to `TReturn` (where `TReturn` is the value of the `TReturn` type argument to `a`'s `Iterable` type instantiation)?
Or should the type of `IteratorReturnResult.value` be changed to `TReturn | undefined` when in strict mode (and potentially the type of `Generator.return(value: TReturn)` be relaxed to `Generator.return(value?: TReturn)`)?
Have you considered these scenarios? (A github microsoft site search for "generator return undefined" yields no results).
## Background
Generators will return `undefined` values when:
1) `next()` is called after the generator has already returned its return value:
```TypeScript
function* gen(): Generator<number, string> {
yield 42;
return "done";
}
const a = gen();
console.log(a.next()); // { value: 42, done: false }
console.log(a.next()); // { value: 'done', done: true }
console.log(a.next()); // { value: undefined, done: true }
```
2.1) the generator is interrupted via a `break` statement of `for-of` loop on the generator
```TypeScript
const nestedIterableIterator: IterableIterator<string, string, unknown> = {
[Symbol.iterator]() { return this; },
next() {
return { value: "42", done: false }
},
return(value: string) {
console.log("return called with", value);
return { value, done: true }
}
}
function* generator() {
yield* nestedIterableIterator;
return "never";
}
const it = generator();
it.next();
it.return("passthrough"); // return called with passthrough
for (const x of generator()) {
break; // return called with undefined
}
```
2.2) the generator is manually interrupted by calling its `return` method without arguments:
```TypeScript
function* gen(): Generator<number, string> {
yield 42;
yield 43;
return "done";
}
const a = gen();
console.log(a.next()); // { value: 42, done: false }
console.log(a.return(undefined as unknown as string)); // { value: undefined, done: true }
console.log(a.next()); // { value: undefined, done: true }
```
3) The generator is attempted to be iterated over using `yield*` after it has already returned:
```TypeScript
function* gen(): Generator<number, string> {
return "done";
}
function* consumer() {
const a = gen();
console.log(yield* a); // done
console.log(yield* a); // undefined
}
consumer().next();
```
## Effect
The type signature of `Generator.return` guards against 2.2, and there is no way to get the return value from a `for-of` loop.
That leaves scenarios 1 and 3, of which 1 is likely uncommon; 3 is probably the only viable scenario, and only in the case of user error...
### ⚙ Compilation target
ES2015
### ⚙ Library
lib.es2015.generator.d.ts
### Missing / Incorrect Definition
`Generator.return` and `IteratorReturnResult.value`
### Sample Code
See above.
### Documentation Link
No relevant documentation. | Needs Investigation | low | Critical |
2,521,646,877 | pytorch | A segmentation fault will be raised when using the `torch._C._jit_to_backend` | ### 🐛 Describe the bug
A segmentation fault will be raised when using the `torch._C._jit_to_backend`. The code is as follows:
```python
import io
import os
import sys
import torch._C
from torch.jit.mobile import _load_for_lite_interpreter
from torch.testing._internal.common_utils import find_library_location
pytorch_test_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
sys.path.append(pytorch_test_dir)
class BasicModuleAdd(torch.nn.Module):
def forward(self, x, h):
return x + h
lib_file_path = find_library_location('libbackend_with_compiler.so')
torch.ops.load_library(str(lib_file_path))
scripted_module = torch.jit.script(BasicModuleAdd())
compile_spec = {'forward': {'input_shapes': '((1, 1, 320, 240), (1, 3))', 'some_other_option': 'True'}}
lowered_module = torch._C._jit_to_backend('backend_with_compiler_demo', scripted_module, compile_spec)
buffer = io.BytesIO(lowered_module._save_to_buffer_for_lite_interpreter())
buffer.seek(0)
mobile_module = _load_for_lite_interpreter(buffer)
input0 = torch.ones(0, dtype=torch.float)
input1 = (input0, input0)
backend_output = lowered_module(*input1)
mobile_output = mobile_module(*input1)
```
Error message:
> Segmentation fault (core dumped)
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu` .
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit,module: crash | low | Critical |
2,521,660,963 | tensorflow | Using `fft_ops` would cause an `aborted issue` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
An `aborted issue` could be raised in TensorFlow when using `fft_ops`. The code is as follows:
### Standalone code to reproduce the issue
```shell
import numpy as np
from tensorflow.python.ops.signal import fft_ops
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
def _tf_ifft(x, rank, fft_length=None, feed_dict=None):
with tf.compat.v1.Session() as sess:
return sess.run(_tf_ifft_for_rank(rank)(x, fft_length), feed_dict=feed_dict)
def _tf_ifft_for_rank(rank):
if rank == 1:
return fft_ops.irfft
elif rank == 2:
return fft_ops.irfft2d
elif rank == 3:
return fft_ops.irfft3d
else:
raise ValueError('invalid rank')
rank = 1
extra_dims = 0
np_rtype = np.float32
np_ctype = np.complex64
dims = rank + extra_dims
x = np.zeros((1,) * dims).astype(np_ctype)
tmp_var22 = _tf_ifft(x, rank).shape
```
### Relevant log output
```shell
DUCC FFT c2r failed:
bazel-out/k8-opt/bin/external/ducc/_virtual_includes/fft/ducc/src/ducc0/fft/fft1d_impl.h: 2948 (static Trpass<Tfs> ducc0::detail_fft::rfftpass<float>::make_pass(size_t, size_t, size_t, const Troots<Tfs> &, bool) [Tfs = float]):
Assertion failure
no zero-sized FFTs
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,521,702,936 | flutter | [Web][Edge][Production] Spacebar key sometimes does not work | ### Steps to reproduce
Flutter web, app running on MS Edge, production (not durign dev mode).
1. If you have one or more TextFields inside a SingleChildScrollView (mostly observed when the content fits on screen, so the page doesn't actually need to scroll), the spacebar might not work when clicked on.
2. If you hit F5 to "soft" refresh the page, the spacebar works again.
3. MS Edge (tested and observed on v128.0.2739.67)
4. Chrome browsers works as expected here.
### Expected results
I want the spacebar to produce a space in the TextField.
### Actual results
Sometimes nothing happens when I hit the spacebar.
### Code sample
<details open><summary>Code sample</summary>
```dart
return Column(
crossAxisAlignment: CrossAxisAlignment.stretch,
children: [
Expanded(
child: SingleChildScrollView(
controller: ScrollController(),
child: Card(
child: Padding(
padding: const EdgeInsets.all(4.0),
child: Column(
children: [
Padding(
padding: const EdgeInsets.all(4.0),
child: Center(child: Text(AppLocalizations.of(context).addRequestsCardCreateRequestDraft,style: const TextStyle(fontSize: 16),),),
),
[...]
Row(
children: [
Expanded(
child: Padding(
padding: const EdgeInsets.all(4.0),
child: TextField(
maxLength: 40,
controller: infTEC,
decoration: InputDecoration(
counterText: "",
floatingLabelBehavior: FloatingLabelBehavior.always,
border: const OutlineInputBorder(),
label: Row(
mainAxisSize: MainAxisSize.min,
children: [
Text(AppLocalizations.of(context).addRequestsCardShortAssignmentDescription),
SizedBox(width: 4,),
Text("*", style: TextStyle(color: Colors.red, fontSize: 18, fontWeight: FontWeight.bold),)
],
),
),
),
),
),
Expanded(
child: Padding(
padding: const EdgeInsets.all(4.0),
child: Tooltip(
message: AppLocalizations.of(context).addRequestsCardDraftCommentWillOnlyBeVisibleInLocalStoredDraftAndInTheRequestConfirmingEmail,
child: TextField(
maxLines: null,
controller: commentTEC,
decoration: InputDecoration(
floatingLabelBehavior: FloatingLabelBehavior.always,
border: const OutlineInputBorder(),
label: Text(AppLocalizations.of(context).addRequestsCardDraftComment),
),
),
),
),
),
],
),
[...]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel beta, 3.25.0-0.1.pre, on Microsoft Windows [Version 10.0.22631.4169], locale nb-NO)
• Flutter version 3.25.0-0.1.pre on channel beta at C:\Flutter\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision d3733fcb0e (3 weeks ago), 2024-08-21 10:27:04 -0500
• Engine revision 062b3a72fc
• Dart version 3.6.0 (build 3.6.0-149.3.beta)
• DevTools version 2.38.0
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at c:/android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: c:\program files\java\jdk-17\bin\java
• Java version Java(TM) SE Runtime Environment (build 17.0.11+7-LTS-207)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.1)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.34928.147
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2023.3)
• Android Studio at C:\Program Files\Android Studio\Android Studio 2023.3.1.20
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11572160)
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android Studio\Android Studio 2024.1.2.12
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android Studio\Android Studio 2024.1.2.8
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android Studio\Android Studio 2024.2.2.1
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[√] Connected device (4 available)
• SM T515 (mobile) • R52NB09L7ZP • android-arm • Android 11 (API 30)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4169]
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.120
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.67
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| a: text input,c: regression,platform-web,e: web_canvaskit,a: production,browser: edge,P2,team-text-input,triaged-text-input | low | Major |
2,521,709,158 | pytorch | cudaFree will be called when the reserved memory > 70G for A100 80G | ### 🐛 Describe the bug
Hello. When I profile Llama-2 (70B) model training, the sequence length varies at each step. I found that when the sequence length > 200 and the reserved memory exceeds 70 GB, cudaFree is called, followed by cudaMalloc.
I tried `export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, but it didn't help.
So I want to ask: how can I reduce these cudaFree and cudaMalloc calls?
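One mitigation that often helps with varying sequence lengths is padding each step up to a small set of bucket sizes, so the caching allocator sees only a few fixed tensor shapes it can reuse instead of a new size (and a possible cudaFree/cudaMalloc pair) every step. A plain-Python sketch of the bucketing logic (the bucket boundaries are illustrative, not tuned for Llama-2 70B):

```python
def bucket_length(seq_len, buckets=(64, 128, 256, 512)):
    """Round a sequence length up to the next bucket boundary so that
    allocations come in a handful of fixed sizes the caching allocator
    can reuse, instead of a fresh size every training step."""
    for b in buckets:
        if seq_len <= b:
            return b
    return buckets[-1]  # clamp; real code would raise or add a larger bucket

print(bucket_length(200))  # -> 256
print(bucket_length(57))   # -> 64
```

Sequences would then be padded (and the padding masked) up to `bucket_length(seq_len)` before the forward pass.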
### Versions
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.4 (Ootpa) (x86_64)
GCC version: (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.28
Python version: 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)] (64-bit runtime)
Python platform: Linux-4.18.0-305.el8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
CPU MHz: 2965.950
BogoMIPS: 5589.98
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7,64-71
NUMA node1 CPU(s): 8-15,72-79
NUMA node2 CPU(s): 16-23,80-87
NUMA node3 CPU(s): 24-31,88-95
NUMA node4 CPU(s): 32-39,96-103
NUMA node5 CPU(s): 40-47,104-111
NUMA node6 CPU(s): 48-55,112-119
NUMA node7 CPU(s): 56-63,120-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] blas 1.0 mkl
[conda] mkl 2019.0 118
[conda] mkl-service 1.1.2 py37h90e4bf4_5
[conda] mkl_fft 1.0.4 py37h4414c95_1
[conda] mkl_random 1.0.1 py37h4414c95_1
[conda] numpy 1.15.1 py37h1d66e8a_0
[conda] numpy-base 1.15.1 py37h81de0dd_0
[conda] numpydoc 0.8.0 py37_0
cc @ptrblck @msaroufim | module: cuda,module: memory usage,triaged,onnx-triaged | low | Critical |
2,521,723,080 | tensorflow | `tf_cond.cond` combined with `tf.function` can cause an abort (core dump) | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04.3 LTS (x86_64)
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
TensorFlow can abort (core dump) when using `tf_cond.cond` together with `tf.function`. The code to reproduce is as follows:
### Standalone code to reproduce the issue
```shell
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import ops
from tensorflow.python.ops import cond as tf_cond
from tensorflow.python.ops import control_flow_assert
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.platform import test
import tensorflow as tf
sess = tf.compat.v1.Session()
@tf.function
def func1():
with sess.as_default():
with ops.device(test.gpu_device_name()):
pred = constant_op.constant([True, False])
def fn1():
return control_flow_ops.no_op()
def fn2():
with ops.device('/cpu:0'):
return control_flow_assert.Assert(False, ['Wrong!'])
r = tf_cond.cond(pred, fn1, fn2)
func1()
```
### Relevant log output
```shell
> 2024-09-12 16:43:14.634438: F tensorflow/core/framework/tensor.cc:852] Check failed: 1 == NumElements() (1 vs. 2)Must have a one element tensor
> Aborted (core dumped)
```
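The CHECK message points at the predicate: `pred` above is the two-element tensor `[True, False]`, while the cond kernel expects a one-element (scalar) boolean. A plain-Python sketch of the validation one would expect instead of a C++ abort (illustrative only, not TF internals):

```python
def checked_cond(pred, fn_true, fn_false):
    """Dispatch like tf.cond, but validate that the predicate holds exactly
    one element instead of letting a C++ CHECK abort the whole process."""
    elements = list(pred) if hasattr(pred, "__iter__") else [pred]
    if len(elements) != 1:
        raise ValueError(
            f"pred must be a one-element (scalar) tensor, got {len(elements)} elements"
        )
    return fn_true() if elements[0] else fn_false()

print(checked_cond([True], lambda: "fn1", lambda: "fn2"))  # -> fn1
try:
    checked_cond([True, False], lambda: "fn1", lambda: "fn2")
except ValueError as e:
    print(e)
```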
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,521,743,115 | TypeScript | Inconsistent behaviour of `-?` | ### 🔎 Search Terms
optional mapped type
### 🕗 Version & Regression Information
- This changed between versions 5.4.5 and 5.5.4
### ⏯ Playground Link
https://www.typescriptlang.org/play/?exactOptionalPropertyTypes=true#code/C4TwDgpgBA8mwEsD2A7AhgGwCpIKouRQHUFgALfAEwgDMEUJKAeLAPigF4oBvAKCgFQA2gGko9KAGsIIJDShYAugC4FUCAA9gEFJQDOUAEoQAxkgBOzEQBoFoxa36DnAfjsjFT5wNVZ7UAB8oAFddWnpGXgBfXl5QSCgASRRtcwBbBEoENG1OWHhCTBx8QhJyKnCGZm4oNBdVPWBzegBzKCj2AHpOnlr6qEbmlDag0Oo6Kva48GgAGVIIc0w8mrqGptbAkLCJxnaoboG0NOh46DQDZNSMrJyIWLOjCD0ARjzjAEdghHNGJivFjdstouj1Vushm0Yo9jHoAEzvCBfH5-eapTCg3poCGbUY7CKUdpAA
### 💻 Code
```ts
type OptionalToUnionWithUndefined<T> = {
[K in keyof T]: T extends Record<K, T[K]>
? T[K]
: T[K] | undefined
}
type Intermidiate = OptionalToUnionWithUndefined<{ a?: string }> // { a?: string | undefined }
type Literal = { a?: string | undefined } // same type as Intermidiate
type Res1 = Required<Intermidiate> // { a: string }
type Res2 = Required<Literal> // { a: string | undefined }
```
### 🙁 Actual behavior
`Res1` and `Res2` are different types
### 🙂 Expected behavior
they should be the same because the inputs are
### Additional information about the issue
This issue may share the cause with #59902 | Bug,Help Wanted | low | Minor |
2,521,796,775 | vscode | Links to files starting with ./ do not work correctly | Type: <b>Bug</b>
1) Connect to some remote machine.
2) Create a folder, for example "tmp";
3) Create a file in this folder, for example "tmp/text.txt" (you can add some data if you want, it won't change the behavior);
4) Open this folder in vscode;
5) Type `echo "./text.txt:1"` in terminal;
When clicking (with alt) on the `./text.txt:1` part of the command, the file will not be found. When clicking on the echo output, the file is sometimes found, sometimes not.
"text.txt:1" or absolute path works fine.
Note: line number doesn't matter (for example, if I have a file with 100 lines and the link points to line 22, it still won't work).
If you want, you can use this script to automate the file creation:
```bash
mkdir tmp
touch tmp/text.txt
echo "line 1
line 2" >> tmp/text.txt
```
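Until the `./`-prefixed form works, printing absolute `path:line` links is a usable workaround, since absolute links are detected reliably (a sketch):

```shell
# Workaround sketch: absolute "path:line" links resolve correctly in the
# terminal, so emit the full path instead of the "./"-prefixed form.
mkdir -p tmp && touch tmp/text.txt
printf '%s:1\n' "$PWD/tmp/text.txt"
```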
The reason this is important: some extensions use the syntax like in my example (TestMate for example) and it stopped working/never worked.
VS Code version: Code 1.93.0 (4849ca9bdf9666755eb463db297b69e5385090e3, 2024-09-04T13:02:38.431Z)
OS version: Linux x64 6.10.6-amd64
Modes:
Remote OS version: Linux x64 5.15.0-70-generic
Remote OS version: Linux x64 5.15.0-70-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-12450H (12 x 1890)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|2, 1, 1|
|Memory (System)|15.34GB (1.98GB free)|
|Process Argv|--crash-reporter-id d6824b2b-3e5a-4bc1-84f4-7b1e54a44ccc|
|Screen Reader|no|
|VM|75%|
|DESKTOP_SESSION|gnome|
|XDG_CURRENT_DESKTOP|GNOME|
|XDG_SESSION_DESKTOP|gnome|
|XDG_SESSION_TYPE|wayland|
|Item|Value|
|---|---|
|Remote|SSH: astra-1.7.4-local|
|OS|Linux x64 5.15.0-70-generic|
|CPUs|12th Gen Intel(R) Core(TM) i5-12450H (6 x 0)|
|Memory (System)|7.76GB (5.82GB free)|
|VM|100%|
|Item|Value|
|---|---|
|Remote|SSH: astra-1.7.4-local|
|OS|Linux x64 5.15.0-70-generic|
|CPUs|12th Gen Intel(R) Core(TM) i5-12450H (6 x 0)|
|Memory (System)|7.76GB (5.82GB free)|
|VM|100%|
</details><details><summary>Extensions (79)</summary>
Extension|Author (truncated)|Version
---|---|---
better-comments|aar|3.0.2
tsl-problem-matcher|amo|0.6.2
Doxygen|bbe|1.0.0
vscode-doxygen-runner|bet|1.8.0
ruff|cha|2024.48.0
cmake-format|che|0.6.11
path-intellisense|chr|2.9.0
codeium|Cod|1.14.12
doxdocgen|csc|1.4.0
vscode-markdownlint|Dav|0.56.0
python-environment-manager|don|1.2.4
python-extension-pack|don|1.7.0
gitlens|eam|15.4.0
vscode-hide-comments|eli|1.9.0
restore-terminals|Eth|1.1.8
todo-tree|Gru|0.0.226
vscode-test-explorer|hbe|2.21.1
vscode-drawio|hed|1.6.6
gtest-snippets|idm|1.0.1
latex-workshop|Jam|10.3.2
vsc-python-indent|Kev|1.18.0
kanban|lba|1.8.1
vscode-clangd|llv|0.1.29
vscode-catch2-test-adapter|mat|4.12.0
json|Mee|0.1.2
git-graph|mhu|1.30.0
vscode-docker|ms-|1.29.2
vscode-dotnet-runtime|ms-|2.1.5
autopep8|ms-|2024.0.0
black-formatter|ms-|2024.2.0
debugpy|ms-|2024.10.0
flake8|ms-|2023.10.0
isort|ms-|2023.10.1
mypy-type-checker|ms-|2024.0.0
pylint|ms-|2023.10.1
python|ms-|2024.14.0
vscode-pylance|ms-|2024.9.1
jupyter|ms-|2024.8.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-powertoys|ms-|0.1.1
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.384.0
remote-ssh|ms-|0.114.2
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.3
vscode-remote-extensionpack|ms-|0.25.0
cmake-tools|ms-|1.19.51
cpptools|ms-|1.22.2
cpptools-extension-pack|ms-|1.3.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
test-adapter-converter|ms-|0.1.9
autodocstring|njp|0.6.1
pytest-runner|pam|0.0.7
vscode-xml|red|0.27.1
vscode-yaml|red|1.15.0
command-variable|rio|1.65.4
markdown-preview-enhanced|shd|0.8.14
swdc-vscode|sof|2.8.1
remote-x11|spa|1.5.0
remote-x11-ssh|spa|1.5.0
BuildOutputColorizer|Ste|0.1.6
code-spell-checker|str|3.0.1
code-spell-checker-british-english|str|1.4.11
code-spell-checker-russian|str|2.2.2
code-spell-checker-scientific-terms|str|0.2.2
cmantic|tde|0.9.0
cmake|twx|0.0.17
vscode-lldb|vad|1.10.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
jinja|who|0.0.8
autocomplete-english-word|wus|0.1.7
clang-format|xav|1.9.0
ds-toolkit-vscode|yan|0.1.2
markdown-all-in-one|yzh|3.6.2
cmake-highlight|zch|0.0.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl1:31134654
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
defaultse:31133495
fje88620:31121564
```
</details>
<!-- generated by issue reporter --> | bug,terminal-links | low | Critical |
2,521,814,820 | ant-design | Menu item divider's `borderTopWidth` and `marginBlock` CSS properties share a single variable and cannot be set independently | ### What problem does this feature solve?
The Menu item divider's CSS can be configured: its `marginBlock` property is 1px, but it shares the `lineWidth` token with the border, so setting `lineWidth: 0` removes both the margin and the border.

### What does the proposed API look like?
[menu/style/index.ts#L666-L667](https://github.com/ant-design/ant-design/blob/22fb6f67fa10e3cdc0e382c5e73869846e67de02/components/menu/style/index.ts#L666-L667)
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Minor |
2,521,838,805 | vscode | Decoration Ranges are not always adjusted immediately | The strike through was applied to the previous decoration:

Here the buggy frame:

I replaced the line
```ts
import { ColumnRange, applyObservableDecorations } from '../../inlineCompletions/browser/utils.js';
```
with
```ts
import { ColumnRange } from '../../inlineCompletions/browser/utils.js';
```
@mattbierner @alexdima do you have ideas? | bug,papercut :drop_of_blood: | low | Critical |
2,521,854,208 | bitcoin | cmake: adjust Find modules to try `find_package` & fallback to `pkg_check_modules` | See the discussion in: https://github.com/bitcoin/bitcoin/pull/30803#discussion_r1743507523.
> I assume one thing we could do, is try find_package(ZeroMQ) first, and fallback to pkg_check_modules(libzmq), and just maintain this (pretty much) forever. That might be better than somewhat vauge comments about "modern" distros, because, (some) modern distros do already support this, and it's actually the case that all versions of all distros Bitcoin Core supports need to do this, before we could do a complete switch.
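A sketch of that fallback for the ZeroMQ case (module names, target names, and the version pin below are illustrative assumptions, not the actual build code):

```cmake
# Sketch only: try the package's own CMake config first, then fall back
# to pkg-config on systems that don't ship a ZeroMQ CMake package.
find_package(ZeroMQ QUIET CONFIG)
if(NOT ZeroMQ_FOUND)
  find_package(PkgConfig REQUIRED)
  pkg_check_modules(libzmq REQUIRED IMPORTED_TARGET libzmq>=4.3.1)
  # Consumers would then link against PkgConfig::libzmq.
endif()
```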
| Build system | low | Minor |
2,521,858,300 | PowerToys | cannot uninstall after installed DetectedPowerToysUserVersion=0.74.1.0 and DetectedPowerToysVersion=0.82.1.0 | ### Microsoft PowerToys version
0.74.1.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Installer
### Steps to reproduce
1. install DetectedPowerToysUserVersion=0.74.1.0
2. install DetectedPowerToysVersion=0.82.1.0
3. Install a newer version (0.84.1), or try to uninstall 0.74 or 0.82 with their installer executables.
### ✔️ Expected Behavior
upgrade successfully or uninstall successfully
### ❌ Actual Behavior
setup failed
[powertoys-bootstrapper-msi-0.82.1_20240912171930.log](https://github.com/user-attachments/files/16976852/powertoys-bootstrapper-msi-0.82.1_20240912171930.log)

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,521,861,015 | material-ui | [docs] MUI-X pages give 404 in v5 docs | ### Related page
https://v5.mui.com/x/react-date-pickers/
### Kind of issue
Other
### Issue description
If you go to https://v5.mui.com/material-ui/getting-started/ and click one of the MUI-X links on the sidebar, it shows a 404 error.

User report on discord: https://discord.com/channels/1131323012554174485/1131329150070816868/1283709072901804032
### Context
_No response_
**Search keywords**: none | scope: docs-infra,support: docs-feedback | low | Critical |
2,521,869,135 | tauri | [feat] Listen to keyboard layout changes | ### Describe the problem
Be able to listen to keyboard layout changes in a cross-platform way. This is useful for apps that depend on the current layout for localization or accessibility.
Tao already listens to keyboard layout changes on Windows and MacOS, but doesn't expose them via a public API (ref [Windows](https://github.com/tauri-apps/tao/blob/ad652e50bfca1195481cd347ccaa486818f9334d/src/platform_impl/windows/event.rs#L135), [MacOS](https://github.com/tauri-apps/tao/blob/ad652e50bfca1195481cd347ccaa486818f9334d/src/platform_impl/macos/event.rs#L79)).
On Windows, it's particularly annoying to hook into keyboard layout changes (e.g. `WM_INPUTLANGCHANGE` is only broadcast to windows _if they're focused_), and creating a secondary keyboard hook to listen to these changes is expensive.
### Describe the solution you'd like
Expose a keyboard layout change event
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,521,910,952 | ollama | ROCm 6.2 upgrade? | Would it be possible to upgrade from 6.1.2 -> 6.2 for rocm?
It has [improved vLLM support](https://rocm.docs.amd.com/en/docs-6.2.0/about/release-notes.html#improved-vllm-support), which I assume would be advantageous for ollama.
Thx | feature request,amd | low | Major |
2,521,925,404 | TypeScript | TS 5.6 requires composite projects with noEmit to have fully accessible types, unlike 5.5 | ### 🔎 Search Terms
TS2742, TS7056, TS9006, TS4058, TS4023, noEmit, composite, 5.6
### 🕗 Version & Regression Information
- This changed between versions 5.5.4 and 5.6.2
### ⏯ Playground Link
https://github.com/Ambroos/ts-56-composite-project-noemit
### 💻 Code
file: **Internal.ts**
```ts
interface Internal {
a: boolean,
}
export default function somethingInternal() {
return {
one: { a: true } as Internal,
};
}
```
file: **index.ts**
```ts
import createInternal from "./Internal";
export const one = createInternal().one;
```
file: **tsconfig.json**
```json
{
"compilerOptions": {
"composite": true,
"noEmit": true,
"strict": true,
},
"include": [
"*.ts"
]
}
```
### 🙁 Actual behavior
In TS 5.6.2, run `tsc` and get error:
```
index.ts:3:14 - error TS4023: Exported variable 'one' has or is using name 'Internal' from external module "/Users/ambroos/Dev/ts-composite-5.6/Internal" but cannot be named.
3 export const one = createInternal().one;
```
This is the error in the sample project linked above (or created with the files from the code part). In our actual repo, a large number of related errors appeared: TS2742, TS7056, TS9006, TS4058, TS4023.
### 🙂 Expected behavior
No errors, like 5.5.4 and older.
### Additional information about the issue
This only happens in composite projects, even when not using `--build`. We have a large composite project, but the oldest parts of our code live in projects that don't emit since no-one is allowed to depend on them (they're apps that get built as-is).
In the past, when not emitting type declarations, it was OK to reference types that cannot be named, even in incremental compilation / composite projects. Now, however, because `.tsbuildinfo` is always generated, it seems that the code must be fully declaration-emittable even when nothing is actually emitted.
Maybe this is intentional, but it would be nice to still have some escape hatch for these types of composite project parts that don't actually need to emit declarations, they just need to be checked and transpiled. | Needs Investigation | low | Critical |
2,521,938,940 | next.js | Missing `sass-embedded` peer dependency | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/gracious-voice-8mxdxl?workspaceId=eb53ed51-e5b9-4293-8298-f0747af5ed78
### To Reproduce
1. `create-next-app`
2. add `sass-embedded`
3. create a `.scss` file and import
4. `next dev`
### Current vs. Expected behavior
## Current
```console
next tried to access sass (a peer dependency) but it isn't provided by its ancestors; this makes the require call ambiguous and unsound.
Required package: sass
Required by: next@npm:14.2.8 (via C:\<WORKSPACE>\node_modules\next\dist\compiled\sass-loader\)
```
```console
To use Next.js' built-in Sass support, you first need to install `sass`.
Run `npm i sass` or `yarn add sass` inside your workspace.
Learn more: https://nextjs.org/docs/messages/install-sass
```
## Expected:
No error; `sass-embedded` is an optional peer dependency.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home
Available memory (MB): 32510
Available CPU cores: 16
Binaries:
Node: 22.3.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.8 // There is a newer version (14.2.10) available, upgrade recommended!
eslint-config-next: 14.2.8
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
⚠ There is a newer version (14.2.10) available, upgrade recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Runtime, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
`sass-embedded` support is already added in #64577 (discussion #36160) | create-next-app,bug,Webpack,Runtime | low | Critical |
2,522,008,873 | transformers | Whisper Beam Search doesn't work | ### System Info
```
- `transformers` version: 4.45.0.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): 2.15.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce GTX 1070 Ti
```
### Who can help?
@ylacombe @eustlb
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Download an audio sample https://drive.google.com/file/d/1eVeFUyfHWMpmFSRYxmBWaNe_JLEQqT8G/view?usp=sharing
2. Use transformers v4.41 + my fix from #32970 (it allows to output sequence_score)
3. Run the code below to get 5 hypotheses of Beam Search on audio transcription
```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
import torch
import librosa
# Load the processor and model
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")
# Load and preprocess the audio file
audio_path = "audio.mp3"
audio, sr = librosa.load(audio_path, sr=16000) # Ensure the sample rate is 16kHz
# Preprocess the audio to get the input features
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# Generate the transcription using Beam Search with the model
beam_outputs = model.generate(
inputs["input_features"],
num_beams=5, # Number of beams
num_return_sequences=5, # Number of hypotheses to return
early_stopping=True,
output_scores=True,
return_dict_in_generate=True,
)
# Decode the generated transcriptions
hypotheses = [processor.decode(output_ids, skip_special_tokens=True) for output_ids in beam_outputs.sequences]
# Print out the hypotheses
for i, hypothesis in enumerate(hypotheses):
print(f"Hypothesis {i + 1}: {hypothesis}. Score: {beam_outputs.sequences_scores[i]}")
```
### Expected behavior
Together with @ylacombe we identified that after Pull Request #30984 Whisper Beam Search generation doesn't work as intended.
See more detailed discussion on Pull Request #32970
The code above must return 5 unique hypotheses due to the core principle of the Beam Search - to select `num_beams` best tokens in a top_k sampling fashion. Instead, we are getting the same results with the highest probability. See below for how Beam Search used to work in version v4.25.1 and how it works now.
transformers v4.25.1
```
Hypothesis 1: How is Mozilla going to handle and be with this? Thank you.. Score: -0.4627407491207123
Hypothesis 2: How is Mozilla going to handle and be with this? Thank you and Q.. Score: -0.4789799749851227
Hypothesis 3: How is Mozilla going to handle and be with this? Thank you, and cute.. Score: -0.48414239287376404
Hypothesis 4: How is Mozilla going to handle and be with this? Thank you and cute.. Score: -0.4972183108329773
Hypothesis 5: How is Mozilla going to handle and be with this? Thank you, and Q.. Score: -0.5054414868354797
```
transformers v4.44.1 + My Fix from #32970
```
Hypothesis 1: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495038032531738
Hypothesis 2: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495040416717529
Hypothesis 3: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495036840438843
Hypothesis 4: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495036244392395
Hypothesis 5: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495033264160156
```
@ylacombe has found the bug in [_expand_variables_for_generation](https://github.com/huggingface/transformers/blob/516ee6adc2a6ac2f4800790cabaad66a1cb4dcf4/src/transformers/models/whisper/generation_whisper.py#L1076-L1084) function.
The function artificially expands the batch size to `num_return_sequences`, which causes an issue when this expanded batch size is passed to `GenerationMixin.generate`. Specifically, if `batch_size=5` and `num_return_sequences > 1`, the model generates `batch_size * num_beams` beams but retains only the most probable beam for each element of the original batch.
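A minimal plain-Python sketch of the mismatch (no transformers; function and string names are illustrative): pre-expanding the batch and then keeping one beam per expanded element yields `num_return_sequences` identical copies, whereas beam search should keep that many distinct beams per original element.

```python
def expand_then_generate(batch, num_return_sequences):
    # _expand_variables_for_generation-style pre-expansion: duplicate each
    # input num_return_sequences times *before* calling generate().
    expanded = [x for x in batch for _ in range(num_return_sequences)]
    # generate() then keeps only the single most probable beam per expanded
    # element, so every duplicate collapses to the same hypothesis.
    return [f"best_hypothesis({x})" for x in expanded]

def beam_search_return_sequences(batch, num_return_sequences):
    # Expected behaviour: num_return_sequences *distinct* beams per element.
    return [f"hypothesis_{k}({x})" for x in batch for k in range(num_return_sequences)]

buggy = expand_then_generate(["audio"], 5)
expected = beam_search_return_sequences(["audio"], 5)
print(len(set(buggy)), len(set(expected)))  # 1 distinct hypothesis vs 5
```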
## Impact
This bug results in the `num_return_sequences` parameter not being compatible with both short-form and long-form generation. Users expecting multiple return sequences will only receive the most probable sequence, which may not meet the intended use case.
cc @eustlb | bug,Generation,Audio | low | Critical |
2,522,025,994 | godot | Can't parse execute_with_pipe stdio data | ### Tested versions
v4.4.dev2.mono.official [97ef3c837]
### System information
Godot v4.4.dev2.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 5GB (NVIDIA; 31.0.15.3623) - AMD Ryzen 9 3900X 12-Core Processor (24 Threads)
### Issue description
I'm trying to capture `execute_with_pipe`'s output.
```gdscript
extends Control

var pipe : FileAccess
var thread : Thread

func _ready() -> void:
	var dic = OS.execute_with_pipe("cmd",["/c", "ping 127.0.0.1"])
	pipe = dic.stdio
	prints(pipe)
	thread = Thread.new()
	thread.start(_thread_func)
	get_window().close_requested.connect(clean_func)

func _thread_func():
	while pipe.is_open() and pipe.get_error()==OK:
		_add_char.call_deferred(pipe.get_8())
	prints("all done", pipe.is_open(), pipe.get_error())

var data := PackedByteArray()

func _add_char(c):
	data.append(c)
	$TextEdit.text = data.get_string_from_utf8()
	prints("ccc",c,Engine.get_frames_drawn())

func clean_func():
	pipe.close()
	thread.wait_to_finish()

func _on_line_edit_text_submitted(new_text: String) -> void:
	var cmd :String = new_text+"\n"
	var buffer = cmd.to_utf8_buffer()
	pipe.store_buffer(buffer)
	$LineEdit.clear()
```
However, I can't get the correct data.


### Steps to reproduce
/
### Minimal reproduction project (MRP)
[CmdTest.zip](https://github.com/user-attachments/files/16977745/CmdTest.zip)
| platform:windows,needs testing | low | Critical |
2,522,031,864 | godot | Scaled 2d collisions break sliding through `move_and_slide()` | ### Tested versions
- Reproducible in 4.2 and 4.3
### System information
Godot v4.3.stable - macOS 14.6.1 - Vulkan (Mobile) - integrated Apple M3 Max - Apple M3 Max (14 Threads)
### Issue description
I'm making a tower defense game ([Rift Riff](https://store.steampowered.com/app/2800900/Rift_Riff/)), which is a non-tile-based isometric game. To match object collisions with the isometric graphics, I regularly use a `CollisionShape2D` with its shape set to a `CircleShape2D` and the `CollisionShape2D`'s transform scale set to `1.0, 0.7` so that the collision and graphics match.
However, for entities that move, I've found that this often breaks the sliding behavior in `move_and_slide()`. As scaling the collision works, both technically in the game and visually in the editor, and doesn't give any errors, I believe this is a bug and not unsupported behavior. If this is unsupported behavior (which would be very undesirable for my game) then perhaps the transform scaling property of `CollisionShape2D`s should be read only or perhaps show a warning when scale is adjusted.
An example from within my game:
https://github.com/user-attachments/assets/412e0806-8bdd-4d01-a774-0b18446da1a5
And here an example when the `CollisionShape2D`s transform scale is simply `1.0, 1.0` (which works flawlessly):
https://github.com/user-attachments/assets/8825bd23-e690-4931-a58c-7c164e8ef744
Here a small vid of the MRP I created to show the faulty behavior:
https://github.com/user-attachments/assets/f578258e-58cd-437f-b593-bdcd076d3894
### Steps to reproduce
Play the MRP and notice how the characters with scaled colliders can get stuck in a place where the characters with normal colliders don't.
### Minimal reproduction project (MRP)
[slide-fail.zip](https://github.com/user-attachments/files/16977720/slide-fail.zip)
| topic:physics,topic:2d | low | Critical |
2,522,034,880 | flutter | Able to set full screen width for MenuAnchor | ### Steps to reproduce
When the MenuAnchor's menu content is set to the full screen width, the rendered menu does not actually span the full screen.
### Expected results
The MenuAnchor menu can be set to the full screen width.
### Actual results
The MenuAnchor menu does not span the full screen width.
### Code sample
<details open><summary>Code sample</summary>
```dart
MenuAnchor(
childFocusNode: _buttonFocusNode,
menuChildren: <Widget>[
MenuItemButton(
child: Text(MenuEntry.about.label),
onPressed: () => _activate(MenuEntry.about),
),
SizedBox(
width:MediaQuery.of(context).size.width,
child:Text("dsfdsf")
),
],
builder:
(BuildContext context, MenuController controller, Widget? child) {
return TextButton(
focusNode: _buttonFocusNode,
onPressed: () {
if (controller.isOpen) {
controller.close();
} else {
controller.open();
}
},
child: const Text('OPEN MENU'),
);
},
),
```
</details>
### Screenshots or Video

### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
2,522,071,122 | next.js | SCSS Url Static Assets Failing to load | ### Link to the code that reproduces this issue
https://github.com/theblondealex/scss-reproduction
### To Reproduce
1. Clone the repo
2. Install dependencies
3. Run the dev server
4. Open the browser at http://localhost:3000
5. Go to the `src/app/page.tsx` file
6. Comment out the import of the css file and uncomment the import of the scss file
7. You will notice that the svg is **not** rendered when using the scss file but it is rendered when using the css file
### Current vs. Expected behavior
You will see that if the react-example is run, the SCSS file is interpreted and works correctly.
The expected behavior is that Next.js renders the `url()` SVGs correctly.
SCSS is installed correctly, as the styles are applied; only the SVG fails to load.
A normal image works correctly in the `url()`; the issue is specific to `.svg#ID` sprite SVGs.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #40~22.04.3-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 30 17:30:19 UTC 2
Available memory (MB): 31546
Available CPU cores: 14
Binaries:
Node: 22.4.0
npm: 10.8.1
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 14.2.10 // Latest available version is detected (14.2.10).
eslint-config-next: 14.2.10
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure, Middleware, Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | bug,Middleware,Output (export/standalone) | low | Minor |
2,522,072,371 | PowerToys | Keyboard Manager: Remap shortcut to Send Text, won't allow U+FFFD. | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I am trying to add a "Send Text" remap which contains U+FFFD (� REPLACEMENT CHARACTER), and it will not keep it. The character goes into the box but is not saved.
### ✔️ Expected Behavior
I want to enter this:

### ❌ Actual Behavior
I get this after save and re-open, U+FFFD is missing.

### Other Software
_No response_ | Issue-Bug,Product-Keyboard Shortcut Manager,Area-Quality,Needs-Triage | low | Minor |
2,522,095,511 | pytorch | bmm, topk, cholesky, linalg.norm, max with out variants set causing recompilations in torch.compile | ### 🐛 Describe the bug
Out variants of the following ops cause extra recompilations (in the 3rd iteration) in torch.compile, compared to not using the out variant:
- torch.bmm
- torch.topk
- torch.cholesky
- torch.linalg.norm
- torch.max
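As a rough mental model (plain Python, names hypothetical — this is not how Dynamo's cache is actually keyed), the behaviour looks like a compile cache whose key includes the exact sizes of the `out=` buffers: every new size is a miss, instead of the usual pattern where the second distinct size triggers one dynamic recompilation that later sizes reuse.

```python
# Hypothetical cache model -- not Dynamo's real keying scheme -- showing why
# specializing on exact sizes (as the out= buffers appear to force) means one
# recompile per distinct input size, while a dynamic-shape key recompiles once
# and then generalizes.
compiles = 0
cache = {}

def fake_compile(key):
    """Compile once per cache key; count how often that happens."""
    global compiles
    if key not in cache:
        compiles += 1
        cache[key] = f"kernel<{key}>"
    return cache[key]

# Static keys: every new size is a cache miss.
for n in (5, 7, 9):
    fake_compile(("static", n))
print(compiles)  # 3 -- one compile per distinct size

compiles = 0
cache.clear()

# Dynamic keys: the first call specializes, the second generalizes, and
# later sizes hit the generalized entry -- the behaviour seen without out=.
first = True
for n in (5, 7, 9):
    key = ("static", n) if first else ("dynamic", "rank1")
    first = False
    fake_compile(key)
print(compiles)  # 2
```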
### Error logs
```
I0912 10:49:41.653000 29510 torch/_dynamo/logging.py:57] [0/0] Step 1: torchdynamo start tracing topk_func <ipython-input-11-cf34f6cf9be7>:9
I0912 10:49:41.684000 29510 torch/_dynamo/logging.py:57] [0/0] Step 1: torchdynamo done tracing topk_func (RETURN_VALUE)
I0912 10:49:41.692000 29510 torch/_dynamo/logging.py:57] [0/0] Step 2: calling compiler function inductor
I0912 10:49:41.787000 29510 torch/_dynamo/logging.py:57] [0/0] Step 2: done compiler function inductor
I0912 10:49:41.812000 29510 torch/fx/experimental/symbolic_shapes.py:3646] [0/0] produce_guards
I0912 10:49:41.833000 29510 torch/_dynamo/logging.py:57] [0/1] Step 1: torchdynamo start tracing topk_func <ipython-input-11-cf34f6cf9be7>:9
I0912 10:49:41.872000 29510 torch/fx/experimental/symbolic_shapes.py:3557] [0/1] create_symbol s0 = 7 for L['input'].size()[0] [2, int_oo] at <ipython-input-11-cf34f6cf9be7>:10 in topk_func (_dynamo/variables/builder.py:2711 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
I0912 10:49:41.895000 29510 torch/fx/experimental/symbolic_shapes.py:4857] [0/1] set_replacement s0 = 7 (range_refined_to_singleton) VR[7, 7]
I0912 10:49:41.902000 29510 torch/fx/experimental/symbolic_shapes.py:5106] [0/1] eval Eq(s0, 7) [guard added] at <ipython-input-11-cf34f6cf9be7>:10 in topk_func (utils/_stats.py:21 in wrapper), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 7)"
I0912 10:49:41.908000 29510 torch/_dynamo/logging.py:57] [0/1] Step 1: torchdynamo done tracing topk_func (RETURN_VALUE)
I0912 10:49:41.915000 29510 torch/_dynamo/logging.py:57] [0/1] Step 2: calling compiler function inductor
Iter 1: No of recompiles: 0
I0912 10:49:42.168000 29510 torch/_dynamo/logging.py:57] [0/1] Step 2: done compiler function inductor
I0912 10:49:42.199000 29510 torch/fx/experimental/symbolic_shapes.py:3646] [0/1] produce_guards
I0912 10:49:42.234000 29510 torch/_dynamo/logging.py:57] [0/2] Step 1: torchdynamo start tracing topk_func <ipython-input-11-cf34f6cf9be7>:9
I0912 10:49:42.277000 29510 torch/fx/experimental/symbolic_shapes.py:3557] [0/2] create_symbol s0 = 9 for L['input'].size()[0] [2, int_oo] at <ipython-input-11-cf34f6cf9be7>:10 in topk_func (_dynamo/variables/builder.py:2711 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
I0912 10:49:42.295000 29510 torch/fx/experimental/symbolic_shapes.py:4857] [0/2] set_replacement s0 = 9 (range_refined_to_singleton) VR[9, 9]
I0912 10:49:42.297000 29510 torch/fx/experimental/symbolic_shapes.py:5106] [0/2] eval Eq(s0, 9) [guard added] at <ipython-input-11-cf34f6cf9be7>:10 in topk_func (utils/_stats.py:21 in wrapper), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 9)"
I0912 10:49:42.308000 29510 torch/_dynamo/logging.py:57] [0/2] Step 1: torchdynamo done tracing topk_func (RETURN_VALUE)
I0912 10:49:42.328000 29510 torch/_dynamo/logging.py:57] [0/2] Step 2: calling compiler function inductor
Iter 2: No of recompiles: 1
I0912 10:49:42.508000 29510 torch/_dynamo/logging.py:57] [0/2] Step 2: done compiler function inductor
I0912 10:49:42.522000 29510 torch/fx/experimental/symbolic_shapes.py:3646] [0/2] produce_guards
Iter 3: No of recompiles: 2
```
### Minified repro
torch.topk
```
import torch
def get_num_torch_recompiles():
    guard_failures = torch._dynamo.utils.guard_failures
    num_recompiles = [len(guard_failures[code]) for code in guard_failures]
    return 0 if len(num_recompiles) == 0 else max(num_recompiles)
total_comps = 0
def topk_func(input, k, out):
    torch.topk(input, k, out=out)
torch._dynamo.reset()
opt_model = torch.compile(topk_func)
values = torch.empty(3)
indices = torch.empty(3, dtype=torch.long)
x = torch.arange(1., 6.)
opt_model(x, 3, out=(values, indices))
print(f"Iter 1: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
x = torch.arange(1., 8.)
opt_model(x, 3, out=(values, indices))
print(f"Iter 2: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
x = torch.arange(1., 10.)
opt_model(x, 3, out=(values, indices))
print(f"Iter 3: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
```
torch.bmm
```
import torch
def get_num_torch_recompiles():
    guard_failures = torch._dynamo.utils.guard_failures
    num_recompiles = [len(guard_failures[code]) for code in guard_failures]
    return 0 if len(num_recompiles) == 0 else max(num_recompiles)
total_comps = 0
def bmm_func(input, mat, out):
    torch.bmm(input, mat, out=out)
torch._dynamo.reset()
opt_model = torch.compile(bmm_func)
input1 = torch.randn(10, 3, 4)
mat1 = torch.randn(10, 4, 5)
out1 = torch.empty(10, 3, 5)
opt_model(input1, mat1, out=out1)
print(f"Iter 1: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
input2 = torch.randn(12, 5, 6)
mat2 = torch.randn(12, 6, 7)
out2 = torch.empty(12, 5, 7)
opt_model(input2, mat2, out=out2)
print(f"Iter 2: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
input3 = torch.randn(14, 7, 8)
mat3 = torch.randn(14, 8, 9)
out3 = torch.empty(14, 7, 9)
opt_model(input3, mat3, out=out3)
print(f"Iter 3: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
```
torch.cholesky
```
import torch
def get_num_torch_recompiles():
    guard_failures = torch._dynamo.utils.guard_failures
    num_recompiles = [len(guard_failures[code]) for code in guard_failures]
    return 0 if len(num_recompiles) == 0 else max(num_recompiles)
total_comps = 0
def cholesky_func(input, out):
    torch.linalg.cholesky(input, out=out)
torch._dynamo.reset()
opt_model = torch.compile(cholesky_func)
values = torch.randn(8, 32, 32)
ifm_positive_definite = values @ values.mT + torch.eye(values.shape[-1])
opt_model(ifm_positive_definite, out=ifm_positive_definite)
print(f"Iter 1: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
values = torch.randn(9, 32, 32)
ifm_positive_definite = values @ values.mT + torch.eye(values.shape[-1])
opt_model(ifm_positive_definite, out=ifm_positive_definite)
print(f"Iter 2: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
values = torch.randn(10, 32, 32)
ifm_positive_definite = values @ values.mT + torch.eye(values.shape[-1])
opt_model(ifm_positive_definite, out=ifm_positive_definite)
print(f"Iter 3: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
```
torch.linalg.norm
```
import torch
def get_num_torch_recompiles():
    guard_failures = torch._dynamo.utils.guard_failures
    num_recompiles = [len(guard_failures[code]) for code in guard_failures]
    return 0 if len(num_recompiles) == 0 else max(num_recompiles)
total_comps = 0
def model_norm(inputs, out):
    return torch.linalg.norm(inputs, ord=-1, dim=(0, 1), out=out)
torch._dynamo.reset()
opt_model = torch.compile(model_norm)
a = torch.rand((16, 16, 550), dtype=torch.bfloat16)
out = torch.empty(550, dtype=torch.bfloat16)
opt_model(a, out)
print(f"Iter 1: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
a = torch.rand((16, 16, 32), dtype=torch.bfloat16)
out = torch.empty(32, dtype=torch.bfloat16)
opt_model(a, out)
print(f"Iter 2: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
a = torch.rand((16, 16, 341), dtype=torch.bfloat16)
out = torch.empty(341, dtype=torch.bfloat16)
opt_model(a, out)
print(f"Iter 3: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
a = torch.rand((16, 16, 402), dtype=torch.bfloat16)
out = torch.empty(402, dtype=torch.bfloat16)
opt_model(a, out)
print(f"Iter 4: No of recompiles: {get_num_torch_recompiles() - total_comps}")
total_comps = get_num_torch_recompiles()
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0.dev20240911+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.5.0.dev20240911+cpu
[pip3] torchaudio==2.4.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0+cu121
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu | good first issue,triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,522,105,213 | pytorch | Lobpcg for complex gives wrong results. | ### 🐛 Describe the bug
As shown in the following example, I first create a complex Hermitian matrix `a` and a vector `x`, then call lobpcg in PyTorch and in SciPy, and finally invoke the exact eigensolver. SciPy's lobpcg gives the expected result, but PyTorch's lobpcg does not.
```test.py
import torch
import scipy
torch.manual_seed(42)
d = 5
a = torch.randn(d, d, dtype=torch.complex128)
a = a.H + a
x = torch.randn(d, 1, dtype=torch.complex128)
print(a, x)
print(torch.lobpcg(a, X=x))
print(scipy.sparse.linalg.lobpcg(a.numpy(), x.numpy()))
print(torch.linalg.eigh(a))
```
```output.log
tensor([[ 0.4236+0.0000j, 0.5033-0.3287j, 0.4483-0.1792j, -0.6542-1.8410j,
-0.1899+0.8263j],
[ 0.5033+0.3287j, -1.4446+0.0000j, 1.2191-1.1183j, 0.7611+0.6195j,
0.1108-0.3507j],
[ 0.4483+0.1792j, 1.2191+1.1183j, 0.1188+0.0000j, -1.1936+0.7518j,
0.2993+1.7354j],
[-0.6542+1.8410j, 0.7611-0.6195j, -1.1936-0.7518j, 0.6263+0.0000j,
0.8081+0.1312j],
[-0.1899-0.8263j, 0.1108+0.3507j, 0.2993-1.7354j, 0.8081-0.1312j,
0.8176+0.0000j]], dtype=torch.complex128) tensor([[-1.1505-0.9865j],
[-0.1688-0.3571j],
[-1.7502-0.6587j],
[-0.0944+0.2415j],
[-0.0506-0.0643j]], dtype=torch.complex128)
(tensor([1002710.1510+0.j], dtype=torch.complex128), tensor([[ 110.2221+269.0265j],
[-188.2007+179.2590j],
[ 296.8929+139.1052j],
[ 262.9413-88.3040j],
[ -50.8417+275.2815j]], dtype=torch.complex128))
(array([3.55880664]), array([[-0.00702178+0.29407347j],
[ 0.12385841-0.18879456j],
[ 0.54458674+0.13835477j],
[-0.31491075-0.4613058j ],
[ 0.15042302-0.46062081j]]))
torch.return_types.linalg_eigh(
eigenvalues=tensor([-3.3956, -1.9695, 0.1831, 2.1651, 3.5588], dtype=torch.float64),
eigenvectors=tensor([[ 0.2714+0.0000j, 0.5647-0.0000j, 0.2598+0.0000j, -0.6733+0.0000j,
0.2942+0.0000j],
[-0.4423+0.4263j, -0.0485-0.5086j, 0.3862+0.3237j, -0.1537-0.1820j,
-0.1917-0.1193j],
[ 0.4172-0.2703j, -0.4410-0.0812j, 0.0061+0.3763j, -0.1446-0.2712j,
0.1253-0.5477j],
[ 0.2259-0.3479j, 0.0792-0.4511j, -0.1122-0.2507j, -0.0840-0.4730j,
-0.4537+0.3258j],
[ 0.1085+0.3436j, 0.0726+0.0624j, -0.6481+0.2036j, -0.3482+0.2085j,
-0.4641-0.1394j]], dtype=torch.complex128))
```
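Whichever backend is used, a quick sanity check for a claimed eigenpair is the residual ‖Ax − λx‖, which a correct pair drives to ~0. A minimal pure-Python sketch (the 2×2 Hermitian matrix here is illustrative, not the matrix from the repro):

```python
# Pure-Python eigenpair residual check for a small Hermitian matrix.

def matvec(A, x):
    """Multiply a dense matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def residual_norm(A, lam, x):
    """Euclidean norm of A @ x - lam * x."""
    Ax = matvec(A, x)
    return sum(abs(axi - lam * xi) ** 2 for axi, xi in zip(Ax, x)) ** 0.5

# A = [[2, i], [-i, 2]] is Hermitian with eigenvalues 1 and 3.
A = [[2 + 0j, 1j], [-1j, 2 + 0j]]

# Eigenpair for lam = 3: x = (i, 1) / sqrt(2).
s = 2 ** -0.5
x = [1j * s, s]
print(round(residual_norm(A, 3.0, x), 9))  # 0.0
# A wildly wrong eigenvalue (like the 1002710.15 torch returned above)
# produces a huge residual:
print(residual_norm(A, 1002710.15, x) > 1e5)  # True
```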
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240911+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.3
Libc version: glibc-2.40
Python version: 3.12.5 (main, Aug 9 2024, 08:20:41) [GCC 14.2.1 20240805] (64-bit runtime)
Python platform: Linux-6.6.49-1-lts-x86_64-with-glibc2.40
Is CUDA available: False
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 72%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy_extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240911+cpu
[pip3] torch_geometric==2.5.3
[pip3] torchaudio==2.5.0.dev20240911+cpu
[pip3] torchvision==0.20.0.dev20240911+cpu
[conda] Could not collect
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @amjames @jianyuh @pearu @walterddr @xwang233 | triaged,module: complex,module: linear algebra,module: correctness (silent) | low | Critical |
2,522,120,515 | flutter | FlutterEngineSendKeyEvent function didn't work in flutter embedder of glfw | ### Steps to reproduce
1. Use the FlutterEmbedderGLFW demo
2. Call glfwSetKeyCallback with a callback function, and send the key event with FlutterEngineSendKeyEvent
### Expected results
The key event should be sent and the text input should contain the typed characters
### Actual results
The text input is empty
### Code sample
<details open><summary>Code sample</summary>
```cpp
// Copyright 2013 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include <cassert>
#include <chrono>
#include <iostream>
#include <GLFW/glfw3.h>
#include <flutter_embedder.h>
// This value is calculated after the window is created.
static double g_pixelRatio = 1.0;
static const size_t kInitialWindowWidth = 800;
static const size_t kInitialWindowHeight = 600;
static constexpr FlutterViewId kImplicitViewId = 0;
static_assert(FLUTTER_ENGINE_VERSION == 1,
"This Flutter Embedder was authored against the stable Flutter "
"API at version 1. There has been a serious breakage in the "
"API. Please read the ChangeLog and take appropriate action "
"before updating this assertion");
void GLFWcursorPositionCallbackAtPhase(GLFWwindow* window,
FlutterPointerPhase phase,
double x,
double y) {
FlutterPointerEvent event = {};
event.struct_size = sizeof(event);
event.phase = phase;
event.x = x * g_pixelRatio;
event.y = y * g_pixelRatio;
event.timestamp =
std::chrono::duration_cast<std::chrono::microseconds>(
std::chrono::high_resolution_clock::now().time_since_epoch())
.count();
// This example only supports a single window, therefore we assume the pointer
// event occurred in the only view, the implicit view.
event.view_id = kImplicitViewId;
FlutterEngineSendPointerEvent(
reinterpret_cast<FlutterEngine>(glfwGetWindowUserPointer(window)), &event,
1);
}
void GLFWcursorPositionCallback(GLFWwindow* window, double x, double y) {
GLFWcursorPositionCallbackAtPhase(window, FlutterPointerPhase::kMove, x, y);
}
void GLFWmouseButtonCallback(GLFWwindow* window,
int key,
int action,
int mods) {
if (key == GLFW_MOUSE_BUTTON_1 && action == GLFW_PRESS) {
double x, y;
glfwGetCursorPos(window, &x, &y);
GLFWcursorPositionCallbackAtPhase(window, FlutterPointerPhase::kDown, x, y);
glfwSetCursorPosCallback(window, GLFWcursorPositionCallback);
}
if (key == GLFW_MOUSE_BUTTON_1 && action == GLFW_RELEASE) {
double x, y;
glfwGetCursorPos(window, &x, &y);
GLFWcursorPositionCallbackAtPhase(window, FlutterPointerPhase::kUp, x, y);
glfwSetCursorPosCallback(window, nullptr);
}
}
static void GLFWKeyCallback(GLFWwindow* window,
int key,
int scancode,
int action,
int mods) {
if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS) {
glfwSetWindowShouldClose(window, GLFW_TRUE);
}
FlutterKeyEventType type;
switch (action) {
case 0:
type = kFlutterKeyEventTypeUp;
break;
case 1:
type = kFlutterKeyEventTypeDown;
break;
case 2:
type = kFlutterKeyEventTypeRepeat;
break;
}
FlutterKeyEvent event {};
event.struct_size = sizeof(FlutterKeyEvent);
event.timestamp = std::chrono::duration_cast<std::chrono::microseconds>(
std::chrono::high_resolution_clock::now().time_since_epoch())
.count();
event.type = type;
event.physical = scancode;
event.logical = key;
event.character = glfwGetKeyName(key, scancode);
event.synthesized = true;
event.device_type = kFlutterKeyEventDeviceTypeKeyboard;
FlutterEngineResult r = FlutterEngineSendKeyEvent(
reinterpret_cast<FlutterEngine>(glfwGetWindowUserPointer(window)),
&event,
NULL,
NULL
);
std::cout << r << std::endl;
std::cout << key << std::endl;
std::cout << scancode << std::endl;
std::cout << action << std::endl;
std::cout << mods << std::endl;
std::cout << "----------------------------------" << std::endl;
}
void GLFWwindowSizeCallback(GLFWwindow* window, int width, int height) {
FlutterWindowMetricsEvent event = {};
event.struct_size = sizeof(event);
event.width = width * g_pixelRatio;
event.height = height * g_pixelRatio;
event.pixel_ratio = g_pixelRatio;
event.view_id = kImplicitViewId;
FlutterEngineSendWindowMetricsEvent(
reinterpret_cast<FlutterEngine>(glfwGetWindowUserPointer(window)),
&event);
}
bool RunFlutter(GLFWwindow* window,
const std::string& project_path,
const std::string& icudtl_path) {
FlutterRendererConfig config = {};
config.type = kOpenGL;
config.open_gl.struct_size = sizeof(config.open_gl);
config.open_gl.make_current = [](void* userdata) -> bool {
glfwMakeContextCurrent(static_cast<GLFWwindow*>(userdata));
return true;
};
config.open_gl.clear_current = [](void*) -> bool {
glfwMakeContextCurrent(nullptr); // is this even a thing?
return true;
};
config.open_gl.present = [](void* userdata) -> bool {
glfwSwapBuffers(static_cast<GLFWwindow*>(userdata));
return true;
};
config.open_gl.fbo_callback = [](void*) -> uint32_t {
return 0; // FBO0
};
config.open_gl.gl_proc_resolver = [](void*, const char* name) -> void* {
return reinterpret_cast<void*>(glfwGetProcAddress(name));
};
// This directory is generated by `flutter build bundle`.
std::string assets_path = project_path + "/build/flutter_assets";
FlutterProjectArgs args = {
.struct_size = sizeof(FlutterProjectArgs),
.assets_path = assets_path.c_str(),
.icu_data_path =
icudtl_path.c_str(), // Find this in your bin/cache directory.
};
FlutterEngine engine = nullptr;
FlutterEngineResult result =
FlutterEngineRun(FLUTTER_ENGINE_VERSION, &config, // renderer
&args, window, &engine);
if (result != kSuccess || engine == nullptr) {
std::cout << "Could not run the Flutter Engine." << std::endl;
return false;
}
glfwSetWindowUserPointer(window, engine);
GLFWwindowSizeCallback(window, kInitialWindowWidth, kInitialWindowHeight);
return true;
}
void GLFW_ErrorCallback(int error, const char* description) {
std::cout << "GLFW Error: (" << error << ") " << description << std::endl;
}
int main(int argc, const char* argv[]) {
std::string project_path = "./demo";
#ifdef _WIN32
std::string icudtl_path = "./data/windows/icudtl.dat";
#else
std::string icudtl_path = "./data/linux/icudtl.dat";
#endif
glfwSetErrorCallback(GLFW_ErrorCallback);
int result = glfwInit();
if (result != GLFW_TRUE) {
std::cout << "Could not initialize GLFW." << std::endl;
return EXIT_FAILURE;
}
glfwWindowHint(GLFW_CONTEXT_CREATION_API, GLFW_EGL_CONTEXT_API);
glfwWindowHint(GLFW_TRANSPARENT_FRAMEBUFFER, GLFW_TRUE);
GLFWwindow* window = glfwCreateWindow(
kInitialWindowWidth, kInitialWindowHeight, "Flutter", NULL, NULL);
if (window == nullptr) {
std::cout << "Could not create GLFW window." << std::endl;
return EXIT_FAILURE;
}
int framebuffer_width, framebuffer_height;
glfwGetFramebufferSize(window, &framebuffer_width, &framebuffer_height);
g_pixelRatio = framebuffer_width / kInitialWindowWidth;
bool run_result = RunFlutter(window, project_path, icudtl_path);
if (!run_result) {
std::cout << "Could not run the Flutter engine." << std::endl;
return EXIT_FAILURE;
}
glfwSetKeyCallback(window, GLFWKeyCallback);
glfwSetWindowSizeCallback(window, GLFWwindowSizeCallback);
glfwSetMouseButtonCallback(window, GLFWmouseButtonCallback);
while (!glfwWindowShouldClose(window)) {
glfwWaitEvents();
}
glfwDestroyWindow(window);
glfwTerminate();
return EXIT_SUCCESS;
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/bcf2dc69-ebde-4fc3-bc72-759de4c7cf4f
</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on Arch Linux 6.10.9-zen1-2-zen, locale zh_CN.UTF-8)
• Flutter version 3.24.2 on channel stable at /home/coder2/flutter/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (9 天前), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• clang version 18.1.8
• cmake version 3.30.3
• ninja version 1.12.1
• pkg-config version 2.1.1
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[✓] IntelliJ IDEA Community Edition (version 2024.1)
• IntelliJ at /home/coder2/idea-IC-241.14494.240
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[!] Proxy Configuration
• HTTP_PROXY is set
! NO_PROXY is not set
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Arch Linux 6.10.9-zen1-2-zen
[✓] Network resources
• All expected network resources are available.
(The demo is used for embedding into Minecraft, so I didn't install the android and chrome toolchains)
```
</details>
| a: text input,engine,e: embedder,a: desktop,P3,team-engine,triaged-engine | low | Critical |
2,522,154,273 | puppeteer | [Feature]: Add unpacking indicator to the install command | ### Feature description
Currently we show a download progress indicator in the terminal while a browser is downloading, but after a successful download we unpack the archive and perform other operations that are not communicated to the user. This makes the process look stuck, so we should show some loading state there to make it clear what is going on.
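A generic sketch of the requested UX — run a spinner on a background thread while the blocking unpack step executes, so the terminal doesn't look frozen. This is illustrative Python, not `@puppeteer/browsers` code; the `work` callable is a stand-in for the real unpack step:

```python
import itertools
import sys
import threading
import time

def with_spinner(label, work):
    """Run `work()` while animating a spinner on stderr."""
    done = threading.Event()

    def spin():
        for ch in itertools.cycle("|/-\\"):
            if done.is_set():
                break
            sys.stderr.write(f"\r{label} {ch}")
            sys.stderr.flush()
            time.sleep(0.05)
        # Clear the spinner line once the work is finished.
        sys.stderr.write("\r" + " " * (len(label) + 2) + "\r")

    t = threading.Thread(target=spin)
    t.start()
    try:
        return work()  # the blocking unpack/install step
    finally:
        done.set()
        t.join()

# Stand-in for the unpack step; returns its result unchanged.
result = with_spinner("Unpacking browser...", lambda: sum(range(1000)))
print(result)  # 499500
```

The same pattern (animate on a timer, stop when the blocking call returns) applies regardless of language.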
| feature,good first issue,@puppeteer/browsers,P3 | low | Minor |
2,522,157,000 | godot | Editor, save script file: Triple quote string: spaces are converted into tabs | ### Tested versions
v4.5.stable.official
### System information
Windows 11
### Issue description
When saving a GDScript file in the editor, leading spaces are converted into tabs. That's fine, except that the conversion also happens inside a triple-quoted string when a line starts with a space.
This prevents the user from formatting the triple-quoted string as they prefer.
This is immediately visible because of how tabs are rendered in the editor (the "right arrow" symbol)
### Steps to reproduce
Example:
```
var triplequotestring = """Hello, world!
Hello,
World!
"""
```
The spaces in the last two lines will be converted to tabs when saving the file.
| bug,discussion,topic:editor | low | Minor |
2,522,209,703 | ollama | Add Tokenizer functionality to API | Having access to the models tokenizer is extremely useful for counting tokens, and managing the context window. In a lot of cases its essential to get an LLM implementation to work properly. The model already has the tokenizer loaded, and ollama's backend, llama.cpp, already has an interface for the tokenizer, so it shouldn't be that difficult to implement into the API.
Unless this is already a functionality on the API, in which case I'm sorry, but I just didn't see it in the documentation. | feature request,api | low | Minor |
2,522,232,689 | go | x/time/rate: wrong number of tokens restored in Reservation.CancelAt | ### Go version
go version go1.23.0 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='arm64'
GOBIN='/Users/bytedance/go_repos/bin'
GOCACHE='/Users/bytedance/Library/Caches/go-build'
GOENV='/Users/bytedance/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/bytedance/go_repos/pkg/mod'
GONOPROXY='*.byted.org,*.everphoto.cn,git.smartisan.com'
GONOSUMDB='*.byted.org,*.everphoto.cn,git.smartisan.com'
GOOS='darwin'
GOPATH='/Users/bytedance/go_repos'
GOPRIVATE='*.byted.org,*.everphoto.cn,git.smartisan.com'
GOPROXY='https://goproxy.cn,direct'
GOROOT='/opt/homebrew/opt/go/libexec'
GOSUMDB='sum.golang.google.cn'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/opt/go/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/bytedance/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/bytedance/Test/go_demo/time/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/k9/cdmv9r354_l13tvkrgn09f2m0000gn/T/go-build1012911903=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I suspect the following line contains wrong logic.
```
// calculate tokens to restore
// The duration between lim.lastEvent and r.timeToAct tells us how many tokens were reserved
// after r was obtained. These tokens should not be restored.
restoreTokens := float64(r.tokens) - r.limit.tokensFromDuration(r.lim.lastEvent.Sub(r.timeToAct))
```
I think there's no need to subtract the tokens reserved after r was obtained, because those tokens were already subtracted at the time their reservations were made in `reserveN`.
[This stackoverflow post](https://stackoverflow.com/questions/70993567/rate-limiting-cancellation-token-restore) resonates with my idea.
To verify this, I wrote a test case performing the following scenario:
Say we have an initially full limiter with tokens=burst=10 being refilled at rate=1.
- UserA reserves 10 tokens at time=0s. Now tokens=0.
- UserB reserves 1 token at time=0s and be arranged to act at time=1/1=1s. Now tokens=-1.
- UserA cancels immediately at time=0s. By the existing code, only 10-1=9 tokens would be restored, resulting in actual tokens=-1+9=8.
At time=1s, we have only one task worth 1 token to act, and 8 tokens remaining. There's 1 token missing compared to burst=10.
My test code is attached here:
```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func printLim(lim *rate.Limiter, originTime time.Time) {
fmt.Printf("tokens: %+v\n", lim.TokensAt(originTime))
}
func TestReserve() {
lim := rate.NewLimiter(1, 10)
originTime := time.Now()
printLim(lim, originTime)
r0 := lim.ReserveN(originTime, 10)
printLim(lim, originTime)
_ = lim.ReserveN(originTime, 1)
printLim(lim, originTime)
r0.CancelAt(originTime)
printLim(lim, originTime)
}
```
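The same accounting can also be checked with a small language-agnostic simulation. This Python sketch mirrors the token-bucket arithmetic described in the scenario above (it is not the x/time/rate implementation itself):

```python
RATE, BURST = 1.0, 10.0   # tokens per second, bucket capacity

tokens = BURST            # balance at t=0
last_event = 0.0          # timeToAct of the latest reservation

def reserve(n, now):
    """Reserve n tokens; returns (n, time_to_act)."""
    global tokens, last_event
    tokens -= n
    wait = max(0.0, -tokens) / RATE   # delay until enough tokens refill
    time_to_act = now + wait
    last_event = max(last_event, time_to_act)
    return n, time_to_act

def cancel(reservation):
    """Mimic CancelAt's restore formula as quoted in the report."""
    global tokens
    n, time_to_act = reservation
    tokens += n - RATE * (last_event - time_to_act)

r_a = reserve(10, now=0.0)   # UserA: tokens -> 0
r_b = reserve(1, now=0.0)    # UserB: tokens -> -1, acts at t=1
cancel(r_a)                  # restores 10 - 1*(1 - 0) = 9 tokens
print(tokens)                # 8.0 — one token short of the expected 9
```

The simulation reproduces the observed final balance of 8 rather than 9, matching the test output below.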
### What did you see happen?
The test result is:
```
tokens: 10
tokens: 0
tokens: -1
tokens: 8
```
### What did you expect to see?
```
tokens: 10
tokens: 0
tokens: -1
tokens: 9
```
There should be 9 tokens remaining at the last line.
(Edited the test code to focus on tokens.) | ExpertNeeded,NeedsInvestigation | low | Critical |
2,522,280,289 | PowerToys | Support Google Docs in Peek | ### Description of the new feature / enhancement
Add support in Peek for quickly viewing documents in Google Docs / Google Sheets / Google Slides formats when [Google Drive](https://drive.google.com) is connected locally.
### Scenario when this would be used?
The document name and folder structure are not always enough to accurately identify a document: the document may be misplaced, it may have the wrong name, or you may need to quickly assess the differences between two possible copies of it. Currently, you have to open them in a browser to do this. Neither the thumbnail display nor the document preview in the Windows File Explorer sidebar works.
### Supporting information
File extensions: `gdoc`, `gdraw`, `gform`, `gmap`, `gsheet`, `gsite`, and `gslides`. | Idea-Enhancement,Product-Peek | low | Minor |
2,522,311,048 | flutter | Crash! When a Flutter Linux window is minimized or maximized, a crash occurs | ### Steps to reproduce
Minimize or maximize the window.
### Expected results
The Flutter desktop window on Linux should minimize or maximize without crashing.
### Actual results
On Linux, if you click to minimize or maximize the Flutter desktop window, there is a certain probability that it will crash.
### Code sample
flutter example code
### Screenshots or Video
https://github.com/user-attachments/assets/1a2168b0-ade3-451c-8ca8-720c951e4fab
### Logs
Segmentation fault (core dumped)
### Flutter Doctor output
[✓] Flutter (Channel main, 3.26.0-1.0.pre.97, on UnionTech OS Desktop 20 Home 5.10.0-amd64-desktop, locale zh_CN.UTF-8)
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
[!] Android Studio (not installed)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.1)
[✓] Connected device (1 available)
[✓] Network resources
! Doctor found issues in 3 categories.
| c: crash,platform-linux,a: desktop,P2,needs repro info,team-linux,triaged-linux | low | Critical |
2,522,325,452 | rust | using `super` in doctests errors | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
/// ```
/// type Y = ();
/// mod m {
/// use super::Y;
/// fn f() -> Y {}
/// }
/// ```
pub struct X;
```
in `lib.rs`.
I expected to see this happen: successful `cargo test`
Instead, this happened:
```text
cargo test
Compiling rust v0.0.0 (/tmp/tmp)
Finished `test` profile [unoptimized + debuginfo] target(s) in 0.13s
Running unittests src/lib.rs (target/debug/deps/rust-48a0867617020c3e)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests rust
running 1 test
test src/lib.rs - X (line 1) ... FAILED
failures:
---- src/lib.rs - X (line 1) stdout ----
error[E0432]: unresolved import `super::Y`
--> src/lib.rs:4:9
|
5 | use super::Y;
| ^^^^^^^^ no `Y` in the root
error: aborting due to 1 previous error
For more information about this error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
src/lib.rs - X (line 1)
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s
error: doctest failed, to rerun pass `--doc`
```
The same error happens when using `super::Y` directly in the return type.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (8d6b88b16 2024-09-11)
binary: rustc
commit-hash: 8d6b88b168e45ee1624699c19443c49665322a91
commit-date: 2024-09-11
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
``` | T-rustdoc,A-diagnostics,A-resolve,A-doctests,D-terse | low | Critical |
2,522,356,138 | godot | FBX linked mesh import with wrong origin [4.4.dev2] | ### Tested versions
- Reproducible in: 4.4.dev2
### System information
Windows 10 - v4.4.dev2.official [97ef3c837] - GLES 3.0.0 Angle (Compatibility) [Intel(R) HD Graphics (0x00000102) Direct3D 11 vs_4_1 ps_4_1, D3D11-9.17.10.4459]
### Issue description

(Left: .OBJ) (Right: .fbx)
When importing a model in FBX format that has a linked mesh, it only imports the original.

### Steps to reproduce
Just import a model in fbx format with some meshes linked into the project.
### Minimal reproduction project (MRP)
N/A | needs testing,topic:import | medium | Major |
2,522,363,440 | langchain | OpenAI AzureChatOpenAI doesn't support the new structured output capability even though BaseChatOpenAI does | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code doesn't work:
```python
class Person(BaseModel):
first_name: str
last_name: str
LLM = AzureChatOpenAI(
azure_deployment="gpt-4o-2024-08-06", api_version="2024-08-01-preview", temperature=0
).with_structured_output(schema=Person, method="json_schema", strict=True)
result = LLM.invoke(HumanMessage("Who was the first president of the United States?"))
```
### Error Message and Stack Trace (if applicable)
The exception received was:
```
Received unsupported arguments {'strict': True}
File "/home/goldbermg3/GitProjects/remis-pdf-utils/structured_out_test.py", line 13, in <module>
).with_structured_output(schema=Person, method="json_schema", strict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Received unsupported arguments {'strict': True}
```
### Description
I expected the new Structured Output capability (which is [available through Azure](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs?tabs=python-secure)) which is implemented on `BaseChatOpenAI` to work using `AzureChatOpenAI`. However, after browsing the code, I saw that the support for `method="json_schema"` and the `strict` argument is implemented only for `BaseChatOpenAI`; `AzureChatOpenAI` overrides `BaseChatOpenAI.with_structured_output` and does not include support for those arguments.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.35
> langchain: 0.2.15
> langsmith: 0.1.104
> langchain_openai: 0.1.23
> langchain_text_splitters: 0.2.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> httpx: 0.27.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.42.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.32
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | 🤖:bug,investigate | low | Critical |
2,522,476,371 | go | proposal: go/types: add Hash function | [Edit: this proposal is now just for the types.Hash function. The HashMap is [another proposal](https://github.com/golang/go/issues/69559).]
We propose to add the HashMap data type (a generic evolution of [golang.org/x/tools/go/types/typeutil.Map](https://pkg.go.dev/golang.org/x/tools/go/types/typeutil#Map)) to the standard `go/types` package, with the following API:
```go
package types // import "go/types"
// Hash computes a hash value for the given type t such that Identical(x, y) implies Hash(x) == Hash(y).
func Hash(t Type) uint64
```
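The required invariant — `Identical(x, y)` implies `Hash(x) == Hash(y)` — can be illustrated with a toy structural hash. This is purely illustrative Python, not the proposed Go implementation; types are modeled as nested tuples, with plain strings standing in for named types that bottom out the recursion:

```python
def hash_type(t):
    """Toy structural hash: equal structures always hash equally."""
    if isinstance(t, str):                  # named types end the recursion
        data = t.encode()
    else:                                   # combine component hashes
        data = b"".join(hash_type(p).to_bytes(4, "big") for p in t)
    h = 2166136261                          # FNV-1a over the bytes
    for byte in data:
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return h

# func(int) bool, constructed twice: structurally identical types
a = ("func", ("params", "int"), ("results", "bool"))
b = ("func", ("params", "int"), ("results", "bool"))
assert hash_type(a) == hash_type(b)         # Identical => equal hash
print(hex(hash_type(a)))
```

Because the recursion bottoms out at named types, hashing stays shallow for most real types, which is the same observation made in the notes below about typeutil.Hasher.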
Rescinded part of the proposal:
```go
// HashMap[V] is a mapping from Type to values of type V.
// Map keys (types) are considered equivalent if they are Identical.
// As with map[K]V, a nil *HashMap is a valid empty map.
type HashMap[V any] struct { ... }
// All returns an iterator over the key/value entries of the map in undefined order.
func (m *HashMap[V]) All() iter.Seq2[Type, V]
// At returns the map entry for the given key. It returns zero if the entry is not present.
func (m *HashMap[V]) At(key Type) V
// Delete removes the entry with the given key, if any. It returns true if the entry was found.
func (m *HashMap[V]) Delete(key Type) bool
// Keys returns an iterator over the map keys in undefined order.
func (m *HashMap[V]) Keys() iter.Seq[Type]
// KeysString returns a string representation of the map's key set in unspecified order.
func (m *HashMap[V]) KeysString() string
// Len returns the number of map entries.
func (m *HashMap[V]) Len() int
// Set updates the map entry for key to value, and returns the previous entry, if any.
func (m *HashMap[V]) Set(key Type, value V) (prev V)
// String returns a string representation of the map's entries in unspecified order.
// Values are printed as if by fmt.Sprint.
func (m *HashMap[V]) String() string
```
Some notes:
- This proposal supersedes #67161, to x/tools/go/types/typeutil.Map.
- It is generic, whereas typeutil.Map is not.
- It uses idiomatic Go 1.23 iterators.
- The typeutil.Hasher type and SetHasher method are not part of this proposal. They had [no performance advantage](https://github.com/golang/go/issues/69407), and the statefulness turned reads into writes. The hash recursion bottoms out at named types, so most types are shallow.
- There is still no way to distinguish "m[k] is zero" from "missing"; this has never been a problem in practice.
- The Hash function does not accept a seed and thus HashMap is vulnerable to flooding. But the type checker is vulnerable to various other performance problems if an attacker controls its input. | Proposal,Proposal-Hold | medium | Major |
2,522,524,357 | vscode | Font Rendering Bug Report: Misalignment of Unicode ` ̂` (U+0302) when using the JuliaMono font | **Description:**
There appears to be an issue with the rendering of the Unicode combining circumflex accent (` ̂`, U+0302) in Visual Studio Code (VSCode) when using certain fonts like [JuliaMono](https://github.com/cormullion/juliamono). The accent is misaligned, shifting rightward instead of being positioned directly on top of the base Unicode Latin characters.
**Example Code:**
```julia
ŷ::Int, 𝐲̂::String, 𝐲̂, 𝖸̂, 𝕐̂, 𝐱̂₁³, 𝑥̂, 𝑸̂, 𝒜̂, 𝓐̂, 𝔲̂
```
**Observed Behavior:**
In the example provided, the combining circumflex accent is visibly shifted to the right for characters like `𝐲̂`, `𝐱̂₁³`, `𝑥̂`, and `𝑸̂`, resulting in misalignment. The issue occurs consistently with combining accents above these extended Latin characters.
**Expected Behavior:**
The circumflex accent should be correctly aligned on top of the base Unicode Latin characters in VSCode, just as it renders correctly in other text editors and terminal emulators (e.g., CotEditor, iTerm) when using the JuliaMono font.
**Reproducibility:**
This issue is reproducible in Visual Studio Code with JuliaMono font, but does not occur in other editors or terminal emulators that also support the font.
**Steps to Reproduce:**
1. Apply the JuliaMono font in VSCode using `"editor.fontFamily": "'JuliaMono'"`.
2. Display the provided example code.
3. Observe the misalignment of the circumflex accent (` ̂`) over specific Unicode Latin characters.
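As a sanity check that the character really is a combining mark (and should therefore render over the base glyph rather than beside it), its Unicode properties can be inspected, e.g. in Python:

```python
import unicodedata

mark = "\u0302"  # COMBINING CIRCUMFLEX ACCENT
print(unicodedata.name(mark))        # COMBINING CIRCUMFLEX ACCENT
print(unicodedata.combining(mark))   # 230 = canonical combining class "Above"
print(unicodedata.normalize("NFC", "y" + mark))  # composes to "ŷ" (U+0177)
```

A combining class of 230 means renderers are expected to position the mark above the base character, which is what fails here for the mathematical alphanumeric base characters.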
**Screenshots:**
<img width="384" alt="image" src="https://github.com/user-attachments/assets/cdee2c42-3e9e-4c4d-b0f7-dd651ee938ad">
**Environment:**
- **Font:** [JuliaMono v0.056](https://github.com/cormullion/juliamono/releases/tag/v0.056)
- **Operating System:** macOS 14.6.1 (23G93)
- **Application:** Visual Studio Code (VSCode) Version: 1.93.0
```
Version: 1.93.0
Commit: 4849ca9bdf9666755eb463db297b69e5385090e3
Date: 2024-09-04T13:02:38.431Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0
```
**Additional Information:**
See https://github.com/cormullion/juliamono/issues/214 | bug,font-rendering,confirmation-pending | low | Critical |
2,522,621,872 | next.js | Unable to set request headers in middleware.js while returning i18nRouter | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Home
Available memory (MB): 16182
Available CPU cores: 8
Binaries:
Node: 18.17.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.6 // There is a newer version (14.2.10) available, upgrade recommended!
eslint-config-next: 14.2.4
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
⚠ There is a newer version (14.2.10) available, upgrade recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which example does this report relate to?
examples/app-dir-i18n-routing/middleware.ts
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
Using Ngnix for deployment
### Describe the Bug
I want to implement a Content Security Policy with nonces, but since I have internationalization applied, I am returning `i18nRouter`, which takes the default request object and the i18n config. I want to add the CSP and x-nonce headers, but I am unable to do so. Here is my current middleware.js file:
```js
import { i18nRouter } from "next-i18n-router";
import i18nConfig from "./i18n.config";
import { NextResponse } from "next/server";

export function middleware(request) {
  const nonce = Buffer.from(crypto.randomUUID()).toString("base64");
  const cspHeader = `
    default-src 'self';
    script-src 'self' 'nonce-${nonce}' 'strict-dynamic';
    style-src 'self' 'unsafe-inline';
    img-src 'self' blob: data:;
    font-src 'self';
    object-src 'none';
    base-uri 'self';
    form-action 'self';
    frame-src 'self' https://www.google.com https://www.gstatic.com;
    frame-ancestors 'none';
    upgrade-insecure-requests;
  `;
  // Replace newline characters and spaces
  const contentSecurityPolicyHeaderValue = cspHeader
    .replace(/\s{2,}/g, " ")
    .trim();

  // const requestHeaders = new Headers(request.headers);
  // requestHeaders.set("x-nonce", nonce);
  // requestHeaders.set(
  //   "Content-Security-Policy",
  //   contentSecurityPolicyHeaderValue
  // );
  // const response = NextResponse.next({
  //   request: {
  //     headers: requestHeaders,
  //   },
  // });
  // response.headers.set(
  //   "Content-Security-Policy",
  //   contentSecurityPolicyHeaderValue
  // );

  request.headers.set("x-nonce", nonce);
  request.headers.set(
    "Content-Security-Policy",
    contentSecurityPolicyHeaderValue
  );

  return i18nRouter(request, i18nConfig);
}

export const config = {
  matcher: ["/((?!api|_next/static|_next/image|images|icons|favicon.ico).*)"],
};
```
`request.headers` is read-only, and if I update the headers on the i18nResponse variable instead, then routing does not work. I tried numerous ways, but it remains an issue.
### Expected Behavior
Need to be able to update the headers while using i18nRouter.
### To Reproduce
Just need to update the request headers somehow in middleware.js file while using i18nRouter. | examples | low | Critical |
2,522,647,340 | vscode | Funtionality to Add Breakpoints for Searched Word | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
# Feature Description
I would like to request a feature in VSCode that allows users to add breakpoints to all occurrences of a searched word or phrase. Currently, users have to manually navigate through the code and add breakpoints one by one. This feature could streamline the debugging process, especially when needing to set breakpoints on several instances of the same variable, function, or keyword across a file or project.
# Proposed Functionality
- **Search for a word/phrase:** Users can use the existing search functionality to find the occurrences of a word or phrase in the current file or throughout the project.
- **Add breakpoints:** After performing the search, an option to "Add Breakpoints" should be available. This would:
  - Add breakpoints at every instance of the searched word/phrase within the current file or the entire project.
  - Optional — search scope: The feature could ask the user whether they want to add breakpoints only for occurrences in the current file or throughout the project.
# Benefits
- Saves time by automating the process of adding breakpoints to multiple instances of the same keyword.
- Enhances the debugging experience, especially in larger projects where a variable or function might be used across multiple files.
- Reduces the risk of human error when manually adding breakpoints.
## Example Use Case
When debugging an application, a developer may want to set breakpoints on every instance of a function call or variable to inspect its usage. Instead of navigating manually through the code, they can search for the function/variable and automatically set breakpoints for all occurrences with this feature.
## Conclusion
This feature would significantly improve the developer experience by simplifying the process of adding breakpoints across files and projects, making debugging in VSCode more efficient.
Thank you for considering this feature request! | feature-request,search | low | Critical |
2,522,649,403 | flutter | Command+Enter on Chrome causes Enter Key to be considered held down | ### Steps to reproduce
- Start the supplied code sample in Chrome
- Press CTRL/CMD-Enter.
- Press Shift-K
### Expected results
The configured action for the Shift-K shortcut should run and the text should update to "Shortcut Triggered!"
### Actual results
Nothing happens.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart'
show AppBar, MaterialApp, Scaffold, Theme;
import 'package:flutter/services.dart' show LogicalKeyboardKey;
import 'package:flutter/widgets.dart'
show
BuildContext,
CallbackAction,
Center,
FocusableActionDetector,
FractionallySizedBox,
Intent,
LogicalKeySet,
State,
StatefulWidget,
StatelessWidget,
Text,
Widget,
runApp;
void main() {
runApp(const MyApp());
}
class ShortCutIntent extends Intent {
const ShortCutIntent();
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return const MaterialApp(
title: 'Flutter Focus Bug',
home: MyHomePage(title: 'Flutter Focus Bug'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
var _text = "Press Shift + K to trigger shortcut";
_rootAction(_) {
setState(() {
_text = "Shortcut Triggered!";
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: FocusableActionDetector(
autofocus: true,
shortcuts: <LogicalKeySet, Intent>{
LogicalKeySet(LogicalKeyboardKey.shift, LogicalKeyboardKey.keyK):
const ShortCutIntent(),
},
actions: {
ShortCutIntent: CallbackAction(onInvoke: _rootAction),
},
child: Center(
child: FractionallySizedBox(
widthFactor: .8,
child: Center(
child:
Text(_text, style: Theme.of(context).textTheme.displaySmall),
),
),
),
),
);
}
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on macOS 14.6.1 23G93 darwin-arm64, locale en-US)
• Flutter version 3.24.2 on channel stable at /Users/matt/fvm/versions/3.24.2
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (9 days ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
• Android SDK at /Users/matt/Library/Android/sdk
• Platform android-34, build-tools 33.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.93.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.121
! Error: Browsing on the local area network for Christina Choriatis’s iPhone (2). Ensure the device is unlocked and attached with a cable or associated with the same local area
network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| a: text input,platform-web,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.26 | low | Critical |
2,522,708,871 | react-native | On iOS, TextInput does not update style if a copied-and-pasted string is longer than maxLength | ### Description
On iOS, the TextInput component does not properly update its style when a copied-and-pasted string is longer than `maxLength`; the issue does not happen on Android. In the minimal reproducible example, there is a simple TextInput that should change the text color to red if the text length is greater than or equal to 100 characters. If we copy and paste (using paste with long press) a string longer than 100 characters, the color does not update to red. If the user removes and re-adds the last character manually, we can see the color change correctly to red.
### Steps to reproduce
1. Run `yarn && npx pod-install && yarn ios`
2. Copy a string longer than 100 characters
3. Paste the string (using long press)
4. It might be necessary to repeat steps 2 and 3 one more time
5. Notice the color is still green and not red
### React Native Version
0.75.2
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (12) arm64 Apple M2 Pro
Memory: 91.03 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.12.1
path: ~/.nvm/versions/node/v18.12.1/bin/node
Yarn:
version: 3.6.4
path: ~/.nvm/versions/node/v18.12.1/bin/yarn
npm:
version: 8.19.2
path: ~/.nvm/versions/node/v18.12.1/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.15989.150.2411.11948838
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /Users/aureobeck/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.2
wanted: 0.75.2
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
No logs
```
### Reproducer
https://github.com/aureosouza/TextInputDemo
### Screenshots and Videos

| Platform: iOS,Issue: Author Provided Repro,Component: TextInput,Newer Patch Available | low | Major |
2,522,725,043 | godot | [4.4 dev2] Texture values are out of sync with the parameter in the visual shader | ### Tested versions
4.4 dev2
### System information
Godot v4.4.dev2 - Windows 10.0.19045 - OpenGL 3 (Compatibility) - Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 Threads)
### Issue description
I noticed that the values in the visual shader that are changed in the preview field are not synchronized with the nodes (for example Texture2D)

As you can see in the screenshot above, the Texture2D node has the Godot logo texture, but the right side has a different texture in the parameters.
### Steps to reproduce
1. Create a shader using the visual editor
2. Drag the texture
3. Replace this texture in the parameters on the right (where the preview is)
4. The texture has not changed in the Texture2D node
### Minimal reproduction project (MRP)
N/A | topic:editor,topic:shaders | low | Minor |
2,522,774,557 | flutter | Acquire devices with a Qualcomm chipset for the devicelab | Following the removal of the Moto G4 devices from the lab, Flutter's devicelab currently has no coverage of Qualcomm chipsets.
A trial of Samsung S21+ devices has been unsuccessful so far, as the devices arrived with either warped cases or swollen batteries. | team-infra,P2,triaged-infra | low | Minor |
2,522,797,570 | flutter | [Android] Determine if Android Performance Hint Manager is useful. | The Android performance hint manager class (https://developer.android.com/ndk/reference/group/a-performance-hint) allows us to configure various properties of our expected per-frame workloads to help the OS allocate resources more accurately (not sure exactly what this means; assuming fast/slow core scheduling and/or throttling).
The way this API works is that, for each thread, we can create a session, set the target work duration at the beginning, and report the actual work duration at the end.
In pseudocode:
```
var session = createSession(this_thread);
session.setTargetWorkDuration(Duration(..));
// Do Work
session.setActualWorkDuration(..);
```
However, we do not have accurate estimates of the target work duration because this work depends on completely arbitrary workloads created by SDK users. As a result we'd only ever be able to provide the actual work duration. But the OS already knows the actual work duration. | P3,team-engine,triaged-engine | low | Major |
2,522,817,414 | ollama | Attribute about model's tool use capability in model_info | In the current 'model_info' I'm missing a attribute that tells me that the model is capable of handling tool calls. One may check the template data for "$.Tools", which I find rather ugly. Therefore, I propose to add a an attribute like, e.g.:
```
general.supports_tool_calls: true
```
or similar attribute. If you plan to have different types of tool calls, or want to plan ahead for checking call compatibility you may want to add an attribute like:
```
general.tool_format: "1.0"
```
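A consumer-side sketch of how such an attribute could be used, falling back to today's template check (hypothetical field names and response shape, TypeScript for illustration):

```typescript
// Hypothetical shape of the relevant parts of a model-show response.
interface ShowResponse {
  template: string;
  model_info: Record<string, unknown>;
}

// Prefer the proposed explicit attribute; fall back to sniffing the template
// for a ".Tools" reference, which is the "rather ugly" check needed today.
function supportsToolCalls(resp: ShowResponse): boolean {
  const explicit = resp.model_info["general.supports_tool_calls"];
  if (typeof explicit === "boolean") return explicit;
  return resp.template.includes(".Tools");
}
```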
br,
Peter | feature request | low | Minor |
2,522,828,461 | deno | Support for miniflare on windows | Miniflare is a simulator for developing and testing [Cloudflare Workers](https://workers.cloudflare.com/), powered by [workerd](https://github.com/cloudflare/workerd).
This library is used by many frameworks as a Cloudflare adapter.
e.g.) SvelteKit, Astro, HonoX
Currently, the example written in [README](https://www.npmjs.com/package/miniflare#quick-start) does not work either.
```sh
PS miniflare> deno --version
deno 2.0.0-rc.2+0a4a8c7 (canary, release, x86_64-pc-windows-msvc)
v8 12.9.202.13-rusty
typescript 5.5.2
miniflare> me mini.ts
import { Miniflare } from "npm:miniflare";
// Create a new Miniflare instance, starting a workerd server
const mf = new Miniflare({
script: `addEventListener("fetch", (event) => {
event.respondWith(new Response("Hello Miniflare!"));
})`,
});
// Send a request to the workerd server, the host is ignored
const response = await mf.dispatchFetch("http://localhost:8787/");
console.log(await response.text()); // Hello Miniflare!
// Cleanup Miniflare, shutting down the workerd server
await mf.dispose();
PS miniflare> deno -A mini.ts
error: Uncaught (in promise) TypeError: undefined is not iterable (cannot read property Symbol(Symbol.iterator))
at Function.from (<anonymous>)
at Object.<anonymous> (file:///C:/Users/xxxx/AppData/Local/deno/npm/registry.npmjs.org/miniflare/3.20240821.1/dist/src/index.js:6744:64)
at Object.<anonymous> (file:///C:/Users/xxxx/AppData/Local/deno/npm/registry.npmjs.org/miniflare/3.20240821.1/dist/src/index.js:10197:4)
at Module._compile (node:module:735:34)
at Object.Module._extensions..js (node:module:756:11)
at Module.load (node:module:655:32)
at Function.Module._load (node:module:523:13)
at Module.require (node:module:674:19)
at require (node:module:800:16)
at file:///C:/Users/xxxx/AppData/Local/deno/npm/registry.npmjs.org/miniflare/3.20240821.1/dist/src/index.js:3:13
```
Related: https://github.com/denoland/deno/issues/25513 | bug,windows | low | Critical |
2,522,869,659 | flutter | [Impeller] quantize render target sizes. | For non-image filtered saveLayers, we should quantize the allocated render target size (always rounding up) to some step value N (256?) to improve the efficiency of the render target cache.
Consider an application with an animated expanding clip + opacity around a drawPaint. Today we will allocate a new render target on each frame that the clip changes, because we will tightly size the render target to the clip bounds. If we quantized to some value, then we would only need to re-allocate once that size difference passed some threshold.
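The rounding itself is one line; a sketch (TypeScript for illustration — the real change would live in Impeller's C++ render target cache):

```typescript
// Round a render target dimension up to the nearest multiple of `step`
// (e.g. 256). An animated clip growing from 300 to 500 px would then reuse
// a single 512-px target instead of reallocating on every frame.
function quantizeUp(size: number, step: number): number {
  return Math.ceil(size / step) * step;
}
```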
FYI @flar | P2,team-engine,triaged-engine | low | Minor |
2,522,898,667 | vscode | Piece tree's `nodeAt` can return `null` | Came across this while debugging a tree sitter issue where `nodeAt` is returning null.
https://github.com/microsoft/vscode/blob/e6e9919a3ba8e9c781b0c22d7a6c209a3ffecd5a/src/vs/editor/common/model/pieceTreeTextBuffer/pieceTreeBase.ts#L1527-L1528 | debt,editor-textbuffer | low | Critical |
2,522,918,425 | angular | Warning regarding "disabled" property of [formControl] family of directives refers to behavior thats specific to a subset of ControlValueAccessors, but shows up for all of them | ### Which @angular/* package(s) are the source of the bug?
forms
### Is this a regression?
No
### Description
I have a custom component implementing ControlValueAccessor that wraps a native `<input type="radio" />` so that I can provide a more custom design, which is a common task. With the way things are implemented in my case, the radio buttons of a group share the same formControl instance. The inner `<input type="radio" />` does NOT use RadioControlValueAccessor.
I give my custom component a 'disabled' Input, and I document that when this Input is disabled=true it takes precedence over the status of the form control (provided through [formControl], [formControlName], etc...); otherwise the form control's status is the one in effect. This works just fine for selectively disabling single radio buttons, yet I get the warning from https://github.com/angular/angular/blob/55e3a3a47cddefebc28af03ce58ece136d41a391/packages/forms/src/directives/reactive_directives/form_control_directive.ts#L94 of
> It looks like you're using the disabled attribute with a reactive form directive. If you set disabled to true
when you set up this control in your component class, the disabled attribute will actually be set in the DOM for
you. We recommend using this approach to avoid 'changed after checked' errors.
But that's not true, because that effect may not happen for ControlValueAccessors implemented outside Angular, unlike e.g. RadioControlValueAccessor; see
https://github.com/angular/angular/blob/55e3a3a47cddefebc28af03ce58ece136d41a391/packages/forms/src/directives/radio_control_value_accessor.ts#L244.
But that's not the accessor I am using anywhere in my code, so this warning still shows up, even though it's not related to my use case, where the behavior is well documented.
So now I have to:
1) Be stuck with the warning, even though it's a false positive
2) Use a name other than "disabled"
Could the warning be moved away from the [formControl] directives to the subclasses of BuiltInControlValueAccessor that actually perform the disabled-property writing instead?
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-x9t7mb?file=src%2Fmain.ts
Very simple empty implementation of ControlValueAccessor with a disabled input. Note the warning in console which is not actually relevant.
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
```true
@angular/forms: 17.0.9
Node v22.1.0
Npm 10.7.0
```
### Anything else?
_No response_ | area: forms | low | Critical |
2,522,924,669 | kubernetes | Allow operating on errors instead of logging them | At the moment, there is no option to avoid logging certain errors, which also prevents dependent repositories from handling those errors in a way that makes more sense on the user-facing side. IIUC, it makes sense to use [`HandleError`](https://github.com/kubernetes/apimachinery/blob/8d1258da8f386b809d312cdda316366d5612f54e/pkg/util/runtime/runtime.go#L98) for non-user-facing code paths, but not so much in cases where it only adds to the user's confusion, since [the caller had no way to deal](https://github.com/kubernetes/apimachinery/blob/v0.27.1/pkg/util/runtime/runtime.go#L113-L116) with these errors.
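The kind of API being asked for can be sketched as an injectable error handler (illustrative TypeScript, not client-go's actual API):

```typescript
// Let the caller decide what happens to an error, instead of the library
// logging it unconditionally.
type ErrorHandler = (err: Error) => void;

class Reflector {
  // The default preserves today's behavior: just log.
  constructor(private onError: ErrorHandler = (e) => console.error(e.message)) {}

  // Called when a previously-found resource goes "missing"; with a custom
  // handler the caller can suppress, rephrase, or act on the error.
  resourceMissing(name: string): void {
    this.onError(new Error(`resource ${name} went missing`));
  }
}
```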
***
A bit of context; at the moment, I'm using reflectors to update stores which are dynamic in nature, in the sense that these can be operated on to add or drop certain resources. This however, leads to `reflector.go` [logging](https://github.com/kubernetes/client-go/blob/bbdc95deee6fdda42bba28a631130978a67163bd/tools/cache/reflector.go#L148) whenever a previously-found resource goes "missing". Is there a possible workaround to this, or maybe something that I'm completely failing to see? | sig/api-machinery,lifecycle/stale,triage/needs-information,triage/accepted | medium | Critical |
2,522,931,833 | PowerToys | Always On Top: Show a border inside the window border | ### Description of the new feature / enhancement
Instead of showing the border around the pinned window, show the border inside the confines of the window. I use multiple monitors, and if a pinned window is maximized, the shown border is either pushed offscreen, or shown on the neighboring monitor.
### Scenario when this would be used?
On multiple monitors, if the pinned window is maximized, the border is shown outside the window, causing the border to be disassociated from the window itself, and displayed on a different monitor. Or, on a single monitor, the border is pushed offscreen, or only shown along the Windows taskbar, to whichever edge the taskbar is set.
### Supporting information
Browsing the code, it appears fairly simple to switch the border behavior, and only slightly more effort to add a configuration option letting the user choose whether the border is drawn inside or outside the window frame.
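The geometry change is small; a sketch of the two placements (hypothetical helpers, TypeScript for illustration — the real code is C++ in WindowBorder.cpp):

```typescript
interface Rect { left: number; top: number; right: number; bottom: number; }

// Current behavior: the border frame extends beyond the window, which is
// what pushes it offscreen or onto a neighboring monitor when maximized.
function outerBorderRect(win: Rect, thickness: number): Rect {
  return { left: win.left - thickness, top: win.top - thickness,
           right: win.right + thickness, bottom: win.bottom + thickness };
}

// Proposed behavior: draw the border just inside the window bounds, so a
// maximized window keeps its border fully visible on its own monitor.
function innerBorderRect(win: Rect, thickness: number): Rect {
  return { left: win.left + thickness, top: win.top + thickness,
           right: win.right - thickness, bottom: win.bottom - thickness };
}
```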
Mainly in GetFrameRect, here:
https://github.com/microsoft/PowerToys/blob/7640258c10e266b10e4d2e71c0120bc0a2a9882a/src/modules/alwaysontop/AlwaysOnTop/WindowBorder.cpp#L18 | Needs-Triage | low | Minor |
2,522,964,587 | PowerToys | Power Toys Quick Accent Stops the GPU | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Quick Accent
### Steps to reproduce
When I open a game to play, the Nvidia Control Panel alerts me that the computer is still using the integrated graphics ("Optimus" mode) rather than switching to the dedicated graphics. The Nvidia Control Panel tells me to stop Quick Accent, as it is preventing the computer from using the dedicated graphics. Only after closing ("exiting") the PowerToys program through the system tray icon can the computer switch to using the dedicated graphics mode.
### ✔️ Expected Behavior
I expect to be able to play a game without having to exit PowerToys through the system tray.
### ❌ Actual Behavior
I have to exit PowerToys through the system tray before I play a game. This is bothersome because I sometimes play in the morning before heading to class where I need the Quick Accent and PowerToys Run features.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,522,971,705 | PowerToys | Can't enable OneNote add-on in PowerToys Run | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Enable the OneNote add-on
### ✔️ Expected Behavior
I should be able to enable the plugin, whose description reads: "Searches your local OneNote notebooks. This plugin requires the OneNote desktop app which is included in Microsoft Office"
### ❌ Actual Behavior
It shows an error

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,522,995,930 | vscode | Multifile diff editor leaks models and editor after closing and hiding |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.93.0 (Universal)
- OS Version: OS: Darwin arm64 23.6.0
Steps to Reproduce:
1. Make `Refactor Preview` appear (e.g. open some file and try some of `Refactor...` context actions)
2. Close the preview editor via `Discard` button
3. Call `vscode.window.visibleTextEditors`
4. It contains the `vscode-bulkeditpreview-editor://...` editor that corresponds to the `Refactor Preview` we just closed **as well as the file** that refactor preview was referencing
Repro and demo:
https://github.com/user-attachments/assets/4418fe28-9a5f-499a-baae-44219805d642
| bug,freeze-slow-crash-leak,ux,workspace-edit | low | Critical |
2,523,009,549 | flutter | Mac builders are failing (timing out) on `find_sdk` | [Example failure](https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8737042303369595825/+/u/gn_--target-dir_ci_host_debug_framework_--runtime-mode_debug_--no-lto_--prebuilt-dart-sdk_--build-embedder-examples_--use-glfw-swiftshader_--no-rbe_--no-goma/stdout):
```txt
Using prebuilt Dart SDK binary. If you are editing Dart sources and wish to compile the Dart SDK, set `--no-prebuilt-dart-sdk`.
Generating GN files in: out/ci/host_debug_framework
ERROR at //build/config/mac/mac_sdk.gni:41:7: Script returned non-zero exit code.
exec_script("//build/mac/find_sdk.py", find_sdk_args, "list lines")
^----------
Current dir: /Volumes/Work/s/w/ir/cache/builder/src/out/ci/host_debug_framework/
Command: vpython3 /Volumes/Work/s/w/ir/cache/builder/src/build/mac/find_sdk.py --print_sdk_path 10.14
Returned 1.
stderr:
...
subprocess.TimeoutExpired: Command '['xcodebuild', '-showsdks', '-json']' timed out after 300 seconds
```
This is not a 1-off:
- https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20Production%20Engine%20Drone/453851/overview
- https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20Production%20Engine%20Drone/453292/overview
- https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20Production%20Engine%20Drone/447147/overview
- https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20Production%20Engine%20Drone/443529/overview | engine,team-infra,P1,triaged-infra | medium | Critical |
2,523,050,583 | deno | deno serve --watch-hmr does not emit hmr event | Version: Deno 2.0.0-rc.2
The "hmr" event does not seem to get emitted when the `--watch-hmr` flag is used in combination with `deno serve`. In combination with `deno run` is works as expected.
## Example
Running the code below with `deno serve --watch-hmr main.ts` and making a change will log:
```ts
// HMR File change detected! Restarting!
// deno serve: Listening on http://0.0.0.0:8000/
addEventListener("hmr", (e) => console.log((e as CustomEvent).detail));
export default {
fetch(_request: Request) {
return new Response("Hi Mom!");
},
} satisfies Deno.ServeDefaultExport;
```
Running the more or less equivalent code below with `deno run --watch-hmr --allow-net main.ts` will log:
```ts
// HMR File change detected! Restarting!
// Listening on http://0.0.0.0:8000/
// { path: "file:///home/jtrns/tmp/deno-hmr-serve/main.ts" }
// HMR Replaced changed module file:///home/jtrns/tmp/deno-hmr-serve/main.ts
addEventListener("hmr", (e) => console.log((e as CustomEvent).detail));
Deno.serve((_request) => {
return new Response("Hi Mom!");
});
```
| bug,--watch,serve | low | Minor |