id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,665,055,234 | neovim | Segfault when new window is created inside WinLeave event callback | ### Problem
If a new window is created in a WinLeave event callback while the current tab page is being closed, Neovim segfaults with the following stack trace:
```
Program received signal SIGSEGV, Segmentation fault.
0x0000559beb72f584 in frame2win (frp=0x0) at /home/apollo1321/neovim/src/nvim/window.c:3474
3474 while (frp->fr_win == NULL) {
(gdb) bt
#0 0x0000559beb72f584 in frame2win (frp=0x0) at /home/apollo1321/neovim/src/nvim/window.c:3474
#1 0x0000559beb72ec33 in winframe_find_altwin (win=0x559becc7d780, dirp=0x7ffe293b5048, tp=0x559becb5c7b0, altfr=0x7ffe293b4f98) at /home/apollo1321/neovim/src/nvim/window.c:3211
#2 0x0000559beb72e9f6 in winframe_remove (win=0x559becc7d780, dirp=0x7ffe293b5048, tp=0x559becb5c7b0, unflat_altfr=0x0) at /home/apollo1321/neovim/src/nvim/window.c:3148
#3 0x0000559beb72e94d in win_free_mem (win=0x559becc7d780, dirp=0x7ffe293b5048, tp=0x559becb5c7b0) at /home/apollo1321/neovim/src/nvim/window.c:3072
#4 0x0000559beb72e8ac in win_close_othertab (win=0x559becc7d780, free_buf=0, tp=0x559becb5c7b0) at /home/apollo1321/neovim/src/nvim/window.c:3050
#5 0x0000559beb72d9ec in close_last_window_tabpage (win=0x559becc7d780, free_buf=false, prev_curtab=0x559becb5c7b0) at /home/apollo1321/neovim/src/nvim/window.c:2609
#6 0x0000559beb72dd9b in win_close (win=0x559becc7d780, free_buf=false, force=false) at /home/apollo1321/neovim/src/nvim/window.c:2711
#7 0x0000559beb4d1a03 in ex_quit (eap=0x7ffe293b5280) at /home/apollo1321/neovim/src/nvim/ex_docmd.c:4776
#8 0x0000559beb4c9970 in execute_cmd0 (retv=0x7ffe293b5238, eap=0x7ffe293b5280, errormsg=0x7ffe293b5250, preview=false) at /home/apollo1321/neovim/src/nvim/ex_docmd.c:1714
#9 0x0000559beb4cb829 in do_one_cmd (cmdlinep=0x7ffe293b54a0, flags=0, cstack=0x7ffe293b55b0, fgetline=0x559beb4e47c7 <getexline>, cookie=0x0) at /home/apollo1321/neovim/src/nvim/ex_docmd.c:2358
#10 0x0000559beb4c730b in do_cmdline (cmdline=0x0, fgetline=0x559beb4e47c7 <getexline>, cookie=0x0, flags=0) at /home/apollo1321/neovim/src/nvim/ex_docmd.c:667
#11 0x0000559beb5ce4ab in nv_colon (cap=0x7ffe293b5c20) at /home/apollo1321/neovim/src/nvim/normal.c:3191
#12 0x0000559beb5c9f0e in normal_execute (state=0x7ffe293b5bb0, key=58) at /home/apollo1321/neovim/src/nvim/normal.c:1243
#13 0x0000559beb6a98c5 in state_enter (s=0x7ffe293b5bb0) at /home/apollo1321/neovim/src/nvim/state.c:102
#14 0x0000559beb5c806d in normal_enter (cmdwin=false, noexmode=false) at /home/apollo1321/neovim/src/nvim/normal.c:521
#15 0x0000559beb56b798 in main (argc=2, argv=0x7ffe293b5fb8) at /home/apollo1321/neovim/src/nvim/main.c:660
```
Related to https://github.com/nvim-treesitter/nvim-treesitter-context/issues/522
### Steps to reproduce
```
cat > init.lua << EOF
vim.api.nvim_create_autocmd({ "WinLeave" }, {
callback = function()
local buf = vim.api.nvim_create_buf(false, true)
vim.api.nvim_open_win(buf, false, {
win = 0,
relative = "win",
row = 10,
col = 2,
width = 12,
height = 3,
border = "single",
})
end
})
EOF
nvim --clean -c "source init.lua"
:tab split
:q
```
### Expected behavior
No segfault
### Nvim version (nvim -v)
v0.11.0-dev-29ded8895
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Linux 6.8.0-48-generic
### Terminal name/version
iTerm2 3.5.3
### $TERM environment variable
xterm-256color
### Installation
build from repo | has:backtrace,bug-crash,floatwin,events | low | Critical |
2,665,055,979 | vscode | Request: Git reset from the CLI should not remove files from the "Go to File..." dialog's "recently opened" section | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I consider this to be a minor UX improvement.
Steps to Reproduce:
1. Create a new VSCode project in a git repo
1. Create a bunch of files with similar names. Open them all so they show up in the "Go to File..." dialog's "recently opened" section
1. Commit
1. Create a new directory in VSCode
1. Drag a file from step 2 to the new directory
1. Open a terminal to your project root
1. Run `git add .; git reset --hard`
1. Focus a different VS Code tab
1. Search for the original file with the "Go to File..." dialog (Ctrl+P)
### Expected:
The original file appears in the "recently opened" section
### Actual:
The file does not appear in the "recently opened" section
### Workaround:
You can still find the file in VS Code explorer. If you open it from there, the file will appear in the "Go to File..." dialog's "recently opened" section
## Note:
1. Restarting VS Code does not make a difference
1. If you change step 8 to click on the deleted tab, the deleted file (i.e., the file with an invalid path) will appear in the "Go to File..." dialog thereafter.
| feature-request,workbench-history | low | Minor |
2,665,060,251 | terminal | Windows Server 2025: Windows Terminal: Crash in OpenConsole.exe (0xc0000094, division by zero) causes programs to fail with 0xC0000142. How to detect and work around this Windows Terminal version? | ### Windows Terminal version
1.18.2401
### Windows build number
10.0.26100.2314
### Other Software
Bitvise SSH Server 9.39 installer
Bitvise SSH Client 9.39 installer
### Steps to reproduce
This problem is frequently observed by users of Bitvise software who attempt to install it on Windows Server 2025 before Windows Terminal has updated to a more recent version. The problem has also been reported on Windows 11.
To reproduce:
Issue version 1:
- Ensure the installed Windows Terminal version is the one that ships with the OS (e.g. version 1.18 for Windows Server 2025)
- Download Bitvise SSH Server installer: https://bitvise.com/ssh-server-download
- Open Command Prompt in Windows Terminal with administrative rights
- Run "BvSshServer-Inst.exe -?" for command-line help
- Observe that the Windows Terminal window closes abruptly
- Observe that events from "Application Error" and "Windows Error Reporting" appear in the Windows Event Log
- The error events show OpenConsole.exe crashing with exception code 0xc0000094 (integer division by zero)
Issue version 2:
- Run Bitvise SSH Server installer from Windows File Explorer
- The installer is a console application with a graphical UI. A Windows Terminal window opens in the background for the console output
- Attempt to install the SSH Server
- Installation fails when the installer tries to run BvSshServer.exe to register the Windows service
- Error code from BvSshServer.exe is 0xC0000142, corresponding to DLL initialization issue
- Observe that events from "Application Error" and "Windows Error Reporting" appear in the Windows Event Log
- The error events show OpenConsole.exe crashing with exception code 0xc0000094 (integer division by zero)
Both problems appear to resolve themselves after Windows Terminal auto-updates to a newer version. This auto-update appears to happen unpredictably in the background, without notice to the user that something has changed.
After Windows Terminal auto-updates to 1.21.2911, the installers work - the OpenConsole crash no longer occurs.
Since Windows Server 2025 comes bundled with Windows Terminal 1.18, and the Windows Terminal version frequently does NOT update before the user attempts to install Bitvise software:
- We need a way to detect the application is running under Windows Terminal
- We need a way to detect the version of Windows Terminal, so that issues in the version that ships with the OS can be avoided
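For the first of these needs, a heuristic that is commonly used in practice, though not documented as a stable contract, is the `WT_SESSION` environment variable that Windows Terminal sets for processes it hosts. A minimal sketch (the variable is inherited by child processes, so it can yield false positives, and it carries no version information):

```python
import os

def running_under_windows_terminal() -> bool:
    """Heuristic: Windows Terminal sets WT_SESSION (a session GUID) in the
    environment of processes it spawns. The variable is undocumented and is
    inherited by child processes, so treat a hit as a hint, not a guarantee.
    It also says nothing about the Windows Terminal version."""
    return "WT_SESSION" in os.environ

print(running_under_windows_terminal())
```

Version detection, the second need, has no comparable environment-variable signal, which is part of what this issue asks for.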
### Minimal Reproducible Example
We have identified the cause of the crash in Windows Terminal 1.18 and wrote a trivial C++ program which reproduces it reliably:
[TestConsoleBuf.txt](https://github.com/user-attachments/files/17792233/TestConsoleBuf.txt)
Conditions for the crash are set up if a program calls `SetConsoleScreenBufferSize` to enlarge the screen buffer height. The crash occurs when another program is run in the same terminal window.
This is fixed in newer Windows Terminal versions, but Windows Terminal 1.18 ships with Windows Server 2025. It is not necessarily updated before the user tries to run software in it.
It would be useful to have a documented mechanism to:
(A) detect that the program runs under Windows Terminal, and
(B) detect the Windows Terminal version.
In the absence of such a feature, our current workaround is to detect that screen buffer height == window height, and avoid enlarging the screen buffer height in this circumstance. We assume that problems might arise if the heights are equal and become unequal. | Issue-Bug,Product-Terminal,Needs-Tag-Fix | low | Critical |
2,665,101,944 | material-ui | Extending the outer theme with a nested ThemeProvider throws an error if css variables are active | ### Steps to reproduce
Steps:
1. Open this link to live example: (required) https://codesandbox.io/p/sandbox/exciting-rhodes-zk9kwk
2. Enable css variables in the outer ThemeProvider
3. Nest a ThemeProvider within the outer ThemeProvider
4. Attempt to extend the outer theme following the guidance in [Theming](https://mui.com/material-ui/customization/theming/#nesting-the-theme) (the codesandbox is adapted straight from that example)
5. When `createTheme` is called with the outer theme spread into it, an error is thrown. This seems to be because the outer theme has a `vars` field, which is not allowed as an input to `createTheme`.
### Current behavior
The following error is thrown:
```
MUI: `vars` is a private field used for CSS variables support.
Please use another name.
```
### Expected behavior
An error should not be thrown, and it should be possible to extend the outer theme even when CSS variables are enabled.
### Context
We're trying to extend the outer theme with additional styles, as described in the docs.
I'm happy to raise a PR to address this issue, once a fix has been agreed.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: Extending theme | docs,customization: theme | low | Critical |
2,665,120,939 | godot | GLTF Root Scale does not apply to Collision Shapes generated from Meshes | ### Tested versions
- Reproducible in: 4.4.dev4.mono [36e6207bb]
### System information
Godot v4.4.dev4.mono - macOS 15.2.0 - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M1 (Apple7) - Apple M1 (8 threads)
### Issue description
Using the `Create Collision Shape...` button
<img width="235" alt="Screenshot 2024-11-17 at 01 17 20" src="https://github.com/user-attachments/assets/5a9db141-ba6d-420e-bea7-9793df8d495b">
On a mesh imported from GLTF with a modified root scale, the collision shape will be generated at the original scale of the GLTF.
### Steps to reproduce
1. Import a GLTF and modify the root scale
2. Generate a collision shape from a selected mesh in the GLTF
3. Collision shape will be wrong size
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/17788321/mrp.zip)
| bug,topic:import,topic:3d | low | Minor |
2,665,122,809 | deno | Deno v2 throws error on simple Mysql2 snippet | Deno Version: Deno 2.0.1.
Trying to run the following snippet from the Mysql2 [documentation](https://sidorares.github.io/node-mysql2/docs):
```typescript
// Get the client
import mysql from "mysql2/promise";
// Create the connection to database
const connection = await mysql.createConnection({
host: "localhost",
user: "me"
});
// A simple SELECT query
try {
const [results, fields] = await connection.query("SELECT 1 + 1");
console.log(results); // results contains rows returned by server
console.log(fields); // fields contains extra meta data about results, if available
} catch (err) {
console.log(err);
}
```
Throws the following:
```text
error: Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'setNoDelay')
at TCP.setNoDelay (ext:deno_node/internal_binding/tcp_wrap.ts:229:28)
at Socket.setNoDelay (node:net:595:20)
at new Connection (file:///Users/andres/Library/Caches/deno/npm/registry.npmjs.org/mysql2/3.11.4/lib/connection.js:64:21)
at Object.exports.createConnection (file:///Users/andres/Library/Caches/deno/npm/registry.npmjs.org/mysql2/3.11.4/index.js:10:10)
at Object.createConnection (file:///Users/andres/Library/Caches/deno/npm/registry.npmjs.org/mysql2/3.11.4/promise.js:252:31)
at file:///Users/andres/.../test/mysql2.ts:7:32
```
| needs info | low | Critical |
2,665,130,883 | godot | Sub-window graphics issues with Forward+ on Raspberry Pi | ### Tested versions
- Reproducible in 4.3-stable
### System information
Godot v4.3.stable - Debian GNU/Linux 12 (bookworm) 12 - Wayland - Vulkan (Forward+) - integrated V3D 7.1.7 - ARM Cortex-A76 (4 Threads)
### Issue description
Whenever I have a 3d scene open (even a blank one), the menus and sub-windows have a graphical glitch. I've attached a couple images.




### Steps to reproduce
Absolutely no specific project is necessary. Just start a 3d scene and open a menu or try to add a node.
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:porting | low | Minor |
2,665,159,981 | godot | The material of a CSGShape3D as child of a CSGCombiner3D does not obey local triplanar coordinates | ### Tested versions
- Reproducible in: v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - macOS 14.7.1 - Vulkan (Forward+) - integrated Apple M1 Pro - Apple M1 Pro (10 Threads)
### Issue description
When a material with local triplanar mapping enabled is assigned to a `CSGShape3D` that is a child of a `CSGCombiner3D`, the material does not stay in place when the shape is moved relative to the combiner; it stays in place only when the combiner itself is moved. This issue does not arise if no `CSGCombiner3D` is present.
### Steps to reproduce
Create a new 3D scene with a `CSGCombiner3D` and a `CSGShape3D` (e.g. a `CSGBox3D`) as its child. Assign a `StandardMaterial3D` to the `CSGShape3D` with a placeholder texture and (local) triplanar coordinates enabled. Move the shape around and observe that the texture does not stay in place. Compare to moving around the combiner itself, or the same shape but not as child of a `CSGCombiner3D`.
### Minimal reproduction project (MRP)
[triplanar-csgcombiner-mrp.zip](https://github.com/user-attachments/files/17788632/triplanar-csgcombiner-mrp.zip)
| bug,topic:3d | low | Minor |
2,665,160,985 | pytorch | inconsistency in ```torch.special.xlogy``` on CPU and GPU | ### 🐛 Describe the bug
inconsistent results of ```torch.special.xlogy``` function on CPU and GPU
```python
import torch
self = torch.tensor([[-0.6914]], dtype=torch.bfloat16)
other = torch.tensor([[3070.0]], dtype=torch.bfloat16)
result_cpu = torch.special.xlogy(self, other)
self_cuda = self.cuda()
other_cuda = other.cuda()
result_gpu = torch.special.xlogy(self_cuda, other_cuda)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu)
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-02, rtol=1e-03)
print("Inconsistency with atol=1e-02 and rtol=1e-03:", inconsistent)
```
outputs:
```
CPU result:
tensor([[-5.5312]], dtype=torch.bfloat16)
GPU result:
tensor([[-5.5625]], device='cuda:0', dtype=torch.bfloat16)
Inconsistency with atol=1e-02 and rtol=1e-03: True
```
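The size of the gap can be reproduced without torch: near 5.5, adjacent bfloat16 values are one ulp = 2^2 · 2^-7 = 0.03125 apart, which already exceeds the atol=1e-02 used above, so any one-ulp disagreement between backends trips the check. A dependency-free sketch, assuming round-to-nearest-even bfloat16 conversion:

```python
import math
import struct

def to_bfloat16(x: float) -> float:
    """Round a float to the nearest bfloat16 (round-half-to-even),
    returned as an ordinary float for easy inspection."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

x = to_bfloat16(-0.6914)  # stored as -0.69140625
y = to_bfloat16(3070.0)   # 3070 is not representable; stored as 3072.0
exact = x * math.log(y)   # double-precision reference, about -5.552

# -5.5312 (CPU) and -5.5625 (GPU) are *adjacent* bfloat16 values
# bracketing the exact answer; one ulp at this magnitude is 0.03125 > atol.
print(exact)
print(to_bfloat16(exact))                           # correctly rounded: -5.5625
print(to_bfloat16(-5.5625) - to_bfloat16(-5.5312))  # one ulp apart
```

So the GPU value is the correctly rounded result and the CPU value is one ulp away. One plausible explanation, not verified against the kernels: the CPU path rounds the intermediate log to bfloat16 (ln 3072 ≈ 8.030 → 8.0 in bfloat16, and -0.69140625 × 8.0 = -5.53125 exactly), while the GPU computes in higher precision and rounds once at the end. Either way, with bfloat16 outputs, an atol smaller than one ulp of the result's magnitude will flag even a correctly rounded answer as inconsistent.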
### Versions
(executed on google colab)
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.0+cu121
[pip3] torchaudio==2.5.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu121
[conda] Could not collect | module: numerical-stability,triaged,module: bfloat16 | low | Critical |
2,665,164,695 | pytorch | Build fails in py311 with torch xpu ops | ### 🐛 Describe the bug
Building wheel torch-2.6.0a0+gitb86b534
-- Building version 2.6.0a0+gitb86b534
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\pytorch\torch -DCMAKE_PREFIX_PATH=E:\ProgramData\anaconda3\envs\py311\Lib\site-packages;D:\Intel\oneAPI\tbb\latest\env\..;D:\Intel\oneAPI\pti\latest\env\..\lib\cmake\pti;D:\Intel\oneAPI\ipp\latest\lib\cmake\ipp;D:\Intel\oneAPI\dpl\latest\lib\cmake\oneDPL;D:\Intel\oneAPI\dnnl\latest\env\..\lib\cmake;D:\Intel\oneAPI\dal\latest;D:\Intel\oneAPI\compiler\latest; -DPython_EXECUTABLE=E:\ProgramData\anaconda3\envs\py311\python.exe -DTORCH_BUILD_VERSION=2.6.0a0+gitb86b534 -DUSE_CUDA=0 -DUSE_INTEL_LLVM=0 -DUSE_NUMPY=True E:\pytorch
-- The CXX compiler identification is MSVC 19.42.34433.0
-- The C compiler identification is MSVC 19.42.34433.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
CMake Warning at CMakeLists.txt:421 (message):
TensorPipe cannot be used on Windows. Set it to OFF
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Success
-- Performing Test C_HAS_AVX512_1
-- Performing Test C_HAS_AVX512_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Success
-- Performing Test CXX_HAS_AVX512_1
-- Performing Test CXX_HAS_AVX512_1 - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Compiler does not support SVE extension. Will not build perfkernels.
-- Performing Test HAS/UTF_8
-- Performing Test HAS/UTF_8 - Success
Intel(R) oneAPI DPC++/C++ Compiler for applications running on Intel(R) 64, Version 2024.1.4 Build 20240802
Copyright (C) 1985-2024 Intel Corporation. All rights reserved.
CMake Warning (dev) at E:/ProgramData/anaconda3/envs/py311/Library/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
The package name passed to `find_package_handle_standard_args` (SYCL) does
not match the name of the calling package (SYCLToolkit). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindSYCLToolkit.cmake:125 (find_package_handle_standard_args)
cmake/public/xpu.cmake:12 (find_package)
cmake/Dependencies.cmake:93 (include)
CMakeLists.txt:859 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found SYCL: D:/Intel/oneAPI/compiler/latest/include;D:/Intel/oneAPI/compiler/latest/include/sycl (found version "20240104")
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
--
-- 3.13.0.0
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- Looking for cblas_gemm_bf16bf16f32
-- Looking for cblas_gemm_bf16bf16f32 - found
-- Looking for cblas_gemm_f16f16f32
-- Looking for cblas_gemm_f16f16f32 - found
-- MKL libraries: D:/Intel/oneAPI/mkl/latest/lib/mkl_intel_lp64_dll.lib;D:/Intel/oneAPI/mkl/latest/lib/mkl_intel_thread_dll.lib;D:/Intel/oneAPI/mkl/latest/lib/mkl_core_dll.lib;D:/Intel/oneAPI/compiler/latest/lib/libiomp5md.lib
-- MKL include directory: D:/Intel/oneAPI/mkl/latest/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: D:/Intel/oneAPI/compiler/latest/lib/libiomp5md.lib
-- The ASM compiler identification is MSVC
-- Found assembler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- Using ccache: E:/ProgramData/anaconda3/envs/py311/bin/ccache.exe
-- Building for XNNPACK_TARGET_PROCESSOR: x86_64
-- Found Python: E:\ProgramData\anaconda3\envs\py311\python.exe (found version "3.11.10") found components: Interpreter
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.41.0.windows.3")
-- git version: v1.6.1 normalized to 1.6.1
-- Version: 1.6.1
-- Looking for shm_open in rt
-- Looking for shm_open in rt - not found
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Found PythonInterp: E:/ProgramData/anaconda3/envs/py311/python.exe (found version "3.11.10")
-- Performing Test COMPILER_SUPPORTS_AVX512
-- Performing Test COMPILER_SUPPORTS_AVX512 - Success
-- Check OMP with lib D:/Intel/oneAPI/compiler/latest/lib/libiomp5md.lib and flags -openmp:experimental
-- Check OMP with lib D:/Intel/oneAPI/compiler/latest/lib/libiomp5md.lib and flags -openmp:experimental
CMake Warning (dev) at E:/ProgramData/anaconda3/envs/py311/Library/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:136 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp:experimental
CMake Warning (dev) at E:/ProgramData/anaconda3/envs/py311/Library/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:136 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -openmp:experimental
-- Found OpenMP: TRUE
CMake Warning at third_party/fbgemm/CMakeLists.txt:138 (message):
OpenMP found! OpenMP_C_INCLUDE_DIRS =
CMake Warning at third_party/fbgemm/CMakeLists.txt:232 (message):
==========
CMake Warning at third_party/fbgemm/CMakeLists.txt:233 (message):
CMAKE_BUILD_TYPE = Release
CMake Warning at third_party/fbgemm/CMakeLists.txt:234 (message):
CMAKE_CXX_FLAGS_DEBUG is /Z7 /Ob0 /Od /RTC1 /bigobj
CMake Warning at third_party/fbgemm/CMakeLists.txt:235 (message):
CMAKE_CXX_FLAGS_RELEASE is /O2 /Ob2 /DNDEBUG /bigobj
CMake Warning at third_party/fbgemm/CMakeLists.txt:236 (message):
==========
** AsmJit Summary **
ASMJIT_DIR=E:/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=FALSE
ASMJIT_TARGET_TYPE=SHARED
ASMJIT_DEPS=
ASMJIT_LIBS=asmjit
ASMJIT_CFLAGS=
ASMJIT_PRIVATE_CFLAGS=-MP;-GF;-Zc:__cplusplus;-Zc:inline;-Zc:strictStrings;-Zc:threadSafeInit-;-W4
ASMJIT_PRIVATE_CFLAGS_DBG=-GS
ASMJIT_PRIVATE_CFLAGS_REL=-GS-;-O2;-Oi
-- Using third party subdirectory Eigen.
-- Found Python: E:\ProgramData\anaconda3\envs\py311\python.exe (found version "3.11.10") found components: Interpreter Development.Module NumPy
-- Using third_party/pybind11.
-- pybind11 include dirs: E:/pytorch/cmake/../third_party/pybind11/include
-- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS)
-- Using third_party/opentelemetry-cpp.
-- opentelemetry api include dirs: E:/pytorch/cmake/../third_party/opentelemetry-cpp/api/include
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
Reason given by package: MPI component 'Fortran' was requested, but language Fortran is not enabled.
CMake Warning at cmake/Dependencies.cmake:928 (message):
Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF
Call Stack (most recent call first):
CMakeLists.txt:859 (include)
-- Adding OpenMP CXX_FLAGS: -openmp:experimental
-- Will link against OpenMP libraries: D:/Intel/oneAPI/compiler/latest/lib/libiomp5md.lib
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'BUILD_BENCHMARK'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:35 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_NCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:36 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_RCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
-- MSVC detected
-- Set USE_REDIS OFF
-- Set USE_IBVERBS OFF
-- Set USE_NCCL OFF
-- Set USE_RCCL OFF
-- Set USE_LIBUV ON
-- Only USE_LIBUV is supported on Windows
-- Gloo build as SHARED library
Generated: E:/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: E:/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: E:/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
--
-- ******** Summary ********
-- CMake version : 3.18.2
-- CMake command : E:/ProgramData/anaconda3/envs/py311/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- C++ compiler version : 19.42.34433.0
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : E:\ProgramData\anaconda3\envs\py311\Lib\site-packages;D:\Intel\oneAPI\tbb\latest\env\..;D:\Intel\oneAPI\pti\latest\env\..\lib\cmake\pti;D:\Intel\oneAPI\ipp\latest\lib\cmake\ipp;D:\Intel\oneAPI\dpl\latest\lib\cmake\oneDPL;D:\Intel\oneAPI\dnnl\latest\env\..\lib\cmake;D:\Intel\oneAPI\dal\latest;D:\Intel\oneAPI\compiler\latest;
-- CMAKE_INSTALL_PREFIX : E:/pytorch/torch
-- CMAKE_MODULE_PATH : E:/pytorch/cmake/Modules
--
-- ONNX version : 1.17.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_DISABLE_STATIC_REGISTRATION : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_SHARED_LIBS :
-- BUILD_SHARED_LIBS : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Adding -DNDEBUG to compile flags
CMake Warning at cmake/Dependencies.cmake:1414 (message):
Not compiling with MAGMA. Suppress this warning with -DUSE_MAGMA=OFF.
Call Stack (most recent call first):
CMakeLists.txt:859 (include)
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for sbgemm_
-- Looking for sbgemm_ - not found
-- Found a library with LAPACK API (mkl).
disabling CUDA because NOT USE_CUDA is set
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- Will build oneDNN UKERNEL
-- MKLDNN_CPU_RUNTIME = OMP
-- DNNL_TARGET_ARCH: X64
-- DNNL_LIBRARY_NAME: dnnl
CMake Warning (dev) at E:/ProgramData/anaconda3/envs/py311/Library/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:55 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp:experimental
CMake Warning (dev) at E:/ProgramData/anaconda3/envs/py311/Library/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:55 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -openmp:experimental
-- Enabled testing coverage: CI
-- Enabled workload: TRAINING
-- Enabled primitives: ALL
-- Enabled primitive CPU ISA: ALL
-- Enabled primitive GPU ISA: ALL
-- Enabled GeMM kernels ISA: ALL
-- Primitive cache is enabled
-- Experimental functionality for ukernels is enabled
-- The ASM_MASM compiler identification is MSVC
-- Found assembler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/ml64.exe
-- Graph component is enabled
-- Graph compiler backend is disabled.
-- Found MKL-DNN: TRUE
-- {fmt} version: 11.0.2
-- Build type: Release
-- Using CPU-only version of Kineto
-- Configuring Kineto dependency:
-- KINETO_SOURCE_DIR = E:/pytorch/third_party/kineto/libkineto
-- KINETO_BUILD_TESTS = OFF
-- KINETO_LIBRARY_TYPE = static
INFO CUDA_SOURCE_DIR =
INFO ROCM_SOURCE_DIR =
INFO CUPTI unavailable or disabled - not building GPU profilers
-- Kineto: FMT_SOURCE_DIR = E:/pytorch/third_party/fmt
-- Kineto: FMT_INCLUDE_DIR = E:/pytorch/third_party/fmt/include
INFO CUPTI_INCLUDE_DIR = /extras/CUPTI/include
INFO ROCTRACER_INCLUDE_DIR = /include/roctracer
INFO DYNOLOG_INCLUDE_DIR = E:/pytorch/third_party/kineto/libkineto/third_party/dynolog/
INFO IPCFABRIC_INCLUDE_DIR = E:/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/
-- Configured Kineto (CPU)
-- Performing Test HAS/WD4624
-- Performing Test HAS/WD4624 - Success
-- Performing Test HAS/WD4068
-- Performing Test HAS/WD4068 - Success
-- Performing Test HAS/WD4067
-- Performing Test HAS/WD4067 - Success
-- Performing Test HAS/WD4267
-- Performing Test HAS/WD4267 - Success
-- Performing Test HAS/WD4661
-- Performing Test HAS/WD4661 - Success
-- Performing Test HAS/WD4717
-- Performing Test HAS/WD4717 - Success
-- Performing Test HAS/WD4244
-- Performing Test HAS/WD4244 - Success
-- Performing Test HAS/WD4804
-- Performing Test HAS/WD4804 - Success
-- Performing Test HAS/WD4273
-- Performing Test HAS/WD4273 - Success
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed
--
-- Use the C++ compiler to compile (MI_USE_CXX=ON)
--
-- Library base name: mimalloc
-- Version : 1.8
-- Build type : release
-- C++ Compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- Compiler flags : /Zc:__cplusplus
-- Compiler defines :
-- Link libraries : psapi;shell32;user32;advapi32;bcrypt
-- Build targets : static
--
-- Performing Test HAS_WDEPRECATED
-- Performing Test HAS_WDEPRECATED - Failed
-- don't use NUMA
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR)
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
disabling CUDA because USE_CUDA is set false
-- Found OpenSSL: E:/ProgramData/anaconda3/envs/py311/Library/lib/libcrypto.lib (found version "3.4.0")
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Found OpenMP_C: -openmp:experimental (found version "2.0")
-- Found OpenMP_CXX: -openmp:experimental (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed
-- Configuring build for SLEEF-v3.6.0
Target system: Windows-10.0.27744
Target processor: AMD64
Host system: Windows-10.0.27744
Host processor: AMD64
Detected C compiler: MSVC @ C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
CMake: 3.18.2
Make program: E:/ProgramData/anaconda3/envs/py311/Library/bin/ninja.exe
-- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL : 3.4.0
-- SDE : SDE_COMMAND-NOTFOUND
-- COMPILER_SUPPORTS_OPENMP : FALSE
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: E:/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: E:/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: E:/pytorch/build/aten/src/ATen/core/enum_tag.h
Intel(R) oneAPI DPC++/C++ Compiler for applications running on Intel(R) 64, Version 2024.1.4 Build 20240802
Copyright (C) 1985-2024 Intel Corporation. All rights reserved.
-- Found SYCL: D:/Intel/oneAPI/compiler/latest/include;D:/Intel/oneAPI/compiler/latest/include/sycl;D:/Intel/oneAPI/compiler/latest/include/sycl (found version "20240104")
1.0.029803
-- Compile Intel GPU AOT Targets for ats-m150,lnl-m,mtl-u,mtl-h
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:511 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/test/sycl/CMakeLists.txt:6 (sycl_add_executable)
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/test/sycl/CMakeLists.txt:16 (sycl_add_library)
Created symbolic link: E:\pytorch\third_party\torch-xpu-ops\yaml\templates <<===>> E:\pytorch\aten\src\ATen\templates
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/src/BuildOnWindows.cmake:84 (sycl_add_library)
third_party/torch-xpu-ops/src/CMakeLists.txt:19 (include)
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/src/BuildOnWindows.cmake:98 (sycl_add_library)
third_party/torch-xpu-ops/src/CMakeLists.txt:19 (include)
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/src/BuildOnWindows.cmake:112 (sycl_add_library)
third_party/torch-xpu-ops/src/CMakeLists.txt:19 (include)
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/src/BuildOnWindows.cmake:126 (sycl_add_library)
third_party/torch-xpu-ops/src/CMakeLists.txt:19 (include)
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/src/BuildOnWindows.cmake:140 (sycl_add_library)
third_party/torch-xpu-ops/src/CMakeLists.txt:19 (include)
CMake Error at third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:389 (file):
file does not recognize sub-command REAL_PATH
Call Stack (most recent call first):
third_party/torch-xpu-ops/cmake/Modules/FindSYCL.cmake:452 (SYCL_LINK_DEVICE_OBJECTS)
third_party/torch-xpu-ops/src/BuildOnWindows.cmake:154 (sycl_add_library)
third_party/torch-xpu-ops/src/CMakeLists.txt:19 (include)
-- Performing Test HAS_WNO_DEPRECATED_COPY
-- Performing Test HAS_WNO_DEPRECATED_COPY - Failed
Please install clang-format-12 before contributing to torch-xpu-ops!
-- Performing Test HAS_WNO_UNUSED_PRIVATE_FIELD
-- Performing Test HAS_WNO_UNUSED_PRIVATE_FIELD - Failed
-- Generating sources for unboxing kernels E:\ProgramData\anaconda3\envs\py311\python.exe;-m;torchgen.gen_executorch;--source-path=E:/pytorch/test/edge/../../test/edge;--install-dir=E:/pytorch/build/out;--tags-path=E:/pytorch/test/edge/../../aten/src/ATen/native/tags.yaml;--aten-yaml-path=E:/pytorch/test/edge/../../aten/src/ATen/native/native_functions.yaml;--use-aten-lib;--op-selection-yaml-path=E:/pytorch/test/edge/../../test/edge/selected_operators.yaml;--custom-ops-yaml-path=E:/pytorch/test/edge/../../test/edge/custom_ops.yaml
CMake Warning at CMakeLists.txt:1280 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.18.2
-- CMake command : E:/ProgramData/anaconda3/envs/py311/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- C++ compiler id : MSVC
-- C++ compiler version : 19.42.34433.0
-- Using ccache if found : OFF
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_X86_SIMD_SORT -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -DUSE_XPU
-- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;IDEEP_USE_MKL;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;FLASHATTENTION_DISABLE_ALIBI;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC
-- CMAKE_PREFIX_PATH : E:\ProgramData\anaconda3\envs\py311\Lib\site-packages;D:\Intel\oneAPI\tbb\latest\env\..;D:\Intel\oneAPI\pti\latest\env\..\lib\cmake\pti;D:\Intel\oneAPI\ipp\latest\lib\cmake\ipp;D:\Intel\oneAPI\dpl\latest\lib\cmake\oneDPL;D:\Intel\oneAPI\dnnl\latest\env\..\lib\cmake;D:\Intel\oneAPI\dal\latest;D:\Intel\oneAPI\compiler\latest;
-- CMAKE_INSTALL_PREFIX : E:/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 2.6.0
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_PYTHON : True
-- Python version : 3.11.10
-- Python executable : E:\ProgramData\anaconda3\envs\py311\python.exe
-- Python library : E:/ProgramData/anaconda3/envs/py311/libs/python311.lib
-- Python includes : E:/ProgramData/anaconda3/envs/py311/include
-- Python site-package : E:\ProgramData\anaconda3\envs\py311\Lib\site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : True
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- TRACING_BASED : OFF
-- USE_BLAS : 1
-- BLAS : mkl
-- BLAS_HAS_SBGEMM :
-- USE_LAPACK : 1
-- LAPACK : mkl
-- USE_ASAN : OFF
-- USE_TSAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : 0
-- USE_XPU : ON
-- SYCL compiler version : 20240104
-- SYCL include path : D:/Intel/oneAPI/compiler/latest/include;D:/Intel/oneAPI/compiler/latest/include/sycl
-- SYCL library : D:/Intel/oneAPI/compiler/latest/lib/sycl7.lib
-- USE_ROCM : OFF
-- BUILD_NVFUSER :
-- USE_EIGEN_FOR_BLAS :
-- USE_X86_SIMD_SORT : ON
-- USE_FBGEMM : ON
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LITE_PROTO : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : OFF
-- CAN_COMPILE_METAL :
-- USE_MKL : ON
-- USE_STATIC_MKL : OFF
-- USE_MKLDNN : ON
-- USE_MKLDNN_ACL : OFF
-- USE_MKLDNN_CBLAS : OFF
-- USE_UCC : OFF
-- USE_ITT : ON
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENMP : ON
-- USE_MIMALLOC : ON
-- USE_MIMALLOC_ON_MKL : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_PYTORCH_QNNPACK : OFF
-- USE_XNNPACK : ON
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_WITH_OPENSSL : OFF
-- USE_TENSORPIPE : OFF
-- Public Dependencies : caffe2::mkl
-- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;fbgemm;ittnotify;fp16;caffe2::openmp;gloo;fmt::fmt-header-only;kineto
-- Public CUDA Deps. :
-- Private CUDA Deps. :
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- USE_ROCM_KERNEL_ASSERT : OFF
-- Performing Test HAS_WMISSING_PROTOTYPES
-- Performing Test HAS_WMISSING_PROTOTYPES - Failed
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed
-- Configuring incomplete, errors occurred!
See also "E:/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "E:/pytorch/build/CMakeFiles/CMakeError.log".
### Versions
torch-2.6.0a0+gitb86b534
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey | module: build,module: windows,triaged,module: xpu | low | Critical |
2,665,198,535 | rust | Tracking Issue for String::into_chars | Feature gate: `#![feature(string_into_chars)]`
This is a tracking issue for:
* `String::into_chars`
### Public API
```rust
impl String {
pub fn into_chars(self) -> IntoChars {
IntoChars { bytes: self.into_bytes().into_iter() }
}
}
impl Iterator for IntoChars {
type Item = char;
}
```
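As a sketch of why an owning iterator is useful (hedged: `into_chars` itself is unstable behind `#![feature(string_into_chars)]`, so the call is only shown in a comment; `first_chars` is a hypothetical helper, not part of the API):

```rust
fn first_chars(s: String) -> Vec<char> {
    // Stable today: `chars()` borrows `s`, so we must collect before `s` is dropped.
    s.chars().collect()
    // With #![feature(string_into_chars)] one could instead return `s.into_chars()`
    // directly as an owning iterator, avoiding the intermediate Vec.
}

fn main() {
    let cs = first_chars(String::from("héllo"));
    assert_eq!(cs, vec!['h', 'é', 'l', 'l', 'o']);
}
```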
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/268
- [ ] Implementation: https://github.com/rust-lang/rust/pull/133057
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,665,213,021 | neovim | `<Cmd>` keys not handled properly when used with `--remote-send` | ### Problem
`<Cmd>` keys are not handled properly when used with `--remote-send`. Keys wrapped between `<Cmd>` and `<CR>` can be interpreted as if they were typed in normal mode instead of as part of a command.
For example, if an nvim instance is in the middle of a pending `gi` mapping (i.e. after `g` has been pressed), and we send it `<Cmd>if v:true | echo 'msg from remote' | endif<CR>`, it will insert `f v:true | echo 'msg from remote' | endif` into the current buffer.
### Steps to reproduce
1. `nvim --clean` to enter nvim
2. `:echo v:servername` to find the listen address
3. Press `g` to put nvim in the middle of the `gi` keymap
4. From another terminal, run `nvim --clean --headless --server <servername> --remote-send "<Cmd>if v:true | echo 'msg from remote' | endif<CR>" +'qa!'`
5. String "f v:true | echo 'msg from remote' | endif" is inserted in the buffer
### Expected behavior
nvim should execute the command instead of entering insert mode at the first 'i' and inserting the rest of the command into the buffer.
### Nvim version (nvim -v)
v0.10.2
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Linux 6.11.6
### Terminal name/version
alacritty 0.14.0 + tmux next-3.6
### $TERM environment variable
tmux-256color
### Installation
AUR | bug,input,event-loop,remote | low | Minor |
2,665,227,905 | node | --experimental-test-module-mocks is not working as expected | ### Version
v24.0.0-pre
### Platform
```text
Linux fedora 6.11.6-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 1 16:09:34 UTC 2024 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
This issue was observed inside the nodejs source code, when trying to create a benchmark.
Create a file `benchmark/test_runner/mock-module.js` and copy this code into it:
```js
"use strict";
const { test } = require("node:test");
function main() {
test(async (t) => {
console.log("benchmark");
try {
// Create a mock module
t.mock.module('axios', {
namedExports: {
get: (url) => url
}
});
} catch (e) {
console.error(e);
}
console.log("end");
});
}
main()
```
Now run it using `./node --experimental-test-module-mocks benchmark/test_runner/mock-module.js`
You will get this output
```
benchmark
Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'axios' imported from /home/ankan/Documents/git/me/node/benchmark/test_runner/mock-module.js
at Object.getPackageJSONURL (node:internal/modules/package_json_reader:267:9)
at packageResolve (node:internal/modules/esm/resolve:768:81)
at moduleResolve (node:internal/modules/esm/resolve:854:18)
at defaultResolve (node:internal/modules/esm/resolve:984:11)
at nextResolve (node:internal/modules/esm/hooks:748:28)
at resolve (node:internal/test_runner/mock/loader:78:35)
at nextResolve (node:internal/modules/esm/hooks:748:28)
at Hooks.resolve (node:internal/modules/esm/hooks:240:30)
at handleMessage (node:internal/modules/esm/worker:199:24)
at Immediate.checkForMessages (node:internal/modules/esm/worker:141:28) {
code: 'ERR_MODULE_NOT_FOUND'
}
end
✔ <anonymous> (42.524333ms)
(node:91728) ExperimentalWarning: Module mocking is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
ℹ tests 1
ℹ suites 0
ℹ pass 1
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 48.749785
```
As you can see, we are trying to create a mock 'axios' module, but we receive the error `Cannot find package 'axios'`, which is **unexpected, as we are not even importing** the module. The error occurs at the `t.mock.module` line.
### How often does it reproduce? Is there a required condition?
Does not need any special condition.
### What is the expected behavior? Why is that the expected behavior?
It should create a mock module called `axios`; mocking should not require the real package to be resolvable.
### What do you see instead?
There is an error saying package 'axios' was not found, which is very unexpected.
### Additional information
_No response_ | test_runner | low | Critical |
2,665,284,614 | pytorch | DCP fails to save optimizer state if zero_grad() has not been called | ### 🐛 Describe the bug
The loaded state dict is silently empty if the parameter still has grads. (Testcase modified from some dcp async_save tutorial.)
```
import os
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
import torch.multiprocessing as mp
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from torch.distributed.checkpoint.stateful import Stateful
from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType
from torch.distributed.checkpoint import StorageWriter
from torch.distributed.checkpoint.state_dict import (
get_model_state_dict,
get_optimizer_state_dict,
set_model_state_dict,
set_optimizer_state_dict,
StateDictOptions
)
CHECKPOINT_DIR = "checkpoint"
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net2 = nn.Linear(1, 1, bias=False)
def forward(self, x):
return self.net2(x)
class ModelWrapper(Stateful):
def __init__(self, model: nn.Module) -> None:
self.model = model
def state_dict(self) -> None:
return get_model_state_dict(self.model)
def load_state_dict(self, state_dict) -> None:
set_model_state_dict(self.model, state_dict)
class OptimizerWrapper(Stateful):
def __init__(self, model: nn.Module, optim: torch.optim.Optimizer) -> None:
self.model = model
self.optim = optim
def state_dict(self) -> None:
return get_optimizer_state_dict(self.model, self.optim, options=StateDictOptions(flatten_optimizer_state_dict=True))
def load_state_dict(self, state_dict) -> None:
set_optimizer_state_dict(self.model, self.optim, optim_state_dict=state_dict, options=StateDictOptions(flatten_optimizer_state_dict=True))
if __name__ == "__main__":
rank = 0
world_size = 1
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355 "
dist.init_process_group("cuda:nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
model = ToyModel().to(rank)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)
loss = model(torch.rand(1, device="cuda")).sum()
loss.backward()
optimizer.step()
lr_scheduler.step()
# optimizer.zero_grad() # uncomment me to fix DCP!
state_dict = {"model": ModelWrapper(model), "optimizer": OptimizerWrapper(model, optimizer)}
dcp.save(state_dict, checkpoint_id=f"{CHECKPOINT_DIR}")
with torch.no_grad():
print("optimizer", optimizer.state_dict())
optimizer = torch.optim.Adam(model.parameters(), lr=20)
model.net2.weight.zero_()
print("optimizer is cleared:", optimizer.state_dict())
print("model is cleared:", model.state_dict())
state_dict = {"model": ModelWrapper(model), "optimizer": OptimizerWrapper(model, optimizer)}
dcp.load(state_dict, checkpoint_id=f"{CHECKPOINT_DIR}")
print("optimizer after load:", optimizer.state_dict())
print("model after load:", model.net2.weight)
dist.destroy_process_group()
```
Result:
```
optimizer {'state': {0: {'step': tensor(1.), 'exp_avg': tensor([[0.0388]], device='cuda:0'), 'exp_avg_sq': tensor([[0.0002]], device='cuda:0')}}, 'param_groups': [{'lr': 0.010000000000000002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'initial_lr': 0.1, 'params': [0]}]}
optimizer is cleared: {'state': {}, 'param_groups': [{'lr': 20, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'params': [0]}]}
model is cleared: OrderedDict([('net2.weight', tensor([[0.]], device='cuda:0'))])
optimizer after load: {'state': {0: {}}, 'param_groups': [{'lr': 0.010000000000000002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'params': [0]}]}
model after load: Parameter containing:
tensor([[-0.3503]], device='cuda:0', requires_grad=True)
```
Interpretation: the first optimizer print shows that 'exp_avg' is filled. The second optimizer print shows that creating a new optimizer has cleared it. The third optimizer print shows that 'state' is still missing 'exp_avg'.
If the line `# optimizer.zero_grad() # uncomment me to fix DCP!` is uncommented, this problem disappears, and the loaded optimizer recovers its 'exp_avg':
```
optimizer after load: {'state': {0: {'step': tensor(1.), 'exp_avg': tensor([[0.0834]], device='cuda:0'), 'exp_avg_sq': tensor([[0.0007]], device='cuda:0')}}, 'param_groups': [{'lr': 0.010000000000000002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'params': [0]}]}
```
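For contrast, a plain (non-distributed) `torch.optim` round trip keeps the Adam state whether or not grads are cleared, which suggests the loss is specific to the DCP path. A minimal CPU-only sketch (a hypothetical illustration, not taken from the repro above):

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss = model(torch.rand(1)).sum()
loss.backward()
opt.step()
# Grads are intentionally NOT zeroed here.

saved = opt.state_dict()  # Adam state ('exp_avg', 'exp_avg_sq') is present
fresh = torch.optim.Adam(model.parameters(), lr=20)
fresh.load_state_dict(saved)
restored = fresh.state_dict()["state"][0]
assert "exp_avg" in restored and "exp_avg_sq" in restored
```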
### Versions
```
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,665,286,115 | pytorch | DCP fails to save "initial_lr" field, created by an LR scheduler | ### 🐛 Describe the bug
Initial LR is present in the optimizer's state dict on saving, but it disappears on loading. This prevents resuming an LR scheduler; we need hacks such as constructing the scheduler with `last_epoch=-1` or manually patching `initial_lr` to make one work.
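The manual patch mentioned above can be sketched like this (a CPU-only, non-distributed illustration, not the DCP repro; `StepLR` raises if constructed with `last_epoch >= 0` while `initial_lr` is missing from a param group):

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
# Simulate an optimizer restored without 'initial_lr' in its param groups:
for g in opt.param_groups:
    g.pop("initial_lr", None)

# Workaround: re-inject 'initial_lr' before re-attaching the scheduler
# with a resumed epoch counter.
for g in opt.param_groups:
    g.setdefault("initial_lr", g["lr"])
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=0.1, last_epoch=0)
```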
Testcase:
```python
import os

import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
import torch.multiprocessing as mp
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from torch.distributed.checkpoint.stateful import Stateful
from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType
from torch.distributed.checkpoint import StorageWriter
from torch.distributed.checkpoint.state_dict import (
    get_model_state_dict,
    get_optimizer_state_dict,
    set_model_state_dict,
    set_optimizer_state_dict,
    StateDictOptions,
)

CHECKPOINT_DIR = "checkpoint"


class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net2 = nn.Linear(1, 1, bias=False)

    def forward(self, x):
        return self.net2(x)


class ModelWrapper(Stateful):
    def __init__(self, model: nn.Module) -> None:
        self.model = model

    def state_dict(self) -> None:
        return get_model_state_dict(self.model)

    def load_state_dict(self, state_dict) -> None:
        set_model_state_dict(self.model, state_dict)


class OptimizerWrapper(Stateful):
    def __init__(self, model: nn.Module, optim: torch.optim.Optimizer) -> None:
        self.model = model
        self.optim = optim

    def state_dict(self) -> None:
        return get_optimizer_state_dict(
            self.model, self.optim,
            options=StateDictOptions(flatten_optimizer_state_dict=True),
        )

    def load_state_dict(self, state_dict) -> None:
        set_optimizer_state_dict(
            self.model, self.optim, optim_state_dict=state_dict,
            options=StateDictOptions(flatten_optimizer_state_dict=True),
        )


if __name__ == "__main__":
    rank = 0
    world_size = 1
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("cuda:nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = ToyModel().to(rank)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

    loss = model(torch.rand(1, device="cuda")).sum()
    loss.backward()
    optimizer.step()
    lr_scheduler.step()
    optimizer.zero_grad()  # uncomment me to fix DCP!

    state_dict = {"model": ModelWrapper(model), "optimizer": OptimizerWrapper(model, optimizer)}
    dcp.save(state_dict, checkpoint_id=f"{CHECKPOINT_DIR}")

    with torch.no_grad():
        print("optimizer", optimizer.state_dict())
        optimizer = torch.optim.Adam(model.parameters(), lr=20)
        model.net2.weight.zero_()
        print("optimizer is cleared:", optimizer.state_dict())
        print("model is cleared:", model.state_dict())

    state_dict = {"model": ModelWrapper(model), "optimizer": OptimizerWrapper(model, optimizer)}
    dcp.load(state_dict, checkpoint_id=f"{CHECKPOINT_DIR}")
    print("optimizer after load:", optimizer.state_dict())
    print("model after load:", model.net2.weight)

    dist.destroy_process_group()
```
(same code as issue #140898)
Output:
```
optimizer {'state': {0: {'step': tensor(1.), 'exp_avg': tensor([[0.0834]], device='cuda:0'), 'exp_avg_sq': tensor([[0.0007]], device='cuda:0')}}, 'param_groups': [{'lr': 0.010000000000000002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'initial_lr': 0.1, 'params': [0]}]}
...
optimizer after load: {'state': {0: {'step': tensor(1.), 'exp_avg': tensor([[0.0834]], device='cuda:0'), 'exp_avg_sq': tensor([[0.0007]], device='cuda:0')}}, 'param_groups': [{'lr': 0.010000000000000002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'params': [0]}]}
```
Notice that the `initial_lr` field has disappeared.
In my recollection, any other field added to the `param_groups` section will likewise disappear after saving and loading.
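Until this is fixed, one possible workaround is to record the extra `param_groups` fields before saving and patch them back into the loaded optimizer before re-attaching the LR scheduler. The sketch below is a minimal illustration on plain dictionaries; `snapshot_extra_fields`, `restore_extra_fields`, and `SURVIVING_KEYS` are hypothetical names, not part of the DCP API:

```python
# Sketch: re-attach fields that DCP drops from param_groups after loading.
# The dict shapes here mirror optimizer.state_dict()["param_groups"].

def snapshot_extra_fields(param_groups, known_keys):
    """Record per-group fields (e.g. 'initial_lr') that are known to be dropped."""
    return [
        {k: v for k, v in group.items() if k not in known_keys}
        for group in param_groups
    ]

def restore_extra_fields(param_groups, snapshots):
    """Patch the recorded fields back into the loaded optimizer's param_groups."""
    for group, extras in zip(param_groups, snapshots):
        for k, v in extras.items():
            group.setdefault(k, v)

# Keys that round-trip through DCP in the output above; everything else is at risk.
SURVIVING_KEYS = {
    "lr", "betas", "eps", "weight_decay", "amsgrad", "maximize",
    "foreach", "capturable", "differentiable", "fused", "params",
}

groups = [{"lr": 0.01, "params": [0], "initial_lr": 0.1}]
saved = snapshot_extra_fields(groups, SURVIVING_KEYS)

loaded = [{"lr": 0.01, "params": [0]}]  # what comes back after dcp.load
restore_extra_fields(loaded, saved)
print(loaded[0]["initial_lr"])  # 0.1
```

With the field restored, the scheduler can be reconstructed without resorting to the `last_epoch=-1` hack.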
### Versions
```
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,665,329,483 | vscode | Some references.view shortcuts not working | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.95.3
- OS Version: Fedora 41
Steps to Reproduce:
1. Open the `Keyboard Shortcuts` preferences window.
2. Attempt to bind `references-view.removeReferenceItem` (Dismiss) to a keybinding
3. Right click some field or other code construct, click "Find All References."
4. Press `references-view.removeReferenceItem` keybind.
5. The keybinding does not work.
Expected behavior:
I'd expect this keybinding to execute "Dismiss" on the currently selected reference item, similar to when you right click -> Dismiss in the reference view.
Other info:
* I have tested various keybindings, ensured no clashing keybindings, restarted vscode, and messed with the "when" conditionals quite a bit.
* I cannot get `references-view.removeReferencesItem`, `references-view.removeCallItem` or `references-view.removeTypeItem` to do anything regardless of my keybinding or editor state.
* I can't get `references-view.refind` to work either.
* `references-view.next`, `references-view.previous`, and `references-view.findReferences` work fine (and are very handy!)
* I installed latest VS Code in a Windows 11 VM and tested there with the exact same results.
Am I doing something wrong? I'm real interested in those removeReferences bindings but I just can't get them to do anything for the life of me. | feature-request,references-viewlet | low | Critical |
2,665,330,637 | vscode | My Snippets in LaTeX Files don't work? | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.88.0
- OS Version: Windows 11 professional version-22H2-22621.4317
Steps to Reproduce:
I used `Help: Start Extension Bisect`, and even after disabling the extensions the problem still exists:
VS Code does not detect `¥¥` and trigger the snippet, as shown in this link:
https://github.com/James-Yu/LaTeX-Workshop/issues/4455
Below is my `settings.json`; it is a little long because of other configuration.
<details><summary>hidden code block `settings.json`</summary>
```
{
"latex-workshop.latex.autoBuild.run": "never",
"latex-workshop.showContextMenu": true,
"latex-workshop.intellisense.package.enabled": true,
"latex-workshop.message.error.show": false,
"latex-workshop.message.warning.show": false,
"latex-workshop.latex.tools": [
{
"name": "xelatex",
"command": "xelatex",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-file-line-error",
"%DOCFILE%"
]
},
{
"name": "xelatex-shell-escape",
"command": "xelatex",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-file-line-error",
"-shell-escape",
"%DOCFILE%"
]
},
{
"name": "pdflatex",
"command": "pdflatex",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-file-line-error",
"%DOCFILE%"
]
},
{
"name": "pdflatex-shell-escape",
"command": "pdflatex",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-file-line-error",
"-shell-escape",
"%DOCFILE%"
]
},
{
"name": "lualatex",
"command": "lualatex",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-file-line-error",
"%DOCFILE%"
]
},
{
"name": "bibtex",
"command": "bibtex",
"args": [
"%DOCFILE%"
]
},
{
"name": "biber",
"command": "biber",
"args": [
"%DOCFILE%"
]
},
{
"name": "latexmkpdf",
"command": "latexmk",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-halt-on-error",
"-file-line-error",
"-pdflatex",
"%DOCFILE%"
]
},
{
"name": "latexmkxe",
"command": "latexmk",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-halt-on-error",
"-file-line-error",
"-xelatex",
"%DOCFILE%"
]
},
{
"name": "latexmklua",
"command": "latexmk",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-halt-on-error",
"-file-line-error",
"-lualatex",
"%DOCFILE%"
]
},
{
"name": "latexmkxe-shell-escape",
"command": "latexmk",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-halt-on-error",
"-file-line-error",
"-xelatex",
"--shell-escape",
"%DOCFILE%"
]
},
{
"name": "latexmk-SmallClean",
"command": "latexmk",
"args": [
"-c"
]
},
{
"name": "latexmk-FullClean",
"command": "latexmk",
"args": [
"-C"
]
},
],
"latex-workshop.latex.recipes": [
{
"name": "latexmkxe",
"tools": [
"latexmkxe"
]
},
{
"name": "latexmkxe-shell-escape",
"tools": [
"latexmkxe-shell-escape"
]
},
{
"name": "latexmklua",
"tools": [
"latexmklua"
]
},
{
"name": "latexmkpdf",
"tools": [
"latexmkpdf"
]
},
{
"name": "latexmk-clean",
"tools": [
"latexmk-SmallClean"
]
},
{
"name": "latexmk-Clean",
"tools": [
"latexmk-FullClean"
]
},
{
"name": "XeLaTeX",
"tools": [
"xelatex",
]
},
{
"name": "XeLaTeX*2",
"tools": [
"xelatex",
"xelatex",
]
},
{
"name": "LuaLaTeX",
"tools": [
"lualatex",
]
},
{
"name": "XeLaTeX-shell-escape",
"tools": [
"xelatex-shell-escape",
]
},
{
"name": "PDFLaTeX",
"tools": [
"pdflatex",
]
},
{
"name": "PDFLaTeX-shell-escape",
"tools": [
"pdflatex-shell-escape",
]
},
{
"name": "BibTeX",
"tools": [
"bibtex",
]
},
{
"name": "Biber",
"tools": [
"biber",
]
},
{
"name": "XeLaTeX -> BibTeX -> XeLaTeX*2",
"tools": [
"xelatex",
"bibtex",
"xelatex",
"xelatex",
]
},
{
"name": "XeLaTeX -> Biber -> XeLaTeX*2",
"tools": [
"xelatex",
"biber",
"xelatex",
"xelatex",
]
},
{
"name": "XeLaTeX-shell-escape -> BibTeX -> XeLaTeX*2",
"tools": [
"xelatex-shell-escape",
"bibtex",
"xelatex-shell-escape",
"xelatex-shell-escape",
]
},
{
"name": "XeLaTeX-shell-escape -> Biber -> XeLaTeX*2",
"tools": [
"xelatex-shell-escape",
"biber",
"xelatex-shell-escape",
"xelatex-shell-escape",
]
},
{
"name": "XeLaTeX-shell-escape -> BibTeX -> XeLaTeX*2",
"tools": [
"xelatex-shell-escape",
"bibtex",
"xelatex-shell-escape",
"xelatex-shell-escape",
]
},
],
"latex-workshop.latex.clean.fileTypes": [
"*.aux",
"*.bbl",
"*.blg",
"*.idx",
"*.ind",
"*.lof",
"*.lot",
"*.out",
"*.toc",
"*.acn",
"*.acr",
"*.alg",
"*.glg",
"*.glo",
"*.gls",
"*.ist",
"*.fls",
"*.log",
"*.xdv",
"*.nav",
"*.snm",
"*.fdb_latexmk"
],
"latex-workshop.latex.autoClean.run": "onFailed",
"latex-workshop.latex.recipe.default": "lastUsed",
"latex-workshop.view.pdf.internal.synctex.keybinding": "double-click",
"files.autoSave": "afterDelay",
"extensions.autoCheckUpdates": false,
"update.mode": "none",
"backgroundCover.autoStatus": true,
"backgroundCover.randomImageFolder": "e:\\Pictures\\vsc-wallpaper",
"backgroundCover.imagePath": "e:\\Pictures\\vsc-wallpaper\\jhjjgh.png",
"editor.wordWrap": "on",
"Codegeex.Privacy": false,
"Codegeex.DisabledFor": {
"": true
},
"typescript.suggest.paths": false,
"javascript.suggest.paths": false,
"security.workspace.trust.untrustedFiles": "open",
"vscode_custom_css.imports": [
"file:///C:/Users/Kasmir/AppData/Local/Programs/Microsoft VS Code/resources/app/out/vs/workbench/vscode-custom.css"
],
"workbench.iconTheme": "vscode-great-icons",
"explorer.confirmDelete": false,
"explorer.confirmDragAndDrop": false,
"editor.fontFamily": "'Cascadia Code',霞鹜文楷,Consolas,'Courier New',monospace",
"editor.fontLigatures": true,
"workbench.startupEditor": "none",
"jupyter.askForKernelRestart": true,
"window.zoomLevel": 2,
"terminal.integrated.fontWeightBold": 800,
"notebook.lineNumbers": "on",
"makefile.configureOnOpen": true,
"workbench.colorCustomizations": {},
"terminal.integrated.profiles.windows": {
"PowerShell": {
"source": "PowerShell",
"icon": "terminal-powershell"
},
"Command Prompt": {
"path": [
"${env:windir}\\Sysnative\\cmd.exe",
"${env:windir}\\System32\\cmd.exe"
],
"args": [],
"icon": "terminal-cmd"
},
"Git Bash": {
"source": "Git Bash"
},
"Windows PowerShell": {
"path": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
}
},
"terminal.integrated.defaultProfile.windows": "Windows PowerShell",
"workbench.list.smoothScrolling": true,
"terminal.integrated.smoothScrolling": true,
"editor.cursorSmoothCaretAnimation": "on",
"editor.cursorBlinking": "smooth",
"jupyter.interactiveWindow.textEditor.executeSelection": true,
"terminal.integrated.fontFamily": "'MesloLGS Nerd Font Mono',霞鹜文楷,'Cascadia Code',Consolas,'Courier New',monospace",
"python.languageServer": "Pylance",
"git.useEditorAsCommitInput": false,
"git.confirmSync": false,
"workbench.editorAssociations": {
"{git,gitlens}:/**/*.{md,csv,svg}": "default",
"*.csv": "gc-excelviewer-csv-editor"
},
"git.enableSmartCommit": true,
"Codegeex.License": "",
"editor.scrollbar.ignoreHorizontalScrollbarInContentHeight": true,
"editor.smoothScrolling": true,
"editor.stickyScroll.enabled": false,
"python.terminal.activateEnvironment": false,
"tinymist.formatterMode": "typstyle",
"[latex]": {
},
"editor.defaultFormatter": "ms-python.autopep8",
"terminal.integrated.mouseWheelZoom": true,
"terminal.integrated.fontSize": 15,
"editor.unicodeHighlight.nonBasicASCII": false,
"workbench.colorTheme": "One Dark Pro Darker",
"terminal.integrated.rescaleOverlappingGlyphs": true,
"code-runner.executorMap": {
"python": "python -u -X utf8",
},
"[markdown]": {
"editor.defaultFormatter": "yzhang.markdown-all-in-one"
},
"vscode-hanzi-counter.template.tooltipTemplateName": "zh-hans",
"editor.mouseWheelZoom": true,
"git.openRepositoryInParentFolders": "never",
}
```
</details>
Below is my snippets file `latex.json`; the `tex.json` is empty (by default).
<details><summary>hidden code block `latex.json`</summary>
```
{
// Place your snippets for latex here. Each snippet is defined under a snippet name and has a prefix, body and
// description. The prefix is what is used to trigger the snippet and the body will be expanded and inserted. Possible variables are:
// $1, $2 for tab stops, $0 for the final cursor position, and ${1:label}, ${2:another} for placeholders. Placeholders with the
// same ids are connected.
// Example:
// "Print to console": {
// "prefix": "log",
// "body": [
// "console.log('$1');",
// "$2"
// ],
// "description": "Log output to console"
// }
"replace ¥¥": {
"prefix": "¥¥",
"body": [
"$$"
],
"description": "replace ¥¥"
},
"fraction": {
"prefix": "fr",
"body": [
"\\frac{$1}{$2} ",
],
"description": "fraction"
},
"partial fraction": {
"prefix": "pf",
"body": [
"\\frac{\\partial $1}{\\partial $2} $3",
],
"description": "partial fraction"
},
"partial fraction 2": {
"prefix": "pft",
"body": [
"\\frac{\\partial^2 $1}{\\partial {$2}^2} $3",
],
"description": "partial fraction"
},
"left right ()": {
"prefix": "le",
"body": [
"\\left( $1\\right) $2",
],
"description": "big()"
},
"power": {
"prefix": "dj",
"body": [
"^{$1} ",
],
"description": "^{}"
},
"underline": {
"prefix": "ud",
"body": [
"_{$1} $2",
],
"description": "_{}"
},
"and": {
"prefix": "and",
"body": [
"& $1",
],
"description": "&"
},
"cdot": {
"prefix": "cd",
"body": [
"\\cdot $1",
],
"description": "cdot"
},
"cdots": {
"prefix": "cs",
"body": [
"\\cdots $1",
],
"description": "cdots"
},
"listing": {
"prefix": "ls",
"body": [
"$1_1,$2_2,\\ldots,$3_n $4",
],
"description": "x_1,x_2,...,x_n"
},
"fig": {
"prefix": "fig",
"body": [
"\\begin{figure}[!ht]\n \\centering\n \\includegraphics[$2]{$1}\n \\caption{$3}\n \\label{$4}\n\\end{figure}",
],
"description": "fig"
},
"infty": {
"prefix": "inf",
"body": [
"\\infty $1",
],
"description": "infty"
},
"int upper inf": {
"prefix": "ininf",
"body": [
"\\int^\\infty_{$1} $2",
],
"description": "infty"
},
"suminf": {
"prefix": "suminf",
"body": [
"\\sum^\\infty_{$1} $2",
],
"description": "sum upper infty"
},
"table": {
"prefix": "tab",
"body": [
"\\begin{table*}[!ht]\n \\caption{$1}\n \\vspace*{3pt}\n \\renewcommand\\arraystretch{1.5}\n \\centering\n \\begin{tabular}{p{3cm}<{\\centering}p{3cm}<{\\centering}p{1.5cm}<{\\centering}p{1.5cm}<{\\centering}}\n \\toprule\n $2&&& \\\\\\ \n \\midrule\n $3&&& \\\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n$4",
],
"description": "table"
},
"overline": {
"prefix": "ov",
"body": [
"\\overline{$1}$2",
],
"description": "overline"
},
"vec": {
"prefix": "vec",
"body": [
"\\overrightarrow{$1}$2",
],
"description": "overrightarrow"
},
"alpha": {
"prefix": "alp",
"body": [
"\\alpha ",
],
"description": "alpha"
},
"beta": {
"prefix": "bet",
"body": [
"\\beta ",
],
"description": "beta"
},
"phi": {
"prefix": "phi",
"body": [
"\\varphi ",
],
"description": "phi"
},
"pi": {
"prefix": "pi",
"body": [
"\\pi ",
],
"description": "pi"
},
"mu": {
"prefix": "mu",
"body": [
"\\mu ",
],
"description": "mu"
},
"int": {
"prefix": "int",
"body": [
"\\int",
],
"description": "int"
},
"zeta": {
"prefix": "zet",
"body": [
"\\zeta ",
],
"description": "zeta"
},
"lim_0": {
"prefix": "lim0",
"body": [
"\\lim_{$1 \\rightarrow 0} $2",
],
"description": "lim x -> 0"
},
"lim_inf": {
"prefix": "linf",
"body": [
"\\lim_{$1 \\rightarrow \\infty} $2",
],
"description": "lim x -> oo"
},
"liomega": {
"prefix": "om",
"body": [
"\\omega ",
],
"description": "omega"
},
"frame": {
"prefix": "frame",
"body": [
"\\begin{frame}[fragile]{$1}\n $2\n\\end{frame}\n",
],
"description": "frame environment"
},
"cols": {
"prefix": "cols",
"body": [
"\\begin{columns}\n$1\n\\end{columns}",
],
"description": "columns environment"
},
"col": {
"prefix": "col",
"body": [
"\\begin{column}{$1\\textwidth}\n$2\n\\end{column}$3",
],
"description": "column environment"
},
"height": {
"prefix": "hei",
"body": [
"height = $1 \\textheight - $2 cm",
],
"description": "height"
},
"width": {
"prefix": "wid",
"body": [
"width = $1 \\textwidth - $2 cm",
],
"description": "wid"
},
"item": {
"prefix": "it",
"body": [
"\\begin{itemize}\n \\item $1 \n\\end{itemize}",
],
"description": "itemize"
},
"CJK": {
"prefix": "cjk",
"body": [
"\\begin{CJK*}{UTF8}{gbsn}\n $1 \n\\end{CJK*}",
],
"description": "CJK-song"
},
"Noindentqquad": {
"prefix": "nq",
"body": [
"\\noindent\\qquad ",
],
"description": "缩进"
},
"define": {
"prefix": "ejgs",
"body": [
"\\begin{ejgs}\\textbf{$1}\\\\\\\n \\phantom{占位} $2 \n\\end{ejgs}",
],
"description": "ejgs环境"
},
"Delta": {
"prefix": "del",
"body": [
"\\Delta",
],
"description": "Delta"
},
"delta": {
"prefix": "delta",
"body": [
"\\delta",
],
"description": "delta"
},
"rho": {
"prefix": "rho",
"body": [
"\\rho ",
],
"description": "rho"
},
"BoldinMath": {
"prefix": "cu",
"body": [
"\\textit{\\textbf{$1}} ",
],
"description": "bold"
},
"Boldinalpha": {
"prefix": "calp",
"body": [
"\\boldsymbol{$1} ",
],
"description": "bold"
},
"dd frac": {
"prefix": "df",
"body": [
"\\frac{\\mathrm{d} $1}{\\mathrm{d} $2} ",
],
"description": "dd frac"
},
"times": {
"prefix": "ch",
"body": [
"\\times ",
],
"description": "times"
},
"varepsilon": {
"prefix": "ep",
"body": [
"\\varepsilon ",
],
"description": "varepsilon"
},
"varepsilon_0": {
"prefix": "ep0",
"body": [
"\\varepsilon_0 ",
],
"description": "varepsilon_0"
},
"varepsilon_r": {
"prefix": "epr",
"body": [
"\\varepsilon_r ",
],
"description": "varepsilon_r"
},
"dwk": {
"prefix": "dwk",
"body": [
"\\frac{1}{4 \\pi \\varepsilon _{0} } ",
],
"description": "大学物理的k"
},
"sigma": {
"prefix": "sig",
"body": [
"\\sigma ",
],
"description": "sigma"
},
"partial": {
"prefix": "par",
"body": [
"\\partial ",
],
"description": "partial"
},
"sintefblue": {
"prefix": "sint",
"body": [
"\\bf\\large\\color{sintefblue}",
],
"description": "sintefblue"
},
"bfsintefblue": {
"prefix": "bs",
"body": [
"\\bf\\color{sintefblue}",
],
"description": "bfsintefblue"
},
"align*": {
"prefix": "ali",
"body": [
"\\begin{align*}\n $1\n\\end{align*}\n",
],
"description": "align*"
},
"sqrt": {
"prefix": "sq",
"body": [
"\\sqrt{$1}",
],
"description": "sqrt"
},
"lnot": {
"prefix": "ln",
"body": [
"\\lnot ",
],
"description": "lnot"
},
"wedge": {
"prefix": "wl",
"body": [
"\\wedge ",
],
"description": "wedge"
},
"vee": {
"prefix": "lw",
"body": [
"\\vee ",
],
"description": "vee"
},
"overbar": {
"prefix": "ba",
"body": [
"\\overline{$1} ",
],
"description": "overline"
},
}
```
</details> | info-needed,snippets | medium | Critical |
2,665,335,243 | flutter | Flutter doesn't honor archivesBaseName | ### Steps to reproduce
When using `archivesBaseName` in `app/build.gradle`,
Run `flutter build apk --release`
### Expected results
Expect to find `MyApp***-v0.0.1-release.apk` at `build/app/outputs/flutter-apk/` instead of `app-release.apk`
### Actual results
The build produces two identical APKs:
```console
Running Gradle task 'assembleRelease'... 20.8s
✓ Built build\app\outputs\flutter-apk\app-release.apk (20.9MB)
```
After that
```console
$ find . -name "*.apk" -exec sha256sum {} \;
16cda043ff14797af902b07060f7fdc375f44e5580ca95d4dc55ad4e9658c7f5 *./build/app/outputs/apk/release/MyApp-v0.0.1-release.apk
16cda043ff14797af902b07060f7fdc375f44e5580ca95d4dc55ad4e9658c7f5 *./build/app/outputs/flutter-apk/app-release.apk
```
### Code sample
<details open><summary>Code sample</summary>
```gradle
android {
namespace = "com.example.myapp"
archivesBaseName = "MyApp-v${flutter.versionName}"
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.5, on Microsoft Windows [Version 10.0.22631.4460], locale en-US)
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Chrome - develop for the web
[✓] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.11.5)
[✓] Android Studio (version 2024.2)
[✓] VS Code, 64-bit edition (version 1.94.2)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| c: new feature,platform-android,tool,t: gradle,a: build,has reproducible steps,P2,team-android,triaged-android,found in release: 3.24,found in release: 3.27 | low | Minor |
2,665,409,703 | Python | Find: audio_filters/butterworth_filter.py issue | ### Repository commit
fcf82a1eda21dcf36254a8fcaadc913f6a94c8da
### Python version (python --version)
Python 3.10.6
### Dependencies version (pip freeze)
```
absl-py==2.1.0
astunparse==1.6.3
beautifulsoup4==4.12.3
certifi==2024.8.30
charset-normalizer==3.4.0
contourpy==1.3.0
cycler==0.12.1
dill==0.3.9
dom_toml==2.0.0
domdf-python-tools==3.9.0
fake-useragent==1.5.1
flatbuffers==24.3.25
fonttools==4.54.1
gast==0.6.0
google-pasta==0.2.0
grpcio==1.67.0
h5py==3.12.1
idna==3.10
imageio==2.36.0
joblib==1.4.2
keras==3.6.0
kiwisolver==1.4.7
libclang==18.1.1
lxml==5.3.0
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.9.2
mdurl==0.1.2
ml-dtypes==0.3.2
mpmath==1.3.0
namex==0.0.8
natsort==8.4.0
numpy==1.26.4
oauthlib==3.2.2
opencv-python==4.10.0.84
opt_einsum==3.4.0
optree==0.13.0
packaging==24.1
pandas==2.2.3
patsy==0.5.6
pbr==6.1.0
pillow==11.0.0
pip==24.2
protobuf==4.25.5
psutil==6.1.0
Pygments==2.18.0
pyparsing==3.2.0
python-dateutil==2.9.0.post0
pytz==2024.2
qiskit==1.2.4
qiskit-aer==0.15.1
requests==2.32.3
requests-oauthlib==1.3.1
rich==13.9.2
rustworkx==0.15.1
scikit-learn==1.5.2
scipy==1.14.1
setuptools==74.1.2
six==1.16.0
soupsieve==2.6
sphinx-pyproject==0.3.0
statsmodels==0.14.4
stevedore==5.3.0
symengine==0.13.0
sympy==1.13.3
tensorboard==2.16.2
tensorboard-data-server==0.7.2
tensorflow==2.16.2
tensorflow-io-gcs-filesystem==0.37.1
termcolor==2.5.0
threadpoolctl==3.5.0
tomli==2.0.2
tweepy==4.14.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
Werkzeug==3.0.4
wheel==0.44.0
wrapt==1.16.0
xgboost==2.1.1
```
### Expected behavior
- Frequency (frequency): It should be ensured that the frequency is a reasonable positive value and does not exceed the Nyquist frequency (i.e., half of the sampling rate). If the frequency is too high, it may lead to an unstable filter.
- Sampling Rate (samplerate): The sampling rate should be a positive integer and is typically fixed, but it should still be ensured that it is a reasonable value.
- Q Factor (q_factor): The Q factor should be a positive value. Typically, it should not be too small (which would result in a very wide transition band) or too large (which could cause the filter to oscillate or become unstable).
### Actual behavior
The issue was resolved by implementing the additional constraints below:
```python
from math import cos, sin, sqrt, tau

from audio_filters.iir_filter import IIRFilter


def make_highpass(
    frequency: int,
    samplerate: int,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    """
    Create a second-order IIR high-pass filter (Butterworth design).

    Args:
        frequency (int): Cutoff frequency of the high-pass filter.
        samplerate (int): Sampling rate.
        q_factor (float, optional): Quality factor; defaults to 1 / sqrt(2).

    Returns:
        IIRFilter: The resulting IIR high-pass filter object.

    Raises:
        ValueError: If an input parameter is invalid.
    """
    # Input validation
    if not (isinstance(frequency, int) and frequency > 0):
        raise ValueError("Frequency must be a positive integer.")
    if not (isinstance(samplerate, int) and samplerate > 0):
        raise ValueError("Samplerate must be a positive integer.")
    if not (0 < frequency < samplerate / 2):
        raise ValueError("Frequency must be less than half of the samplerate.")
    if q_factor <= 0:
        raise ValueError("Q factor must be positive.")

    # Intermediate variables
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)

    # Filter coefficients
    b0 = (1 + _cos) / 2
    b1 = -1 - _cos
    a0 = 1 + alpha
    a1 = -2 * _cos
    a2 = 1 - alpha

    # Create and configure the IIR filter object
    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b0])
    return filt


# Example usage
if __name__ == "__main__":
    try:
        highpass = make_highpass(1000, 48000)
        print(highpass.a_coeffs + highpass.b_coeffs)
    except ValueError as e:
        print(f"Error: {e}")
```
I do not know the hash at the repo. | bug | medium | Critical |
2,665,454,278 | svelte | Improved error message for @render containing invalid html | ### Describe the problem
I spent hours debugging errors about illegal invocation and node type, tried googling and searching the Svelte docs, and found nothing. Below are the errors I got:


### Describe the proposed solution
The error should be clear that the html node wrapping the @render call cannot wrap the contents of the `@render`, eg. a `<p>` containing a `@render` containing a `<p>`. The error messages were not clear at all and me and my friend who have used svelte for years and svelte 5 for months spent a long time debugging to figure this out. A simple, readable error message would have saved us many hours.
### Importance
would make my life easier | awaiting submitter | low | Critical |
2,665,486,532 | react | Bug: The downloaded release version of 18.2.0 is 18.1.0 when opened. | <img width="1218" alt="image" src="https://github.com/user-attachments/assets/7288ae96-3b04-4451-b58b-73bedd29b06f">
<img width="646" alt="image" src="https://github.com/user-attachments/assets/0589ec1a-2dbc-4c70-bb07-ad341cf68356">
The downloaded release version of 18.2.0 is 18.1.0 when opened. | Status: Unconfirmed | medium | Critical |
2,665,490,727 | PowerToys | Screen Ruler: Lines barely visible on 4k screen (1px width) | ### Description of the new feature / enhancement
**Problem Summary:** On Windows 11 24H2 (other versions not tested), PowerToys version 0.86.0, and 4K displays of any size, the **Ruler tool's lines are almost invisible.** This issue is **particularly noticeable on large 4K displays,** commonly used in web/app development offices and conference rooms, and is also evident on my 4K laptop display where the lines are faint.
**Requested Improvements:** Add a slider bar / number box to either side of the color picker allowing users to change the width of the ruler line. Please allow options in small increments, such as 1.25, 1.50. 1.75, - up to 5px would solve this issue and would be greatly appreciated.
**Additional Requests:** When using the bounds option of the Ruler tool, it would be nice for the measurement to stay on-screen after letting go of the mouse button (while discussions ensue). The measurement can be removed when any key or mouse button is pressed. Current behavior is that the measure only stays on screen when the mouse button is held down.
### Scenario when this would be used?
**Why this Improvement is Important:** Most pertinent info is described above. A slider or number box allowing users to increase the line width of the Ruler tool line is an important missed feature and adding it which would allow users to effectively use the tool no matter their display size, resolution, distance from the screen, or their eyesight. This is especially prominent on larger displays in web/app developer offices where 1 or more people may be working on a project and utilizing the Ruler tool to measure distances on-screen. However, due to the current 1px line width, the colored ruler line becomes nearly impossible to see on larger displays or displays with 4k+ resolution.
### Supporting information
Additional context is provided above. | Needs-Triage | low | Minor |
2,665,625,826 | godot | Focused `SpinBox` fires `ValueChanged` signal when `SetValueNoSignal` is called | ### Tested versions
Reproducible in:
- godot.v4.3.stable.mono.official
- godot.v4.4.dev4.mono.official
### System information
Tested on Windows 11 and Ubuntu, so its not OS or hardware specific
### Issue description
A focused `SpinBox` will fire a `ValueChanged` signal with the **old value** when setting a **new value** to the `SpinBox` with `SetValueNoSignal` after the focus goes to a `TreeItem`
The problem is that `SetValueNoSignal` shouldn't even fire a signal with the **new value**. And it fires the **old value**, which is worse.
Note that if the focus goes to a `Button` the signal isn't fired which is very weird..
PS: I reported this bug [Here](https://github.com/godotengine/godot/issues/99323) but the explanation was complicated. The example from [That](https://github.com/godotengine/godot/issues/99323) report shows that the problem does not occur with buttons.
### Steps to reproduce
1. Open the attached Minimal reproduction project
2. Click on the `TreeItems` to see how the `SpinBox` value changes to the label value
3. Click on the `SpinBox` to focus it
4. Click on another `TreeItem` and watch the `SpinBox` fire a signal with the old value
Note that if the `SpinBox` is focused and you press `Esc` to remove the focus, the signal does not fire
### Minimal reproduction project (MRP)
[SpinBoxBug.zip](https://github.com/user-attachments/files/17789772/SpinBoxBug.zip)
| bug,needs testing,topic:gui | low | Critical |
2,665,652,228 | electron | Enable Chromium flag #windows11-mica-titlebar to address white flash issues on Windows | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Electron apps on Windows can show a "white flash" when restoring a minimized window. This disrupts the user experience, especially in apps with dark themes.
This issue often frustrates me a lot, and I believe many dark mode users feel the same. I think most people just get used to it since it usually only appears for a split second, and full-window white flashes are rare. It’s not terrible, but it would be amazing if it never appeared at all.
### Proposed Solution
I’m requesting support for the Chromium flag **#windows11-mica-titlebar** in Electron. This flag addresses a common issue on Windows where a "white flash" appears when restoring a minimized Electron window. Allowing this flag as an argument would greatly improve the user experience, especially for applications with dark themes. ;)
In short, I’d love for the "**--enable-features=windows11-mica-titlebar**" argument to actually be supported.
Please let me know if more info is needed, but testing minimizing/maximizing in Chromium with the flag enabled or disabled should be straightforward.
### Alternatives Considered
I tried switching to different ANGLE graphics backends like D3D9 and OpenGL. While they did get rid of the "white flash", the performance was much worse compared to the default ANGLE backend, making them an unrealistic solution for apps that should be fast and responsive.
### Additional Information
Also, technically this solution would only work on Windows 11, but I think it’s still much better than having no solution at all.
+ support for Windows 10 will end in just under 11 months. | enhancement :sparkles:,platform/windows,component/BrowserWindow | low | Major |
2,665,702,925 | bitcoin | MSVC 17.12.0 internal compiler error | The [latest version](https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes#17.12.0) of MSVC causes an internal compiler error in `src/test/fuzz/utxo_snapshot.cpp` for both "Release" and "Debug" configurations:
```
< snip >
utxo_snapshot.cpp
C:\Users\hebasto\source\repos\bitcoin\src\test\fuzz\utxo_snapshot.cpp(45,1): error C1001: Internal compiler error. [C:\Users\hebasto\source\repos\bitcoin\build-static\src\test\fuzz\fuzz.vcxproj]
C:\Users\hebasto\source\repos\bitcoin\src\test\fuzz\utxo_snapshot.cpp(45,1): error C1001: (compiler file 'D:\a\_work\1\s\src\vctools\Compiler\CxxFE\sl\p1\c\symbols.c', line 33772) [C:\Users\hebasto\source\repos\bitcoin\build-static\src\test\fuzz\fuzz.vcxproj]
C:\Users\hebasto\source\repos\bitcoin\src\test\fuzz\utxo_snapshot.cpp(45,1): error C1001: To work around this problem, try simplifying or changing the program near the locations listed above. [C:\Users\hebasto\source\repos\bitcoin\build-static\src\test\fuzz\fuzz.vcxproj]
C:\Users\hebasto\source\repos\bitcoin\src\test\fuzz\utxo_snapshot.cpp(45,1): error C1001: If possible please provide a repro here: https://developercommunity.visualstudio.com [C:\Users\hebasto\source\repos\bitcoin\build-static\src\test\fuzz\fuzz.vcxproj]
C:\Users\hebasto\source\repos\bitcoin\src\test\fuzz\utxo_snapshot.cpp(45,1): error C1001: Please choose the Technical Support command on the Visual C++ [C:\Users\hebasto\source\repos\bitcoin\build-static\src\test\fuzz\fuzz.vcxproj]
C:\Users\hebasto\source\repos\bitcoin\src\test\fuzz\utxo_snapshot.cpp(45,1): error C1001: Help menu, or open the Technical Support help file for more information [C:\Users\hebasto\source\repos\bitcoin\build-static\src\test\fuzz\fuzz.vcxproj]
```
According to Microsoft [docs](https://learn.microsoft.com/en-us/cpp/error-messages/compiler-errors-1/fatal-error-c1001):
> If the file has a cxxfe ..., it is probably a parser error.
The issue also occurs in recent CI jobs using the image version `20241113.3.0`. | Windows,Upstream | medium | Critical |
2,665,711,245 | opencv | Support optional foreground input mask to cv::BackgroundSubtractor::apply | ### Describe the feature and motivation
For some applications, it would be beneficial to support an optional foreground input mask to cv::BackgroundSubtractor::apply that indicates areas of the image that are known to be foreground pixels, and thus shouldn't be used in updating the background model. This should help prevent the model from biasing towards foreground pixels and give a better estimate of the background in subsequent frames.
(You might ask why do we need a background detector if we can already detect foreground pixels? This would be for a use case where only some foreground pixels are easily detectable, not all, and the background can still slowly vary over time). (Could also be useful for removing objects that are still for a long number of frames, but not part of the background, that can be detected using other methods)
### Additional context
_No response_ | feature,category: video | low | Major |
2,665,711,666 | ui | [bug]: CLI can't parse existing tailwind.config.ts and messes up while running "add" command | ### Describe the bug
In my project, I use shadcn ui components and tremor components. I installed shadcn ui components first and then configured tremor components (https://tremor.so/docs/getting-started/installation/next)
Both of these component libraries require the tailwind.config.ts to be modified.
So, I have current tailwind.config.ts like this:
```typescript
import type { Config } from 'tailwindcss'
import colors from 'tailwindcss/colors'
const config: Config = {
darkMode: ['class'],
content: [
'./src/**/*.{js,ts,jsx,tsx,mdx}',
// Path to Tremor module
'./node_modules/@tremor/**/*.{js,ts,jsx,tsx}'
],
theme: {
transparent: 'transparent',
current: 'currentColor',
extend: {
backgroundImage: {
'gradient-radial': 'radial-gradient(var(--tw-gradient-stops))',
'gradient-conic': 'conic-gradient(from 180deg at 50% 50%, var(--tw-gradient-stops))'
},
boxShadow: {
'tremor-input': '0 1px 2px 0 rgb(0 0 0 / 0.05)',
'tremor-card': '0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)',
'tremor-dropdown': '0 4px 6px -1px rgb(0 0 0 / 0.1), 0 2px 4px -2px rgb(0 0 0 / 0.1)',
'dark-tremor-input': '0 1px 2px 0 rgb(0 0 0 / 0.05)',
'dark-tremor-card': '0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)',
'dark-tremor-dropdown': '0 4px 6px -1px rgb(0 0 0 / 0.1), 0 2px 4px -2px rgb(0 0 0 / 0.1)'
},
borderRadius: {
lg: 'var(--radius)',
md: 'calc(var(--radius) - 2px)',
sm: 'calc(var(--radius) - 4px)',
'tremor-small': '0.375rem',
'tremor-default': '0.5rem',
'tremor-full': '9999px'
},
fontSize: {
'tremor-label': ['0.75rem', { lineHeight: '1rem' }],
'tremor-default': ['0.875rem', { lineHeight: '1.25rem' }],
'tremor-title': ['1.125rem', { lineHeight: '1.75rem' }],
'tremor-metric': ['1.875rem', { lineHeight: '2.25rem' }]
},
colors: {
tremor: {
brand: {
faint: 'colors.blue[50],
muted: 'colors.blue[200],
subtle: 'colors.blue[400],
DEFAULT: 'colors.blue[500],
emphasis: 'colors.blue[700],
inverted: 'colors.white'
},
background: {
muted: 'colors.gray[50],
subtle: 'colors.gray[100],
DEFAULT: 'colors.white',
emphasis: 'colors.gray[700]
},
border: {
DEFAULT: 'colors.gray[200]
},
ring: {
DEFAULT: 'colors.gray[200]
},
content: {
subtle: 'colors.gray[400],
DEFAULT: 'colors.gray[500],
emphasis: 'colors.gray[700],
strong: 'colors.gray[900],
inverted: 'colors.white'
}
},
'dark-tremor': {
brand: {
faint: '#0B1229',
muted: 'colors.blue[950],
subtle: 'colors.blue[800],
DEFAULT: 'colors.blue[500],
emphasis: 'colors.blue[400],
inverted: 'colors.blue[950]
},
background: {
muted: '#131A2B',
subtle: 'colors.gray[800],
DEFAULT: 'colors.gray[900],
emphasis: 'colors.gray[300]
},
border: {
DEFAULT: 'colors.gray[800]
},
ring: {
DEFAULT: 'colors.gray[800]
},
content: {
subtle: 'colors.gray[600],
DEFAULT: 'colors.gray[500],
emphasis: 'colors.gray[200],
strong: 'colors.gray[50],
inverted: 'colors.gray[950]
}
},
background: 'hsl(var(--background))',
foreground: 'hsl(var(--foreground))',
card: {
DEFAULT: 'hsl(var(--card))',
foreground: 'hsl(var(--card-foreground))'
},
popover: {
DEFAULT: 'hsl(var(--popover))',
foreground: 'hsl(var(--popover-foreground))'
},
primary: {
DEFAULT: 'hsl(var(--primary))',
foreground: 'hsl(var(--primary-foreground))'
},
secondary: {
DEFAULT: 'hsl(var(--secondary))',
foreground: 'hsl(var(--secondary-foreground))'
},
muted: {
DEFAULT: 'hsl(var(--muted))',
foreground: 'hsl(var(--muted-foreground))'
},
accent: {
DEFAULT: 'hsl(var(--accent))',
foreground: 'hsl(var(--accent-foreground))'
},
destructive: {
DEFAULT: 'hsl(var(--destructive))',
foreground: 'hsl(var(--destructive-foreground))'
},
border: 'hsl(var(--border))',
input: 'hsl(var(--input))',
ring: 'hsl(var(--ring))',
chart: {
'1': 'hsl(var(--chart-1))',
'2': 'hsl(var(--chart-2))',
'3': 'hsl(var(--chart-3))',
'4': 'hsl(var(--chart-4))',
'5': 'hsl(var(--chart-5))'
},
sidebar: {
DEFAULT: 'hsl(var(--sidebar-background))',
foreground: 'hsl(var(--sidebar-foreground))',
primary: 'hsl(var(--sidebar-primary))',
'primary-foreground': 'hsl(var(--sidebar-primary-foreground))',
accent: 'hsl(var(--sidebar-accent))',
'accent-foreground': 'hsl(var(--sidebar-accent-foreground))',
border: 'hsl(var(--sidebar-border))',
ring: 'hsl(var(--sidebar-ring))'
},
'color-1': 'hsl(var(--color-1))',
'color-2': 'hsl(var(--color-2))',
'color-3': 'hsl(var(--color-3))',
'color-4': 'hsl(var(--color-4))',
'color-5': 'hsl(var(--color-5))'
},
keyframes: {
hide: {
from: {
opacity: '1'
},
to: {
opacity: '0'
}
},
slideDownAndFade: {
from: {
opacity: '0',
transform: 'translateY(-6px)'
},
to: {
opacity: '1',
transform: 'translateY(0)'
}
},
slideLeftAndFade: {
from: {
opacity: '0',
transform: 'translateX(6px)'
},
to: {
opacity: '1',
transform: 'translateX(0)'
}
},
slideUpAndFade: {
from: {
opacity: '0',
transform: 'translateY(6px)'
},
to: {
opacity: '1',
transform: 'translateY(0)'
}
},
slideRightAndFade: {
from: {
opacity: '0',
transform: 'translateX(-6px)'
},
to: {
opacity: '1',
transform: 'translateX(0)'
}
},
accordionOpen: {
from: {
height: '0px'
},
to: {
height: 'var(--radix-accordion-content-height)'
}
},
accordionClose: {
from: {
height: 'var(--radix-accordion-content-height)'
},
to: {
height: '0px'
}
},
dialogOverlayShow: {
from: {
opacity: '0'
},
to: {
opacity: '1'
}
},
dialogContentShow: {
from: {
opacity: '0',
transform: 'translate(-50%, -45%) scale(0.95)'
},
to: {
opacity: '1',
transform: 'translate(-50%, -50%) scale(1)'
}
},
drawerSlideLeftAndFade: {
from: {
opacity: '0',
transform: 'translateX(100%)'
},
to: {
opacity: '1',
transform: 'translateX(0)'
}
},
drawerSlideRightAndFade: {
from: {
opacity: '1',
transform: 'translateX(0)'
},
to: {
opacity: '0',
transform: 'translateX(100%)'
}
},
'accordion-down': {
from: {
height: '0'
},
to: {
height: 'var(--radix-accordion-content-height)'
}
},
'accordion-up': {
from: {
height: 'var(--radix-accordion-content-height)'
},
to: {
height: '0'
}
},
marquee: {
from: {
transform: 'translateX(0)'
},
to: {
transform: 'translateX(calc(-100% - var(--gap)))'
}
},
'marquee-vertical': {
from: {
transform: 'translateY(0)'
},
to: {
transform: 'translateY(calc(-100% - var(--gap)))'
}
},
orbit: {
'0%': {
transform: 'rotate(0deg) translateY(calc(var(--radius) * 1px)) rotate(0deg)'
},
'100%': {
transform: 'rotate(360deg) translateY(calc(var(--radius) * 1px)) rotate(-360deg)'
}
},
'border-beam': {
'100%': {
'offset-distance': '100%'
}
},
shine: {
'0%': {
'background-position': '0% 0%'
},
'50%': {
'background-position': '100% 100%'
},
to: {
'background-position': '0% 0%'
}
},
meteor: {
'0%': {
transform: 'rotate(215deg) translateX(0)',
opacity: '1'
},
'70%': {
opacity: '1'
},
'100%': {
transform: 'rotate(215deg) translateX(-500px)',
opacity: '0'
}
},
'background-position-spin': {
'0%': {
backgroundPosition: 'top center'
},
'100%': {
backgroundPosition: 'bottom center'
}
},
'shiny-text': {
'0%, 90%, 100%': {
'background-position': 'calc(-100% - var(--shiny-width)) 0'
},
'30%, 60%': {
'background-position': 'calc(100% + var(--shiny-width)) 0'
}
},
gradient: {
to: {
backgroundPosition: 'var(--bg-size) 0'
}
},
rainbow: {
'0%': {
'background-position': '0%'
},
'100%': {
'background-position': '200%'
}
},
'shimmer-slide': {
to: {
transform: 'translate(calc(100cqw - 100%), 0)'
}
},
'spin-around': {
'0%': {
transform: 'translateZ(0) rotate(0)'
},
'15%, 35%': {
transform: 'translateZ(0) rotate(90deg)'
},
'65%, 85%': {
transform: 'translateZ(0) rotate(270deg)'
},
'100%': {
transform: 'translateZ(0) rotate(360deg)'
}
},
pulse: {
'0%, 100%': {
boxShadow: '0 0 0 0 var(--pulse-color)'
},
'50%': {
boxShadow: '0 0 0 8px var(--pulse-color)'
}
},
grid: {
'0%': {
transform: 'translateY(-50%)'
},
'100%': {
transform: 'translateY(0)'
}
},
ripple: {
'0%, 100%': {
transform: 'translate(-50%, -50%) scale(1)'
},
'50%': {
transform: 'translate(-50%, -50%) scale(0.9)'
}
}
},
animation: {
hide: 'hide 150ms cubic-bezier(0.16, 1, 0.3, 1)',
slideDownAndFade: 'slideDownAndFade 150ms cubic-bezier(0.16, 1, 0.3, 1)',
slideLeftAndFade: 'slideLeftAndFade 150ms cubic-bezier(0.16, 1, 0.3, 1)',
slideUpAndFade: 'slideUpAndFade 150ms cubic-bezier(0.16, 1, 0.3, 1)',
slideRightAndFade: 'slideRightAndFade 150ms cubic-bezier(0.16, 1, 0.3, 1)',
accordionOpen: 'accordionOpen 150ms cubic-bezier(0.87, 0, 0.13, 1)',
accordionClose: 'accordionClose 150ms cubic-bezier(0.87, 0, 0.13, 1)',
dialogOverlayShow: 'dialogOverlayShow 150ms cubic-bezier(0.16, 1, 0.3, 1)',
dialogContentShow: 'dialogContentShow 150ms cubic-bezier(0.16, 1, 0.3, 1)',
drawerSlideLeftAndFade: 'drawerSlideLeftAndFade 150ms cubic-bezier(0.16, 1, 0.3, 1)',
drawerSlideRightAndFade: 'drawerSlideRightAndFade 150ms ease-in',
'accordion-down': 'accordion-down 0.2s ease-out',
'accordion-up': 'accordion-up 0.2s ease-out',
marquee: 'marquee var(--duration) infinite linear',
'marquee-vertical': 'marquee-vertical var(--duration) linear infinite',
orbit: 'orbit calc(var(--duration)*1s) linear infinite',
'border-beam': 'border-beam calc(var(--duration)*1s) infinite linear',
shine: 'shine var(--duration) infinite linear',
meteor: 'meteor 5s linear infinite',
'background-position-spin': 'background-position-spin 3000ms infinite alternate',
'shiny-text': 'shiny-text 8s infinite',
gradient: 'gradient 8s linear infinite',
rainbow: 'rainbow var(--speed, 2s) infinite linear',
'shimmer-slide': 'shimmer-slide var(--speed) ease-in-out infinite alternate',
'spin-around': 'spin-around calc(var(--speed) * 2) infinite linear',
pulse: 'pulse var(--duration) ease-out infinite',
grid: 'grid 15s linear infinite',
ripple: 'ripple var(--duration,2s) ease calc(var(--i, 0)*.2s) infinite'
}
}
},
safelist: [
{
pattern:
/^(bg-(?:slate|gray|zinc|neutral|stone|red|orange|amber|yellow|lime|green|emerald|teal|cyan|sky|blue|indigo|violet|purple|fuchsia|pink|rose)-(?:50|100|200|300|400|500|600|700|800|900|950))$/,
variants: ['hover', 'ui-selected']
},
{
pattern:
/^(text-(?:slate|gray|zinc|neutral|stone|red|orange|amber|yellow|lime|green|emerald|teal|cyan|sky|blue|indigo|violet|purple|fuchsia|pink|rose)-(?:50|100|200|300|400|500|600|700|800|900|950))$/,
variants: ['hover', 'ui-selected']
},
{
pattern:
/^(border-(?:slate|gray|zinc|neutral|stone|red|orange|amber|yellow|lime|green|emerald|teal|cyan|sky|blue|indigo|violet|purple|fuchsia|pink|rose)-(?:50|100|200|300|400|500|600|700|800|900|950))$/,
variants: ['hover', 'ui-selected']
},
{
pattern:
/^(ring-(?:slate|gray|zinc|neutral|stone|red|orange|amber|yellow|lime|green|emerald|teal|cyan|sky|blue|indigo|violet|purple|fuchsia|pink|rose)-(?:50|100|200|300|400|500|600|700|800|900|950))$/
},
{
pattern:
/^(stroke-(?:slate|gray|zinc|neutral|stone|red|orange|amber|yellow|lime|green|emerald|teal|cyan|sky|blue|indigo|violet|purple|fuchsia|pink|rose)-(?:50|100|200|300|400|500|600|700|800|900|950))$/
},
{
pattern:
/^(fill-(?:slate|gray|zinc|neutral|stone|red|orange|amber|yellow|lime|green|emerald|teal|cyan|sky|blue|indigo|violet|purple|fuchsia|pink|rose)-(?:50|100|200|300|400|500|600|700|800|900|950))$/
}
],
corePlugins: {
aspectRatio: false
},
plugins: [
require('./src/generated/js/tailwindcss-extend.mjs'),
require('preline/plugin'),
require('tailwindcss-animate'),
require('tailwindcss-motion'),
require('@tailwindcss/aspect-ratio'),
require('@tailwindcss/container-queries'),
require('@tailwindcss/forms'),
<img width="1726" alt="Screenshot 2024-11-17 at 18 45 25" src="https://github.com/user-attachments/assets/2eddd36b-d6e4-4df5-ad08-d09112160c7e">
require('@tailwindcss/typography'),
require('@headlessui/tailwindcss')
]
}
export default config
```
This file has no issues and works fine.
However, when I want to upgrade shadcn ui components with the command "npx shadcn@latest add -a -y -o", it cannot parse existing tailwind.config.ts and messes up the file.
See the attached screenshot.
### Affected component/components
All
### How to reproduce
1. Install all shadcn components with "npx shadcn@latest add -a -y -o"
2. Install tremor components. Instructions: https://tremor.so/docs/getting-started/installation/next
3. Run "npx shadcn@latest add -a -y -o" again
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,665,722,615 | PowerToys | Symbols stuck in the top for a while | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce

### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Quick Accent | low | Minor |
2,665,805,641 | neovim | `:trust` puts incorrect(?) SHA-256 to trust database | ### Problem
`trust` command writes incorrect hash to trust database
### Steps to reproduce
```
echo "vim.o.exrc = true" > %LOCALAPPDATA%/nvim/init.lua
mkdir "C:\tmp\1.d"
cd "C:\tmp\1.d"
echo abc = 1 > .nvim.lua
```
open `nvim`, press `a` to trust `.nvim.lua` in current directory and exit NeoVim
```
cat %LOCALAPPDATA%/nvim-data/trust
65ccf3b8a852603a341b039e58b211d1311b6b7d20df7d904def399171aba85b C:\tmp\1.d\.nvim.lua
@rem C:\Program Files\Git\usr\bin\sha256sum.exe is used here
sha256sum -t .nvim.lua
ee550c37003c2b228311e0148c5edf8a6de34c669b48dd987fca06eebff1c489 .nvim.lua
```
open `nvim .nvim.lua`, add one empty line at the end of the file, execute `:w | trust | q`
```
cat %LOCALAPPDATA%/nvim-data/trust
9acde475de7c19b8d2725504270568a0c4a08e1243b2b60c9d7b6bd18956e636 C:\tmp\1.d\.nvim.lua
sha256sum -t .nvim.lua
9acde475de7c19b8d2725504270568a0c4a08e1243b2b60c9d7b6bd18956e636 .nvim.lua
```
(!) open `nvim .nvim.lua`, **it complains about file not being trusted**; press a(llow) and exit `nvim`.
```
cat %LOCALAPPDATA%/nvim-data/trust
84c32a397500f69ffe7e452fa0be8e019ad431466fa3e08f77f243abd216d083 C:\tmp\1.d\.nvim.lua
sha256sum -t .nvim.lua
9acde475de7c19b8d2725504270568a0c4a08e1243b2b60c9d7b6bd18956e636 .nvim.lua
```
### Expected behavior
After `trust` command is issued for `.nvim.lua`, next time it's loaded, it's already in trust database.
### Nvim version (nvim -v)
NVIM v0.10.2, Build type: Release, LuaJIT 2.1.1713484068
### Vim (not Nvim) behaves the same?
vim doesn't have `trust` feature
### Operating system/version
Windows 11 Home
### Terminal name/version
cmd.exe, Microsoft Windows [Version 10.0.22631.4460]
### $TERM environment variable
absent
### Installation
winget | bug,platform:windows,needs:repro,filesystem | low | Minor |
2,665,818,562 | react | [React 19] Prewarm with use() broken in certain state-change situations in the parent | ## Summary
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
When testing the new prewarming, I ran across a situation where an effect / state change in the parent seems to break prewarming. This only happens with `use()`, not with `throw promise`:
https://codesandbox.io/p/sandbox/sibling-suspense-use-reproduction-lmscnl
I have not explored exactly under which circumstances this happens.
I did not find this in a real app, it happened when I was testing edge cases with React Query and noticed `useSuspenseQuery` worked as expected (`throw promise`) and `{ promise } = useQuery(...); use(promise)` did not in one case.
I am not sure this is a bug or intentional behaviour, but wanted to file it since it behaves differently with `use(promise)` and `throw promise`. | Type: Bug,React 19 | medium | Critical |
2,665,821,681 | next.js | Build error with Next.js 15 in monorepo | ### Link to the code that reproduces this issue
https://github.com/omarshehab221/DoomUI
### To Reproduce
Just clone the repo and try to build the apps/template or the apps/playground (I made sure not change anything in this one from what `bun create next-app@latest` created to test the error)
### Current vs. Expected behavior
I was trying to build the apps/template without that much change from what `bun create next-app@latest` created. I expected it to build fine specially that it passed the optimized build and the linting phases. It throws this error in the Collecting pages data phase: "Error: Minified React error #31; visit https://reactjs.org/docs/error-decoder.html?invariant=31&args[]=object%20with%20keys%20%7B%24%24typeof%2C%20type%2C%20key%2C%20ref%2C%20props%7D for the full message or use the non-minified dev environment for full errors and additional helpful warnings."
I created apps/playground and made sure not to change any of its code just to be sure the issue exists and had the same problem.
**This usually doesn't happen when I'm not working in a monorepo**
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 16272
Available CPU cores: 4
Binaries:
Node: 18.18.0
npm: 10.5.0
Yarn: 1.22.22
pnpm: 9.9.0
bun: 1.1.33
Relevant Packages:
next: 15.0.3
eslint-config-next: 15.0.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Output (export/standalone), Turbopack
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
I created apps/playground and didn't change it's code just to test this | create-next-app,bug,Output (export/standalone),Turbopack | low | Critical |
2,665,829,153 | stable-diffusion-webui | resume_download deprecation | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
resume_download deprecation
### Steps to reproduce the problem
resume_download deprecation
### What should have happened?
resume_download deprecation
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
Platform: "Windows-10-10.0.19045-SP0",
Python: "3.10.6",
Version: "v1.10.1",
### Console logs
```Shell
(.venv) PS D:\AI\stable-diffusion-webui> .\webui-user.bat
venv "D:\AI\stable-diffusion-webui\.venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
Launching Web UI with arguments: --xformers
Loading weights [6ce0161689] from D:\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\AI\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
D:\AI\stable-diffusion-webui\.venv\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 41.4s (prepare environment: 23.2s, import torch: 8.0s, import gradio: 1.8s, setup paths: 1.9s, initialize shared: 0.8s, other imports: 1.0s, load scripts: 2.5s, create ui: 0.8s, gradio launch: 1.1s).
Applying attention optimization: xformers... done.
Model loaded in 8.9s (load weights from disk: 0.9s, create model: 0.9s, apply weights to model: 6.4s, apply half(): 0.1s, apply dtype to VAE: 0.2s, calculate empty prompt: 0.2s).
```
### Additional information
_No response_ | bug-report | low | Critical |
2,665,830,807 | PowerToys | dimming screen for dark environments | ### Description of the new feature / enhancement
the feature is to add a dimming filter on the screen, i mostly use the find my mouse feature (because it dims the screen and highlight the mouse) for this purpose but as soon as i move the mouse or click any button the overlay disappears so it would be cool to add it as a separate feature
### Scenario when this would be used?
it would be so useful at night or not so bright environments as the lowest brightness of laptop screens isn't always dim enough, so it causes eye strains in dark environments
### Supporting information
you might already have this feature partially done in the find my mouse one, so it won't be hard to implement | Needs-Triage | low | Minor |
2,665,889,270 | angular | Reattaching same Angular element causes `Shadow root cannot be created on a host which already hosts a shadow tree` | ### Which @angular/* package(s) are the source of the bug?
elements
### Is this a regression?
No
### Description
I'm trying to reuse the same custom Angular element I created. I created an element class with `createCustomElement()`, added it to the list of custom elements, created the element and rendered it with `appendChild`. The element rendered and worked fine.
Then I removed the element from the DOM and rendered it again in the same place (I didn't create a new element). Now the element rendered, but lost some styles and doesn't respond to events anymore (I just added show/hide button inside there for testing).
For the main component of the element I used `ViewEncapsulation.ShadowDom`. So when the element renders for the third time (not the second!) I get the error `Shadow root cannot be created on a host which already hosts a shadow tree.`. If I set the main component to `ViewEncapsulation.Emulate` (the default), there is no error, but the element still behaves incorrectly and loses styles.
It is clear why the error occurs only on the third render: this is due to the [mechanism](https://github.com/angular/angular/blob/92f30a749d676a290f5e173760ca29f0ff85ba8c/packages/elements/src/component-factory-strategy.ts#L138) that avoids destroying the component if it was merely moved in the DOM (i.e. if `DESTROY_DELAY` has not passed between disconnection and reconnection).
It is also clear why the element does not work correctly after re-rendering. Angular Elements destroys the component completely after it is removed from the DOM (after a `DESTROY_DELAY` delay). The element can no longer be used, and re-rendering it (via the `connectedCallback` function) leads to errors.
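For illustration, here is a minimal sketch (not Angular's actual code, names are mine) of the delayed-destroy pattern described above: disconnecting schedules destruction after `DESTROY_DELAY`, reconnecting within that window cancels it, and reconnecting after the component was torn down fails — which is when `attachShadow` hits a host that already has a shadow root:

```javascript
const DESTROY_DELAY = 10; // Angular uses a short delay like this, in ms

class DelayedDestroyStrategy {
  constructor() {
    this.destroyed = false;
    this.pendingDestroy = null;
  }

  connect() {
    if (this.pendingDestroy !== null) {
      // Element was just moved in the DOM: cancel the scheduled teardown.
      clearTimeout(this.pendingDestroy);
      this.pendingDestroy = null;
      return;
    }
    if (this.destroyed) {
      // Problematic path: re-initializing a destroyed component, e.g.
      // calling attachShadow() on a host that already hosts a shadow tree.
      throw new Error("component already destroyed");
    }
  }

  disconnect() {
    this.pendingDestroy = setTimeout(() => {
      this.pendingDestroy = null;
      this.destroyed = true; // the real strategy calls componentRef.destroy()
    }, DESTROY_DELAY);
  }
}
```

Reattaching within the window works; reattaching after the teardown has run does not, matching the behavior reported above.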
I think it is strange that the element can no longer be used after it was removed from the DOM. Regular HTML elements don't behave like that. I can remove and place an element as many times as I want, and it will work correctly. If I try to place the same element twice, then by default it will be removed from the previous place and added to the new one.
I need to use the element after it is removed from the DOM, because it stores its current state (I make tabs in my application). Yes, I can bypass the removal from the DOM and use `hidden`, but I would not like to resort to such a crutch.
There are two buttons in the reproduction:
- Toggle web component — adds or removes a web component (appendChild, removeChild). It generates errors.
- Toggle — a button inside an Angular element, which is only needed to check that the component responds to events.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/thz4y7
### Please provide the exception or error you saw
```text
Uncaught NotSupportedError: Failed to execute 'attachShadow' on 'Element': Shadow root cannot be created on a host which already hosts a shadow tree.
at new ShadowDomRenderer (main.js:27398:30)
at _DomRendererFactory2.getOrCreateRenderer (main.js:27187:18)
at _DomRendererFactory2.createRenderer (main.js:27164:27)
at createRootComponentView (main.js:11226:52)
at ComponentFactory.create (main.js:11141:31)
at ComponentNgElementStrategy.initializeComponent (main.js:21773:47)
at main.js:21706:14
at _ZoneDelegate.invoke (polyfills.js:300:158)
at Object.onInvoke (main.js:10580:25)
at _ZoneDelegate.invoke (polyfills.js:300:46)
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
Angular CLI: 18.1.3
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 18.1.2
... animations, cdk, common, compiler, compiler-cli, core, forms
... material, platform-browser
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1801.3
@angular-devkit/core 18.1.3
@angular-devkit/schematics 18.1.3
@angular/build 18.1.3
@angular/cli 18.1.3
@angular/elements 18.0.0
@schematics/angular 18.1.3
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.8
```
### Anything else?
_No response_ | area: elements | low | Critical |
2,665,894,555 | PowerToys | Allow cascade menu on New+ context menu | ### Description of the new feature / enhancement
Hi!
New+ and its template feature are amazing, but in some cases it's necessary to categorize the templates.
Currently I use a prefix for each template category, but a way to create a second menu level would help a lot.
Example:
Current Format:
New+
-- Development - Java
-- Development - C
-- Development - Python
-- Video - Personal
-- Video - Social Media
-- Video - Marketing
Suggested Format:
-- Development
---- Java
---- C
---- Python
-- Videos
---- Personal
---- Social Media
---- Marketing
### Scenario when this would be used?
With prolonged use of the tool, many templates will be created, and locating them will become increasingly time-consuming and difficult.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,665,911,403 | pytorch | Grabbing dtype from NamedTuple vs Tuple for cast causes graph breaks "Unexpected type in sourceless builder." | ### 🐛 Describe the bug
It took me a while to make the reproduction for this one, since I really have no idea what is going on. It seems that using a `NamedTuple` instead of a plain tuple, and casting to a dtype grabbed from that `NamedTuple`, causes a graph break.
Interestingly, it seems to graph-break where `tensor.custom_dtype` is accessed, rather than when it is passed to the `CustomTensor` constructor. So if `tensor.custom_dtype` were put on a separate line in `create_custom_tensor`, it would error on that line.
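As a plain-Python aside (an illustration, not part of the reproducer), the two versions below differ only in how the dtype is read back out of the tuple — attribute access on the `NamedTuple` versus positional indexing, which are equivalent at runtime:

```python
from collections import namedtuple

CustomDtype = namedtuple("CustomDtype", ["dtype", "higher_dtype"])
cd = CustomDtype("float16", "float32")

# NamedTuple attribute access -- the form that graph-breaks in this report
print(cd.higher_dtype)  # float32
# Positional indexing, as in the plain-tuple version -- compiles cleanly
print(cd[1])            # float32
```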
reproducer (2 graph breaks):
```python
import torch
from collections import namedtuple

CustomDtype = namedtuple("CustomDtype", ["dtype", "higher_dtype"])


class CustomTensor(torch.Tensor):
    _data: torch.Tensor
    custom_dtype: CustomDtype

    __torch_function__ = torch._C._disabled_torch_function_impl

    __slots__ = [
        "_data",
        "custom_dtype",
    ]

    def __new__(
        cls,
        data: torch.Tensor,
        custom_dtype: CustomDtype,
    ):
        self = torch.Tensor._make_wrapper_subclass(
            cls,
            data.size(),
            strides=data.stride(),
            storage_offset=data.storage_offset(),
            dtype=custom_dtype.dtype,
            layout=data.layout,
            requires_grad=data.requires_grad,
            device=data.device,
        )
        self._data = data
        self.custom_dtype = custom_dtype
        return self

    def __tensor_flatten__(self):
        meta = {
            "custom_dtype": self.custom_dtype,
        }
        return ["_data"], meta

    @staticmethod
    def __tensor_unflatten__(inner_tensors: dict, metadata, outer_size, outer_stride):
        return CustomTensor(
            inner_tensors["_data"],
            metadata["custom_dtype"],
        )

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs={}):
        return func(*args, **kwargs)


def maybe_cast_up(tensor):
    return CustomTensor(tensor._data.to(tensor.custom_dtype.higher_dtype), tensor.custom_dtype)


def maybe_cast_down(tensor):
    return CustomTensor(tensor._data.to(tensor.custom_dtype.dtype), tensor.custom_dtype)


def create_custom_tensor(tensor):
    return CustomTensor(tensor._data, tensor.custom_dtype)


@torch.compile
def create_custom_tensor_cast(tensor):
    tensor = maybe_cast_up(tensor)
    ret = create_custom_tensor(tensor)
    return maybe_cast_down(ret)


print(torch._dynamo.explain(create_custom_tensor_cast)(CustomTensor(torch.randn(1000, dtype=torch.float16), CustomDtype(torch.float16, torch.float32))))
```
working version (0 graph breaks):
```python
import torch


class CustomTensor(torch.Tensor):
    _data: torch.Tensor
    custom_dtype: tuple

    __torch_function__ = torch._C._disabled_torch_function_impl

    __slots__ = [
        "_data",
        "custom_dtype",
    ]

    def __new__(
        cls,
        data: torch.Tensor,
        custom_dtype: tuple,
    ):
        self = torch.Tensor._make_wrapper_subclass(
            cls,
            data.size(),
            strides=data.stride(),
            storage_offset=data.storage_offset(),
            dtype=custom_dtype[0],
            layout=data.layout,
            requires_grad=data.requires_grad,
            device=data.device,
        )
        self._data = data
        self.custom_dtype = custom_dtype
        return self

    def __tensor_flatten__(self):
        meta = {
            "custom_dtype": self.custom_dtype,
        }
        return ["_data"], meta

    @staticmethod
    def __tensor_unflatten__(inner_tensors: dict, metadata, outer_size, outer_stride):
        return CustomTensor(
            inner_tensors["_data"],
            metadata["custom_dtype"],
        )

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs={}):
        return func(*args, **kwargs)


def maybe_cast_up(tensor):
    return CustomTensor(tensor._data.to(tensor.custom_dtype[1]), tensor.custom_dtype)


def maybe_cast_down(tensor):
    return CustomTensor(tensor._data.to(tensor.custom_dtype[0]), tensor.custom_dtype)


def create_custom_tensor(tensor):
    return CustomTensor(tensor._data, tensor.custom_dtype)


@torch.compile
def create_custom_tensor_cast(tensor):
    tensor = maybe_cast_up(tensor)
    ret = create_custom_tensor(tensor)
    return maybe_cast_down(ret)


print(torch._dynamo.explain(create_custom_tensor_cast)(CustomTensor(torch.randn(1000, dtype=torch.float16), (torch.float16, torch.float32))))
```
tlparse log:
[dedicated_log_torch_trace_9b3b9v03.log](https://github.com/user-attachments/files/17790752/dedicated_log_torch_trace_9b3b9v03.log)
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.30.3
Libc version: glibc-2.40
Python version: 3.11.10 (main, Sep 9 2024, 22:11:19) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.6.43-273-tkg-bore-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 21%
CPU max MHz: 5300.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.5.1
[pip3] torch-xla==2.5.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] No relevant packages
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @ezyang | triaged,oncall: pt2,module: dynamo,module: graph breaks | low | Critical |
2,666,046,007 | vscode | "Accept all changes from .." button in merge editor duplicates non-conflicting changes | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: This requires the git extension which, I think, is enabled by default.
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
```sh
$ code --version
1.95.3
f1a4fb101478ce6ec82fe9627c43efbf9e98c813
x64
```
- OS Version: `6.11.3-1-siduction-amd64`
Steps to Reproduce:
1. Extract the reproducing example in [vscodebug.tar.gz](https://github.com/user-attachments/files/17791046/vscodebug.tar.gz)
2. The archive contains one git repository called `vscodebug` and the checked out branch should be `feature`. Check out that branch if this is not the case.
3. Open `vscodebug/` in vscode and trust the contents to enable git functionality.
4. Instruct vscode to rebase the current branch (`feature`) onto `main`.
5. There should be a single conflict in `myfile`.
6. Open `myfile` (without using the merge editor) to observe one (and not two) conflicting changes at the beginning of the file: 2x `preamble` versus 1x `preamble`.
7. Now open `myfile` in the merge editor and observe that it still only counts a single conflict (top right status bar of the bottom file view) but also that it highlights the non-conflicting addition of "new text" in a similar style to that of the actual conflict.
8. Click "Accept all changes from Right".
9. Observe the addition of "new text" being duplicated despite it being exactly the same in both branches and despite it not even being counted as a conflict by vscode itself.
This behavior is highly surprising. I don't see any use for a button that performs the action that vscode performs here. Git users would likely be familiar with `--ours` or `--theirs` but those are exclusive, not additive. I propose that the button be replaced by "Resolve all conflicts with changes from Right" which would leave alone those parts that vscode correctly identified as non-conflicting.
There is an additional, related behavior of the merge editor that I have not been able to generate a minimal reproducing example for: it is sometimes the case that in situations such as in `myfile`, where `preamble` was accidentally added twice in the rebase's target branch and only once in the current branch, clicking "Accept all changes from Right" (i.e. the single addition) actually generates two additions of the line. It seems to me that vscode tries to do something smart by finding the commonalities between the branches (a single addition), but when the button is pressed it interprets it as the user's wish to add the line again, even though it is apparently already part of the code that vscode managed to merge all by itself. I was unable to reproduce this in my example, so I cannot demonstrate this easily. | bug,merge-editor | low | Critical |
2,666,060,080 | go | cmd/go: TestScript/test_json_build failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/test_json_build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731013230249521921)):
=== RUN TestScript/test_json_build
=== PAUSE TestScript/test_json_build
=== CONT TestScript/test_json_build
script_test.go:139: 2024-11-17T15:00:49Z
script_test.go:141: $WORK=C:\b\s\w\ir\x\t\cmd-go-test-413761077\tmpdir2971279244\test_json_build1653328468
script_test.go:163:
PATH=C:\b\s\w\ir\x\t\cmd-go-test-413761077\tmpdir2971279244\testbin;C:\b\s\w\ir\x\w\goroot\bin;C:\b\s\w\ir\x\w\goroot\bin;C:\b\s\w\ir\x\w\goroot\bin;C:\b\s\w\ir\cache\tools\bin;C:\b\s\w\ir\bbagent_utility_packages;C:\b\s\w\ir\bbagent_utility_packages\bin;C:\b\s\w\ir\cipd_bin_packages;C:\b\s\w\ir\cipd_bin_packages\bin;C:\b\s\w\ir\cipd_bin_packages\cpython3;C:\b\s\w\ir\cipd_bin_packages\cpython3\bin;C:\b\s\w\ir\cache\cipd_client;C:\b\s\w\ir\cache\cipd_client\bin;C:\b\s\cipd_cache\bin;C:\Program Files\OpenSSH\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Puppet Labs\Puppet\bin;C:\b\s\w\ir\cache\tools\cc\windows\gcc64\bin
USERPROFILE=/no-home
CCACHE_DISABLE=1
GOARCH=amd64
...
{"ImportPath":"m/builderror [m/builderror.test]","Action":"build-fail"}
{"Time":"2024-11-17T07:00:51.3907289-08:00","Action":"start","Package":"m/builderror"}
{"Time":"2024-11-17T07:00:51.3907289-08:00","Action":"output","Package":"m/builderror","Output":"FAIL\tm/builderror [build failed]\n"}
{"Time":"2024-11-17T07:00:51.3907289-08:00","Action":"fail","Package":"m/builderror","Elapsed":0,"FailedBuild":"m/builderror [m/builderror.test]"}
[exit status 1]
> stdout '"ImportPath":"m/builderror \[m/builderror\.test\]","Action":"build-output","Output":"# m/builderror \[m/builderror.test\]\\n"'
matched: {"ImportPath":"m/builderror [m/builderror.test]","Action":"build-output","Output":"# m/builderror [m/builderror.test]\n"}
> stdout '"ImportPath":"m/builderror \[m/builderror\.test\]","Action":"build-output","Output":"builderror/main_test.go:3:11: undefined: y\\n"'
script_test.go:163: FAIL: testdata\script\test_json_build.txt:6: stdout '"ImportPath":"m/builderror \[m/builderror\.test\]","Action":"build-output","Output":"builderror/main_test.go:3:11: undefined: y\\n"': no match for `(?m)"ImportPath":"m/builderror \[m/builderror\.test\]","Action":"build-output","Output":"builderror/main_test.go:3:11: undefined: y\\n"` in stdout
--- FAIL: TestScript/test_json_build (2.75s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,666,062,894 | godot | Script Editor: select next occurrence can't handle newlines | ### Tested versions
4.2.2, 4.3
### System information
macOS 12.7
### Issue description
`ui_text_add_selection_for_next_occurrence` throws an error when the selection includes a newline
<img width="155" alt="image" src="https://github.com/user-attachments/assets/286a0420-58df-4239-9b7c-97be0ff6e77e">
```
scene/gui/text_edit.cpp:4145 - Index p_from_column = 13 is out of bounds (text[p_from_line].length() + 1 = 13).
```
### Steps to reproduce
select something with a linebreak in the editor, then press the shortcut.
just `\n` -> error
`\n something` -> error
`something \n` -> no error, but also doesn't work
### Minimal reproduction project (MRP)
empty project | bug,topic:editor,topic:gui | low | Critical |
2,666,079,563 | langchain | ChatGoogleGenerativeAI doesn't support two system messages. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful agent. To perform multiplications use the multiply tool."),
        ("user", "Hi, my name is bob"),
        ("system", "hi Bob, nice to meet you."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)


def format_for_model(state: AgentState):
    return prompt.invoke({"messages": state["messages"]})


graph = create_react_agent(llm, tools=[multiply], state_modifier=format_for_model)

inputs = {"messages": [("user", "What's my name?")]}
for s in graph.stream(inputs, stream_mode="values"):
    message = s["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()
```
### Error Message and Stack Trace (if applicable)
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[13], line 1
----> 1 for s in graph.stream(inputs, stream_mode="values"):
2 message = s["messages"][-1]
3 if isinstance(message, tuple):
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\__init__.py:1328, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1317 # Similarly to Bulk Synchronous Parallel / Pregel model
1318 # computation proceeds in steps, while there are channel updates
1319 # channel updates from step N are only visible in step N+1
1320 # channels are guaranteed to be immutable for the duration of the step,
1321 # with channel updates applied only at the transition between steps
1322 while loop.tick(
1323 input_keys=self.input_channels,
1324 interrupt_before=interrupt_before_,
1325 interrupt_after=interrupt_after_,
1326 manager=run_manager,
1327 ):
-> 1328 for _ in runner.tick(
1329 loop.tasks.values(),
1330 timeout=self.step_timeout,
1331 retry_policy=self.retry_policy,
1332 get_waiter=get_waiter,
1333 ):
1334 # emit output
1335 yield from output()
1336 # emit output
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\runner.py:58, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
56 t = tasks[0]
57 try:
---> 58 run_with_retry(t, retry_policy)
59 self.commit(t, None)
60 except Exception as exc:
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\retry.py:29, in run_with_retry(task, retry_policy)
27 task.writes.clear()
28 # run the task
---> 29 task.proc.invoke(task.input, config)
30 # if successful, end
31 break
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\utils\runnable.py:410, in RunnableSeq.invoke(self, input, config, **kwargs)
408 context.run(_set_config_context, config)
409 if i == 0:
--> 410 input = context.run(step.invoke, input, config, **kwargs)
411 else:
412 input = context.run(step.invoke, input, config)
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\utils\runnable.py:176, in RunnableCallable.invoke(self, input, config, **kwargs)
174 context = copy_context()
175 context.run(_set_config_context, child_config)
--> 176 ret = context.run(self.func, input, **kwargs)
177 except BaseException as e:
178 run_manager.on_chain_error(e)
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\prebuilt\chat_agent_executor.py:566, in create_react_agent.<locals>.call_model(state, config)
564 def call_model(state: AgentState, config: RunnableConfig) -> AgentState:
565 _validate_chat_history(state["messages"])
--> 566 response = model_runnable.invoke(state, config)
567 has_tool_calls = isinstance(response, AIMessage) and response.tool_calls
568 all_tools_return_direct = (
569 all(call["name"] in should_return_direct for call in response.tool_calls)
570 if isinstance(response, AIMessage)
571 else False
572 )
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\runnables\base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
3022 input = context.run(step.invoke, input, config, **kwargs)
3023 else:
-> 3024 input = context.run(step.invoke, input, config)
3025 # finish the root run
3026 except BaseException as e:
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\runnables\base.py:5354, in RunnableBindingBase.invoke(self, input, config, **kwargs)
5348 def invoke(
5349 self,
5350 input: Input,
5351 config: Optional[RunnableConfig] = None,
5352 **kwargs: Optional[Any],
5353 ) -> Output:
-> 5354 return self.bound.invoke(
5355 input,
5356 self._merge_configs(config),
5357 **{**self.kwargs, **kwargs},
5358 )
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
275 def invoke(
276 self,
277 input: LanguageModelInput,
(...)
281 **kwargs: Any,
282 ) -> BaseMessage:
283 config = ensure_config(config)
284 return cast(
285 ChatGeneration,
--> 286 self.generate_prompt(
287 [self._convert_input(input)],
288 stop=stop,
289 callbacks=config.get("callbacks"),
290 tags=config.get("tags"),
291 metadata=config.get("metadata"),
292 run_name=config.get("run_name"),
293 run_id=config.pop("run_id", None),
294 **kwargs,
295 ).generations[0][0],
296 ).message
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
778 def generate_prompt(
779 self,
780 prompts: list[PromptValue],
(...)
783 **kwargs: Any,
784 ) -> LLMResult:
785 prompt_messages = [p.to_messages() for p in prompts]
--> 786 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
641 if run_managers:
642 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643 raise e
644 flattened_outputs = [
645 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
646 for res in results
647 ]
648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
630 for i, m in enumerate(messages):
631 try:
632 results.append(
--> 633 self._generate_with_cache(
634 m,
635 stop=stop,
636 run_manager=run_managers[i] if run_managers else None,
637 **kwargs,
638 )
639 )
640 except BaseException as e:
641 if run_managers:
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
849 else:
850 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851 result = self._generate(
852 messages, stop=stop, run_manager=run_manager, **kwargs
853 )
854 else:
855 result = self._generate(messages, stop=stop, **kwargs)
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:978, in ChatGoogleGenerativeAI._generate(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, cached_content, tool_choice, **kwargs)
963 def _generate(
964 self,
965 messages: List[BaseMessage],
(...)
976 **kwargs: Any,
977 ) -> ChatResult:
--> 978 request = self._prepare_request(
979 messages,
980 stop=stop,
981 tools=tools,
982 functions=functions,
983 safety_settings=safety_settings,
984 tool_config=tool_config,
985 generation_config=generation_config,
986 cached_content=cached_content or self.cached_content,
987 tool_choice=tool_choice,
988 )
989 response: GenerateContentResponse = _chat_with_retry(
990 request=request,
991 **kwargs,
992 generation_method=self.client.generate_content,
993 metadata=self.default_metadata,
994 )
995 return _response_to_result(response)
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:1208, in ChatGoogleGenerativeAI._prepare_request(self, messages, stop, tools, functions, safety_settings, tool_config, tool_choice, generation_config, cached_content)
1205 elif functions:
1206 formatted_tools = [convert_to_genai_function_declarations(functions)]
-> 1208 system_instruction, history = _parse_chat_history(
1209 messages,
1210 convert_system_message_to_human=self.convert_system_message_to_human,
1211 )
1212 if tool_choice:
1213 if not formatted_tools:
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:445, in _parse_chat_history(input_messages, convert_system_message_to_human)
432 parts = [
433 Part(
434 function_response=FunctionResponse(
(...)
442 )
443 ]
444 else:
--> 445 raise ValueError(
446 f"Unexpected message with type {type(message)} at the position {i}."
447 )
449 messages.append(Content(role=role, parts=parts))
450 return system_instruction, messages
ValueError: Unexpected message with type <class 'langchain_core.messages.system.SystemMessage'> at the position 2.
```
### Description
When running `LangGraph` agents where the agent's initial set of instructions include two system messages, and the LLM is `Gemini` (eg. `ChatGoogleGenerativeAI`), we get a `ValueError`:
> ValueError: Unexpected message with type <class 'langchain_core.messages.system.SystemMessage'> at the position 2.
This behavior doesn't occur if `ChatOpenAI` is used instead of `ChatGoogleGenerativeAI`.
My example is quite basic, but the same problem occurs in more realistic use cases such as the [Multi-agent supervisor](https://langchain-ai.github.io/langgraph/tutorials/multi_agent/agent_supervisor/) tutorial.
There the prompt template is of this form:
```python
members = ["Researcher", "Coder"]
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))
```
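As a hypothetical workaround (not from the original report, and the message representation here is a simplified stand-in for LangChain's message objects): since Gemini accepts only a single system instruction, every system message after the first could be demoted to a user turn before the prompt is sent. A minimal sketch over `(role, content)` tuples:

```python
def demote_extra_system_messages(messages):
    """Keep the first system message; turn any later ones into user turns.

    messages: list of (role, content) tuples, e.g. [("system", "..."), ...]
    """
    out, seen_system = [], False
    for role, content in messages:
        if role == "system":
            if seen_system:
                # Demote: Gemini rejects a second system message mid-history.
                out.append(("user", f"[instruction] {content}"))
                continue
            seen_system = True
        out.append((role, content))
    return out
```

Whether this preserves the intended steering effect of the second system message is a separate question; it only avoids the `ValueError`.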
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.3.5
> langsmith: 0.1.139
> langchain_chroma: 0.1.4
> langchain_experimental: 0.3.3
> langchain_google_genai: 2.0.4
> langchain_openai: 0.2.5
> langchain_text_splitters: 0.3.2
> langgraph: 0.2.44
> langserve: 0.3.0
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> chromadb: 0.5.17
> dataclasses-json: 0.6.7
> fastapi: 0.115.4
> google-generativeai: 0.8.3
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.2
> langgraph-sdk: 0.1.35
> numpy: 1.26.4
> openai: 1.53.1
> orjson: 3.10.11
> packaging: 24.1
> pillow: Installed. No version info available.
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> sse-starlette: 1.8.2
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.11.0 | 🤖:bug | low | Critical |
2,666,095,616 | godot | Internal Script Error: Opcode :0 | ### Tested versions
currently only seen in v4.2.2
### System information
Windows 10- Godot _v4.2.2-stable
### Issue description
The error happened at runtime. The output was: `internal script error, Opcode: 0 (please report)`.
I guess this must be some internal engine bug. It happened with a Timer created in the scene tree using the `await` keyword, which was being run from the `_physics_process` function.
### Steps to reproduce
```gdscript
func _shoot():
    if can_shoot:
        can_shoot = false
        var b = turret.ammo_type.instantiate()
        get_tree().root.add_child(b)
        b.global_position = muzzle.global_position
        b.global_rotation = muzzle.global_rotation - (PI/2)
        await get_tree().create_timer(0.5).timeout # <-- error line
        can_shoot = true
```
### Minimal reproduction project (MRP)
I don't know | bug,topic:gdscript | low | Critical |
2,666,178,482 | next.js | Error: Could not find the module ... in the React Client Manifest. This is probably a bug in the React Server Components bundler. | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/keen-swartz-k22cmx?file=%2Fapp%2Fpage.js%3A9%2C34
### To Reproduce
# Duplicate of https://github.com/vercel/next.js/issues/61046
### This issue is still open!
### Current vs. Expected behavior

So if I put `not-found.tsx` in the `app` folder then I see
```
Error: Could not find the module "/home/kali/Documents/GitHub/outreach-tool/app/not-found.tsx#" in the React Client Manifest. This is probably a bug in the React Server Components bundler.
```
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Kali 6.6.15-2kali1 (2024-05-17)
Binaries:
Node: 20.0.0
npm: 9.6.4
Yarn: 1.22.22
pnpm: 9.5.0
Relevant Packages:
next: 14.0.3
eslint-config-next: 15.0.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Navigation | low | Critical |
2,666,235,145 | go | x/tools/go/analysis/unitchecker: TestVetStdlib failures | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/go/analysis/unitchecker" && test == "TestVetStdlib"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731155300376311825)):
=== RUN TestVetStdlib
vet_std_test.go:101: go vet std failed (exit status 1):
# crypto/internal/fips/check
# [crypto/internal/fips/check]
../../../../goroot/src/crypto/internal/fips/check/check.go:89:3: unreachable code
--- FAIL: TestVetStdlib (319.73s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsFix,Tools | medium | Critical |
2,666,253,240 | pytorch | [2.5.0+] Running a compiled model under FlopCounterMode regresses its performance | ### 🐛 Describe the bug
Invoking a compiled model under FlopCounterMode context results in a slower compiled model.
If we run our benchmark _before_ the model is instrumented with FlopCounterMode, then `ms_per_iter` records a decent latency, benefitting from compilation.
If we run our benchmark _after_ the model is instrumented with FlopCounterMode, then `ms_per_iter` records a slower time, equal to a non-compiled model.
Here's a pseudocode illustration of what I mean:
```diff
from typing import Callable
from torch.utils.flop_counter import FlopCounterMode
from triton.testing import do_bench
def get_flops(f: Callable[[], None]) -> None:
+ ms_per_iter: float = do_bench(f)
flop_counter = FlopCounterMode(display=True)
with flop_counter:
f()
- ms_per_iter: float = do_bench(f)
flop_count: int = flop_counter.get_total_flops()
iters_per_second = 1e3/ms_per_iter
flops: float = iters_per_second * flop_count
print(f"{flops / 1e12} TF/s")
model: Callable[[LongTensor], FloatTensor]
model_c = torch.compile(model)
input_ids = torch.ones(8, 512, dtype=torch.long, device='cuda')
get_flops(lambda: model_c(input_ids))
```
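The FLOP/s arithmetic in `get_flops` is worth separating from the ordering question; with hypothetical measurements it reduces to:

```python
# Hypothetical measurements, not taken from the repro above.
ms_per_iter = 2.0             # do_bench result: milliseconds per iteration
flop_count = 365_072_000_000  # FlopCounterMode.get_total_flops()

iters_per_second = 1e3 / ms_per_iter   # 1000 ms in a second
flops = iters_per_second * flop_count  # FLOPs executed per second
print(f"{flops / 1e12} TF/s")          # → 182.536 TF/s
```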
I shared this finding last month on Twitter:
https://x.com/Birchlabs/status/1847369302976188819
I noticed this problem when I upgraded from torch 2.4.x to 2.5.0.
As in, I had to modify my script (benchmark first, count flops later) to get fast FLOP/s on 2.5.0.
I've created a repro:
https://gist.github.com/Birch-san/b661d5e6812559280438a43ae4ff89ff
Fast mode:
```bash
python -m scripts.bench_repro --ckpt xxl
```
```
benchmarking SDPA...
Module FLOP % Total
------------------------------------------- -------- ---------
SDPAAttn 365.072B 100.00%
- aten.mm 343.597B 94.12%
- aten._scaled_dot_product_cudnn_attention 21.475B 5.88%
SDPAAttn.o_proj 85.899B 23.53%
- aten.mm 85.899B 23.53%
SDPAAttn.qkv_proj 257.698B 70.59%
- aten.mm 257.698B 70.59%
370.9 TFLOP/s
benchmarking SDPA (compiled)...
Module FLOP % Total
-------------------------------------------- -------- ---------
OptimizedModule 365.072B 100.00%
- aten.mm 343.597B 94.12%
- aten._scaled_dot_product_cudnn_attention 21.475B 5.88%
OptimizedModule._orig_mod 365.072B 100.00%
- aten.mm 343.597B 94.12%
- aten._scaled_dot_product_cudnn_attention 21.475B 5.88%
450.8 TFLOP/s
```
Slow mode (reproduces torch 2.5.0+ bug):
```bash
python -m scripts.bench_repro --ckpt xxl --count-flops-early
```
```
benchmarking SDPA...
Module FLOP % Total
------------------------------------------- -------- ---------
SDPAAttn 365.072B 100.00%
- aten.mm 343.597B 94.12%
- aten._scaled_dot_product_cudnn_attention 21.475B 5.88%
SDPAAttn.o_proj 85.899B 23.53%
- aten.mm 85.899B 23.53%
SDPAAttn.qkv_proj 257.698B 70.59%
- aten.mm 257.698B 70.59%
371.6 TFLOP/s
benchmarking SDPA (compiled)...
Module FLOP % Total
-------------------------------------------- -------- ---------
OptimizedModule 365.072B 100.00%
- aten.mm 343.597B 94.12%
- aten._scaled_dot_product_cudnn_attention 21.475B 5.88%
OptimizedModule._orig_mod 365.072B 100.00%
- aten.mm 343.597B 94.12%
- aten._scaled_dot_product_cudnn_attention 21.475B 5.88%
372.2 TFLOP/s
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.2.dev0+torch250cu126
[pip3] numpy==1.26.4
[pip3] open_clip_torch==2.29.0
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.5.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @Chillee @ezyang @zou3519 @albanD @samdow @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,module: __torch_dispatch__,oncall: pt2,module: dynamo | low | Critical |
2,666,254,702 | kubernetes | kube-proxy nftables test are flaky | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20nftables,%20master
https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20nftables,%20IPv6,%20master
### Which tests are flaking?
Seems to impact tests randomly
### Since when has it been flaking?
15-11-2024
### Testgrid link
_No response_
### Reason for failure (if possible)
Checking at https://storage.googleapis.com/kubernetes-ci-logs/logs/ci-kubernetes-kind-network-nftables/1858001043166597120/artifacts/kind-worker/pods/kube-system_kube-proxy-tbpmz_fdcd393e-47df-4afe-a88e-27eaa918f570/kube-proxy/0.log
it seems there is some contention on the system
```
2024-11-17T04:51:06.117022547Z stderr F E1117 04:51:06.116486 1 proxier.go:1210] "Unable to delete stale chains; will retry later" err=<
2024-11-17T04:51:06.117050539Z stderr F /dev/stdin:2:28-104: Error: Could not process rule: Device or resource busy
2024-11-17T04:51:06.117056524Z stderr F delete chain ip kube-proxy endpoint-LR2XJHKW-services-4884/service-proxy-toggled/tcp/__10.244.1.180/9376
2024-11-17T04:51:06.117062922Z stderr F ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-17T04:51:06.117070369Z stderr F /dev/stdin:51:28-109: Error: Could not process rule: Device or resource busy
2024-11-17T04:51:06.117075351Z stderr F delete chain ip kube-proxy endpoint-23XNSDHO-nettest-2983/session-affinity-service/udp/udp__10.244.1.119/8081
2024-11-17T04:51:06.117079767Z stderr F ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-17T04:51:06.117084441Z stderr F /dev/stdin:57:28-108: Error: Could not process rule: Device or resource busy
2024-11-17T04:51:06.117088698Z stderr F delete chain ip kube-proxy endpoint-CLRXJGC4-services-8162/affinity-clusterip-timeout/tcp/__10.244.1.97/9376
2024-11-17T04:51:06.117092772Z stderr F ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-17T04:51:06.117096629Z stderr F /dev/stdin:58:28-104: Error: Could not process rule: Device or resource busy
2024-11-17T04:51:06.117100766Z stderr F delete chain ip kube-proxy endpoint-O34ANCIM-services-4884/service-proxy-toggled/tcp/__10.244.1.181/9376
2024-11-17T04:51:06.117122875Z stderr F
```
it seems to be present in multiple jobs, e.g. https://storage.googleapis.com/kubernetes-ci-logs/logs/ci-kubernetes-kind-network-nftables/1857638650775343104/artifacts/kind-worker/pods/kube-system_kube-proxy-mcfxw_41e1cb46-c2e5-440c-8174-246253f0def2/kube-proxy/0.log ; most probably, on some of them, reconciling solves the problem
```
2024-11-16T04:31:53.06971039Z stderr F I1116 04:31:53.068065 1 proxier.go:1174] "Syncing nftables rules" ipFamily="IPv4"
2024-11-16T04:31:53.069715193Z stderr F I1116 04:31:53.068092 1 proxier.go:1204] "Deleting stale nftables chains" ipFamily="IPv4" numChains=4
2024-11-16T04:31:53.126729378Z stderr F E1116 04:31:53.125738 1 proxier.go:1210] "Unable to delete stale chains; will retry later" err=<
2024-11-16T04:31:53.12702478Z stderr F /dev/stdin:2:28-100: Error: Could not process rule: Device or resource busy
2024-11-16T04:31:53.12703517Z stderr F delete chain ip kube-proxy endpoint-YHMEV5IX-services-5210/externalip-test/tcp/http__10.244.1.6/9376
2024-11-16T04:31:53.127040928Z stderr F ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-16T04:31:53.127046499Z stderr F > ipFamily="IPv4"
```
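The "will retry later" behavior in these logs can be sketched as a generic reconcile-and-retry loop. This is only an illustration of the pattern in Python, not kube-proxy's actual Go implementation; the chain names and the `try_delete` callback are hypothetical:

```python
import time

def delete_stale_chains(chains, try_delete, max_attempts=3, delay_s=0.1):
    """Retry deleting chains that are still referenced (EBUSY-style failures)."""
    remaining = list(chains)
    for _ in range(max_attempts):
        # try_delete returns False for "Device or resource busy".
        remaining = [c for c in remaining if not try_delete(c)]
        if not remaining:
            break
        time.sleep(delay_s)
    return remaining  # chains we could not delete; retried on the next sync
```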
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig network | priority/important-soon,sig/network,kind/flake,triage/accepted | medium | Critical |
2,666,311,547 | go | build: build failure on gotip-linux-riscv64 | ```
#!watchflakes
default <- builder == "gotip-linux-riscv64" && repo == "go" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731005195353882721)):
[I2024-11-17T17:51:33.134407Z 170175 0 sink.go:277] SinkServer: warm-up started
[I2024-11-17T17:51:33.134553Z 170175 0 sink.go:350] SinkServer: starting HTTP server...
[I2024-11-17T17:51:33.141372Z 170175 0 sink.go:282] SinkServer: warm-up ended
[I2024-11-17T17:51:33.142298Z 170175 0 cmd_stream.go:492] rdb-stream: starting the test command - ["/home/swarming/.swarming/w/ir/cache/tools/bin/result_adapter" "go" "-v=false" "-dump-json" "/home/swarming/.swarming/w/ir/x/w/dist.testjson" "--" "/home/swarming/.swarming/w/ir/x/w/goroot/bin/go" "tool" "dist" "test" "-json"]
2024/11/17 18:35:37 Failed: exit status 1
go tool dist: FAILED
ok archive/tar 1.348s
ok archive/zip 3.709s
ok bufio 0.529s
ok bytes 2.022s
...
[W2024-11-17T18:36:35.705544Z 170175 0 cmd_stream.go:504] rdb-stream: test process exited with error: exit status 123
[I2024-11-17T18:36:35.705749Z 170175 0 cmd_stream.go:488] rdb-stream: the test process terminated
[I2024-11-17T18:36:35.706118Z 170175 0 sink.go:375] SinkServer: shutdown started
[I2024-11-17T18:36:35.706381Z 170175 0 sink.go:353] SinkServer: HTTP server stopped with "http: Server closed"
[I2024-11-17T18:36:35.706464Z 170175 0 sink_server.go:96] SinkServer: draining TestResult channel started
[I2024-11-17T18:36:35.706558Z 170175 0 sink_server.go:98] SinkServer: draining TestResult channel ended
[I2024-11-17T18:36:35.706609Z 170175 0 sink_server.go:100] SinkServer: draining Artifact channel started
[I2024-11-17T18:36:35.706737Z 170175 0 sink_server.go:102] SinkServer: draining Artifact channel ended
[I2024-11-17T18:36:35.706810Z 170175 0 sink.go:378] SinkServer: shutdown completed successfully
[I2024-11-17T18:36:35.706903Z 170175 0 cmd_stream.go:420] rdb-stream: exiting with 123
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,666,322,707 | next.js | Route "/" used `crypto.randomUUID()` outside of `"use cache"` and without explicitly calling `await connection()` beforehand. | ### Link to the code that reproduces this issue
https://github.com/TheCukitoDev/blog
### To Reproduce
1. Fill the env variables
2. Run pnpm run dev:turbo
3. Open localhost:3000
### Current vs. Expected behavior
I think this shouldn't happen, because I don't use `crypto.randomUUID()` anywhere in my project, at least not in my own code. I don't know whether it happens inside `node_modules`, but I suspect it does.
It should work well. I only fetch some Cosmos DB data...
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 16088
Available CPU cores: 16
Binaries:
Node: 20.15.1
npm: 10.7.0
Yarn: N/A
pnpm: 9.12.3
Relevant Packages:
next: 15.0.4-canary.14 // There is a newer canary version (15.0.4-canary.15) available, please upgrade!
eslint-config-next: 15.0.3
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I tested it in Next.js 15.0.4-canary.14 (Turbopack), 15.0.4-canary.14 (Webpack), 15.0.4-canary.15 (Turbopack) and 15.0.4-canary.15 (Webpack). The error is the same within the Turbopack builds and within the Webpack builds, but differs between Turbopack and Webpack. I think the error comes from Sentry, because I don't receive the errors in Sentry, but it worked before, so I don't know what happened | bug,dynamicIO | low | Critical |
2,666,393,137 | PowerToys | ImageResizer - Option to Select Encoder when Resizing | ### Description of the new feature / enhancement
Would it be possible to add an option to select an encoder to be used? For example, when resizing an iOS .HEIC file, I would like to convert it to .JPEG. This would be an option beyond just a fallback. Perhaps the presets could also have an encoder option.
### Scenario when this would be used?
There are times when products and services do not support .HEIC, so I have to create a PNG or JPEG prior to uploading.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,666,405,866 | godot | Minimum window size of the editor is too large, especially for standard 1920x1080 monitors | ### Tested versions
- Reproducible in v4.3.stable.mono [77dcf97d8].
### System information
Godot v4.3.stable.mono (77dcf97d8) - cpe:/o:nixos:nixos:25.05 #1-NixOS SMP PREEMPT_DYNAMIC Thu Nov 14 12:19:41 UTC 2024 - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 550 / 550 Series (RADV POLARIS12) - 12th Gen Intel(R) Core(TM) i7-12700F (20 Threads)
### Issue description
On Gnome, you cannot tile Godot's window with the built-in edge tiling feature if your monitor's size is too small. This is because when you tile a window, Gnome needs to resize the window to either the left or right half portion of your screen. If the window can't be resized to fit half the width of your screen, Gnome will simply not tile it.
Godot currently hard-codes the minimum window size to an unusually large 1024x876, which means it is impossible to tile on a standard 1920x1080 monitor. This seriously hurts usability, like when you want to tile Godot and your IDE side-by-side.
### Steps to reproduce
1. Attempt to tile Godot in a Gnome environment, by dragging the window to either the left or right edge of your screen.
### Minimal reproduction project (MRP)
N/A | discussion,topic:editor,usability | low | Minor |
2,666,434,774 | react | [Compiler Bug]: False positive local mutation warning inside filter | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [x] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEASggIZyEBmMEGBA5DGRfQDoB2HCAHjvgZSjsKaCOwIBhWjnYJ2eABTA0eBBjABfAJQFgHAgThiwhE1EqUCAXmLM8AOihgEAWTUQFCnVYB8u-QYEADYIhGhgUhBB1gKkQc4BBkbsJgSQGAgAkqrqMSpqYImB9pRoQaowCnCkTgjWfnrigc1ollU1ziUQEDqNzf0E4ZHRNngwUAhFzRpTgUx4sOJjE7PaHEWtBArpWTlg9iHsAOZ4ABYEfgAMvbPzi7o72QUANIMR3UEzTQZfP68A2vl1ABdLQBO4wcQAHgAJmgAG4EAD0Pg4XxAGiAA
### Repro steps
```js
import React from 'react'
export function Component({items}) {
const stuff = React.useMemo(() => {
let isCool = false
const someItems = items
.filter(cause => {
if (cause.foo) {
isCool = true
}
return true
})
if (someItems.length > 0) {
return {someItems, isCool}
}
}, [items])
return <div />
}
```
Getting:
```
InvalidReact: Reassigning a variable after render has completed can cause inconsistent behavior on subsequent renders. Consider using state instead. Variable `isCool` cannot be reassigned after render (9:9)
```
But it's not used after render. Curiously, changing the `if` condition to something that doesn't refer to `someItems` helps.
### How often does this bug happen?
Every time
### What version of React are you using?
18.2.0
### What version of React Compiler are you using?
19.0.0-beta-a7bf2bd-20241110 | Type: Bug,Component: Optimizing Compiler | low | Critical |
2,666,485,175 | PowerToys | clipboard history button | ### Description of the new feature / enhancement
Hi, could you please include a clipboard history button so that there is an alternative to "Windows key + V"? I don't use the feature regularly because I keep forgetting about it! A button would make it really convenient, especially with Advanced Paste!
Also, while you are able to "always on top" the Advanced Paste menu, you are not able to keep clipboard history open over other windows. I'm not sure if this could be changed to allow "always on top".
The ideal would be to combine the two menus into one (see the picture below), but if that's not feasible, the clipboard history button would be awesome!
Thank you so much guys. As someone who has been exclusively a user (this is my first time using GitHub), we all appreciate your hard work and amazing features. There are a lot of us who haven't engaged but really benefit from and admire the work.
-Mo

### Scenario when this would be used?
This would be great for writing papers where you need paste all the references you've gathered.
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response | low | Minor |
2,666,491,410 | PowerToys | Windows Power Modes from PowerToys Run | ### Description of the new feature / enhancement
Searching PowerToys Run for any of the terms `power mode`, `best performance`, `balanced`, or `best power efficiency` would set the current Windows 11 power mode (the _Plugged in_ mode if computer is currently plugged in, and the _On battery_ mode if it's currently on battery).
### Scenario when this would be used?
I like my computer running efficiently for the majority of web browsing, emails, etc. that I do. But when playing a video game or running heavy applications, I like to be able to temporarily change power mode to _Balanced_ or even _Best Performance_.
### Supporting information
The power mode slider disappeared from Windows 10 quick access as described more fully [here on Reddit](https://www.reddit.com/r/Windows11/comments/ocwd6a/power_mode_slider_is_gone_from_windows_11s_quick/). Now one has to open _Settings→Power & Battery→Power mode_, then select the dropdown for either _Plugged in_ or _On battery_, and then select between _Best Power Efficency_, _Balanced_, or _Best Performance_. This takes a lot of clicks.
Searching for `power mode` in the Windows 11 Start Menu does bring me to the proper settings dropdown for now. But I don't really trust that to method to work consistently.

| Needs-Triage | low | Major |
2,666,521,199 | pytorch | TypeError: Type parameter +RV without a default follows type parameter with a default in _inductor/utils.py | ### 🐛 Describe the bug
env:
python 3.9,
torch.__version__ == '2.5.1' (also observed in 2.4.1)
when importing transformers and torch, I get an error triggered by a typing problem in this line:
`class CachedMethod(Protocol, Generic[P, RV]):`
https://github.com/pytorch/pytorch/blob/99014a297c179862af38ee86bac2051434d3db41/torch/_inductor/utils.py#L459C1-L459C46
`TypeError: Type parameter +RV without a default follows type parameter with a default`
suggested solution:
replace `class CachedMethod(Protocol, Generic[P, RV]):` with `class CachedMethod(Protocol, Generic[RV, P]):`
in https://github.com/pytorch/pytorch/blob/99014a297c179862af38ee86bac2051434d3db41/torch/_inductor/utils.py#L459C1-L459C46
possibly related to https://github.com/pytorch/pytorch/pull/127685
ERROR STACK BELOW:
```
File "/conda/envs/main/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1778, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/conda/envs/main/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/conda/envs/main/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 40, in <module>
from ...modeling_utils import PreTrainedModel, SequenceSummary
File "/conda/envs/main/lib/python3.9/site-packages/transformers/modeling_utils.py", line 48, in <module>
from .loss.loss_utils import LOSS_MAPPING
File "/conda/envs/main/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 19, in <module>
from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
File "/conda/envs/main/lib/python3.9/site-packages/transformers/loss/loss_deformable_detr.py", line 4, in <module>
from ..image_transforms import center_to_corners_format
File "/conda/envs/main/lib/python3.9/site-packages/transformers/image_transforms.py", line 22, in <module>
from .image_utils import (
File "/conda/envs/main/lib/python3.9/site-packages/transformers/image_utils.py", line 58, in <module>
from torchvision.transforms import InterpolationMode
File "/conda/envs/main/lib/python3.9/site-packages/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
File "/conda/envs/main/lib/python3.9/site-packages/torchvision/models/__init__.py", line 2, in <module>
from .convnext import *
File "/conda/envs/main/lib/python3.9/site-packages/torchvision/models/convnext.py", line 8, in <module>
from ..ops.misc import Conv2dNormActivation, Permute
File "/conda/envs/main/lib/python3.9/site-packages/torchvision/ops/__init__.py", line 23, in <module>
from .poolers import MultiScaleRoIAlign
File "/conda/envs/main/lib/python3.9/site-packages/torchvision/ops/poolers.py", line 10, in <module>
from .roi_align import roi_align
File "/conda/envs/main/lib/python3.9/site-packages/torchvision/ops/roi_align.py", line 7, in <module>
from torch._dynamo.utils import is_compile_supported
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/__init__.py", line 39, in <module>
from .polyfills import loader as _ # usort: skip # noqa: F401
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/polyfills/loader.py", line 22, in <module>
POLYFILLED_MODULES: Tuple["ModuleType", ...] = tuple(
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/polyfills/loader.py", line 23, in <genexpr>
importlib.import_module(f".{submodule}", package=polyfills.__name__)
File "/conda/envs/main/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/polyfills/builtins.py", line 24, in <module>
def all(iterable: Iterable[object], /) -> bool:
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/decorators.py", line 312, in wrapper
rule_map: Dict[Any, Type[VariableTracker]] = get_torch_obj_rule_map()
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2860, in get_torch_obj_rule_map
obj = load_object(k)
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2891, in load_object
val = _load_obj_from_str(x[0])
File "/conda/envs/main/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2875, in _load_obj_from_str
return getattr(importlib.import_module(module), obj_name)
File "/conda/envs/main/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/conda/envs/main/lib/python3.9/site-packages/torch/_higher_order_ops/map.py", line 6, in <module>
from torch._functorch.aot_autograd import AOTConfig, create_joint
File "/conda/envs/main/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 20, in <module>
from torch._inductor.utils import BoxedBool
File "/conda/envs/main/lib/python3.9/site-packages/torch/_inductor/utils.py", line 456, in <module>
class CachedMethod(Protocol, Generic[P, RV]):
File "/conda/envs/main/lib/python3.9/typing.py", line 277, in inner
return func(*args, **kwds)
File "/conda/envs/main/lib/python3.9/typing.py", line 1005, in __class_getitem__
return _GenericAlias(cls, params)
File "/conda/envs/main/lib/python3.9/typing.py", line 746, in __init__
self.__parameters__ = _collect_type_vars(params)
File "/conda/envs/main/lib/python3.9/site-packages/yt/packages/typing_extensions.py", line 3019, in _collect_type_vars
raise TypeError(f'Type parameter {t!r} without a default'
TypeError: Type parameter +RV without a default follows type parameter with a default
```
### Versions
python 3.9
torch.__version__='2.5.1'
cc @ezyang @malfet @xuzhao9 @gramster @chauhang @penguinwu | module: typing,triaged,actionable,oncall: pt2 | low | Critical |
2,666,545,031 | tauri | [feat] data-tauri-drag-region should handle touch controls | ### Describe the problem
Currently, `data-tauri-drag-region` does not work with touch controls. On Windows 11 touch controls do not work at all. On Linux (X11/Plasma 6) the drag gesture does not work and instead the window jumps when tapping on the region and then on another position on the screen.
### Describe the solution you'd like
`data-tauri-drag-region` should handle touch controls in a similar way the platform's native title bar does.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,666,574,749 | rust | ICE: `Encountered anon const with inference variable args but no error reported` | Unfortunately, I can't share the code that caused this ICE, and have not (yet) been able to minimize it into something I can share.
### Meta
Reproduce with:
```bash
cargo bisect-rustc --start=2024-11-09 --end=2024-11-16 -- build -p fingerprint
```
Regression in https://github.com/rust-lang-ci/rust/commit/5380d568a18ae118bdb902b293e2df2cd7ab1dd7
The PR introducing the regression in this rollup is #132927: Consolidate type system const evaluation under `traits::eva…
```
searched nightlies: from nightly-2024-11-09 to nightly-2024-11-16
regressed nightly: nightly-2024-11-13
searched commit range: https://github.com/rust-lang/rust/compare/81eef2d362a6f03db6f8928f82d94298d31eb81b...f7273e0044ad8f35ad27282e4ab776af50b61a54
regressed commit: https://github.com/rust-lang/rust/commit/5700240affd222f69b8755e2ff5d4ccaae9e6cf9
```
### Error output
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: Encountered anon const with inference variable args but no error reported
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error: internal compiler error: Encountered anon const with inference variable args but no error reported
|
= note: delayed at compiler/rustc_trait_selection/src/traits/mod.rs:592:27
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::delayed_bug::<&str>
4: rustc_trait_selection::traits::try_evaluate_const.cold
5: <rustc_trait_selection::traits::normalize::AssocTypeNormalizer as rustc_type_ir::fold::TypeFolder<rustc_middle::ty::context::TyCtxt>>::fold_const
6: <rustc_middle::ty::InstantiatedPredicates as rustc_type_ir::fold::TypeFoldable<rustc_middle::ty::context::TyCtxt>>::try_fold_with::<rustc_trait_selection::traits::normalize::AssocTypeNormalizer>
7: <rustc_hir_typeck::fn_ctxt::FnCtxt>::instantiate_value_path
8: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_path
9: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
10: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
11: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_block
12: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
13: rustc_hir_typeck::check::check_fn
14: rustc_hir_typeck::typeck
15: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
16: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
17: rustc_query_impl::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
18: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
19: rustc_hir_analysis::check_crate
20: rustc_interface::passes::run_required_analyses
21: rustc_interface::passes::analysis
22: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
23: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
24: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
25: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
26: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
27: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
28: std::sys::pal::unix::thread::Thread::new::thread_start
29: start_thread
at ./nptl/pthread_create.c:447:8
30: clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
error: internal compiler error: Encountered anon const with inference variable args but no error reported
|
= note: delayed at compiler/rustc_trait_selection/src/traits/mod.rs:592:27
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::delayed_bug::<&str>
4: rustc_trait_selection::traits::try_evaluate_const.cold
5: <rustc_trait_selection::traits::normalize::AssocTypeNormalizer as rustc_type_ir::fold::TypeFolder<rustc_middle::ty::context::TyCtxt>>::fold_const
6: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
7: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection::traits::fulfill::FulfillProcessor>
8: <rustc_hir_typeck::fn_ctxt::FnCtxt>::structurally_resolve_type
9: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
10: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_block
11: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
12: rustc_hir_typeck::check::check_fn
13: rustc_hir_typeck::typeck
14: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
15: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
16: rustc_query_impl::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
17: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
18: rustc_hir_analysis::check_crate
19: rustc_interface::passes::run_required_analyses
20: rustc_interface::passes::analysis
21: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
22: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
23: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
24: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
25: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
26: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
27: std::sys::pal::unix::thread::Thread::new::thread_start
28: start_thread
at ./nptl/pthread_create.c:447:8
29: clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
error: internal compiler error: Encountered anon const with inference variable args but no error reported
|
= note: delayed at compiler/rustc_trait_selection/src/traits/mod.rs:592:27
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::delayed_bug::<&str>
4: rustc_trait_selection::traits::try_evaluate_const.cold
5: <rustc_trait_selection::traits::normalize::AssocTypeNormalizer as rustc_type_ir::fold::TypeFolder<rustc_middle::ty::context::TyCtxt>>::fold_const
6: <rustc_middle::ty::predicate::Predicate as rustc_type_ir::fold::TypeSuperFoldable<rustc_middle::ty::context::TyCtxt>>::try_super_fold_with::<rustc_trait_selection::traits::normalize::AssocTypeNormalizer>
7: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
8: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection::traits::fulfill::FulfillProcessor>
9: <rustc_hir_typeck::fn_ctxt::FnCtxt>::structurally_resolve_type
10: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
11: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_block
12: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
13: rustc_hir_typeck::check::check_fn
14: rustc_hir_typeck::typeck
15: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
16: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
17: rustc_query_impl::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
18: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
19: rustc_hir_analysis::check_crate
20: rustc_interface::passes::run_required_analyses
21: rustc_interface::passes::analysis
22: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
23: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
24: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
25: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
26: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
27: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
28: std::sys::pal::unix::thread::Thread::new::thread_start
29: start_thread
at ./nptl/pthread_create.c:447:8
30: clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
error: internal compiler error: Encountered anon const with inference variable args but no error reported
|
= note: delayed at compiler/rustc_trait_selection/src/traits/mod.rs:592:27
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::delayed_bug::<&str>
4: rustc_trait_selection::traits::try_evaluate_const.cold
5: rustc_trait_selection::traits::const_evaluatable::is_const_evaluatable.cold
6: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
7: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection::traits::fulfill::FulfillProcessor>
8: <rustc_hir_typeck::fn_ctxt::FnCtxt>::structurally_resolve_type
9: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
10: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_block
11: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
12: rustc_hir_typeck::check::check_fn
13: rustc_hir_typeck::typeck
14: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
15: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
16: rustc_query_impl::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
17: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
18: rustc_hir_analysis::check_crate
19: rustc_interface::passes::run_required_analyses
20: rustc_interface::passes::analysis
21: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
22: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
23: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
24: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
25: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
26: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
27: std::sys::pal::unix::thread::Thread::new::thread_start
28: start_thread
at ./nptl/pthread_create.c:447:8
29: clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/yotam/taklit-rs-1/rustc-ice-2024-11-17T21_38_08-674929.txt` to your bug report
note: compiler flags: -Z unstable-options --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED] -C link-arg=-fuse-ld=lld -Z next-solver=coherence
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: could not compile `fingerprint` (lib)
```
</p>
</details>
| I-ICE,T-compiler,C-bug,T-types,S-needs-repro | low | Critical |
2,666,581,995 | rust | How should we handle SPARC's vector ABI? | This concerns SPARC and its `vis`, AKA the [Visual Instruction Set][vis-wiki]. Quoting from @taiki-e:
> AFAIK it's at least 64-bit.
> - [GCC's SPARC VIS builtins provides `vector_size (8)` (64-bit) and `vector_size (4)` (32-bit) vectors](https://gcc.gnu.org/onlinedocs/gcc/SPARC-VIS-Built-in-Functions.html).
> - According to [calling conventions implemented by GCC](https://github.com/gcc-mirror/gcc/blob/730f28b081bea4a749f9b82902446731ec8faa93/gcc/config/sparc/sparc.cc#L6620-L6690):
> - On SPARC32: 64-bit or smaller vector integer is passed using int reg (argument) / FP reg (return value)
> - On SPARC64: 128-bit(argument)/256-bit(return value) or smaller vector integer/float are passed using FP reg
>
> SPARC FP registers (f[0-63]) are 32-bit long, and two/four of these are combined to process f64/f128. 64-bit VIS vectors also use two FP registers, as does f64.
>
> 128-bit/256-bit vectors are also passed or returned using four/eight FP registers.
> https://github.com/gcc-mirror/gcc/blob/730f28b081bea4a749f9b82902446731ec8faa93/gcc/config/sparc/sparc.cc#L7388
>
> In any case, LLVM doesn't currently support Vector ABI (https://github.com/llvm/llvm-project/issues/45418), so it seems that using vlen(0) in the lint is correct for now.
>
> SPARC's Vector ABI is defined based on the existing float and aggregate calling convention, not the VIS ISA [^1], and changing it without a new ABI would also break other non-vector arguments due to the nature of using FP registers. So, I don't believe it can be changed without a new ABI. (This is very different from the x86_64, which extended the ISA in the form of increasing the size of the vector registers.)
See also:
- https://github.com/rust-lang/rust/pull/132842#discussion_r1835660858
- https://github.com/rust-lang/rust/issues/131800#issuecomment-2474486443
[vis-wiki]: https://en.wikipedia.org/wiki/Visual_Instruction_Set | T-compiler,A-SIMD,O-SPARC,A-ABI,E-needs-design | low | Minor |
2,666,585,465 | storybook | [Bug]: Error when using `+page.svelte` files as `component` in stories | ### Describe the bug
When using Svelte 5 and SvelteKit, there's an error thrown for any story that uses a `+page.svelte` route as the meta `component`:
```
ReferenceError: Page is not defined
at http://localhost:6007/src/routes/+page.svelte:33:27
```
This happens because the Svelte Docgen plugin calculates the component name incorrectly: it follows Svelte 4's component-name algorithm, which changed slightly in Svelte 5, especially for filenames with special characters like `+`.
If we add the Vite Inspect plugin to the reproduction, we'll see that the `.__docgen` property is added to the wrong function name:

The current (broken) algorithm for calculating the component name is here:
https://github.com/storybookjs/storybook/blob/7e43ecd708d5d76f7473df35b89cf5728b2c5a7d/code/frameworks/svelte-vite/src/plugins/svelte-docgen.ts#L41-L71
But what Svelte 5 does is slightly different:
1. `get_component_name`
https://github.com/sveltejs/svelte/blob/396ea2ef370e7ea5b5d4571c4e5e14384bac3ca6/packages/svelte/src/compiler/phases/2-analyze/index.js#L208-L217
2. Which is passed through `module.scope.generate`:
https://github.com/sveltejs/svelte/blob/main/packages/svelte/src/compiler/phases/2-analyze/index.js#L399
3. And that `generate` does:
https://github.com/sveltejs/svelte/blob/396ea2ef370e7ea5b5d4571c4e5e14384bac3ca6/packages/svelte/src/compiler/phases/scope.js#L129-L150
If you combine that whole flow into a single function it would naively be something like this:
```js
function get_component_name(filename) {
const parts = filename.split(/[/\\]/);
const basename = /** @type {string} */ (parts.pop());
const last_dir = /** @type {string} */ (parts.at(-1));
let name = basename.replace('.svelte', '');
if (name === 'index' && last_dir && last_dir !== 'src') {
name = last_dir;
}
name = name[0].toUpperCase() + name.slice(1);
// 👇 this is from generate()
return name.replace(/[^a-zA-Z0-9_$]/g, '_').replace(/^[0-9]/, '_');
}
```
That last line (from `generate`) replaces special characters like `+` with `_`. So it turns `src/routes/+page.svelte` into `_page`, while we turn it into `Page`.
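The divergence is easy to demonstrate. Below is a small Python sketch of both flows; the Svelte 5 side ports the combined function above, while the docgen side is a simplified stand-in for the linked plugin code (not a literal port):

```python
import re

def svelte5_component_name(filename):
    # Python port of the combined Svelte 5 flow shown above.
    parts = re.split(r"[/\\]", filename)
    basename = parts.pop()
    last_dir = parts[-1] if parts else None
    name = basename.replace(".svelte", "")
    if name == "index" and last_dir and last_dir != "src":
        name = last_dir
    name = name[0].upper() + name[1:]
    # The extra step from scope.generate(): sanitize into a valid identifier.
    return re.sub(r"^[0-9]", "_", re.sub(r"[^a-zA-Z0-9_$]", "_", name))

def docgen_component_name(filename):
    # Simplified stand-in for the current (broken) docgen behaviour, which
    # ends up with `Page`; see the linked source for the real code.
    name = re.split(r"[/\\]", filename)[-1].replace(".svelte", "")
    name = re.sub(r"[^a-zA-Z0-9_$]", "", name)  # drops the leading '+'
    return name[0].upper() + name[1:]

print(svelte5_component_name("src/routes/+page.svelte"))  # _page
print(docgen_component_name("src/routes/+page.svelte"))   # Page
```

So docgen attaches `.__docgen` to `Page`, a name that never exists in the compiled module.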
### Reproduction link
https://github.com/simonhackler/Storybook-reproduction
### Reproduction steps
1. Install dependencies and start Storybook
2. Open the Page story and see the error
### System
```bash
Storybook Environment Info:
System:
OS: macOS 13.1
CPU: (10) arm64 Apple M1 Pro
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 18.20.2 - ~/Library/Caches/fnm_multishells/92912_1731183850302/bin/node
Yarn: 1.22.19 - ~/Library/Caches/fnm_multishells/92912_1731183850302/bin/yarn
npm: 10.8.3 - ~/Library/Caches/fnm_multishells/92912_1731183850302/bin/npm <----- active
pnpm: 9.12.0 - ~/Library/Caches/fnm_multishells/92912_1731183850302/bin/pnpm
Browsers:
Chrome: 130.0.6723.119
Edge: 131.0.2903.51
Safari: 16.2
npmPackages:
@storybook/addon-essentials: ^8.4.4 => 8.4.4
@storybook/addon-interactions: ^8.4.4 => 8.4.4
@storybook/addon-svelte-csf: ^5.0.0-next.11 => 5.0.0-next.11
@storybook/blocks: ^8.4.4 => 8.4.4
@storybook/svelte: ^8.4.4 => 8.4.4
@storybook/sveltekit: ^8.4.4 => 8.4.4
@storybook/test: ^8.4.4 => 8.4.4
storybook: ^8.4.4 => 8.4.4
```
### Additional context
Originally reported here: https://github.com/storybookjs/addon-svelte-csf/discussions/231#discussioncomment-11282610
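One possible direction is to read the component name from the compiled module's default export rather than re-deriving it from the filename. A hypothetical sketch (it assumes the compiled output contains a plain `export default <name>` or `export default function <name>` statement, which may not hold for every output shape):

```python
import re

def default_export_name(compiled_js):
    # Hypothetical helper: recover the component name from the compiled
    # module itself instead of re-deriving it from the filename.
    m = re.search(r"export\s+default\s+(?:function\s+)?([A-Za-z_$][\w$]*)",
                  compiled_js)
    return m.group(1) if m else None

print(default_export_name("function _page($$anchor) {}\nexport default _page;"))
# _page
```

AST parsing would be more robust, but even a regex like this avoids duplicating Svelte's naming logic.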
We probably want a condition on the Svelte version that uses the new algorithm when Svelte 5 is detected. In a perfect world we wouldn't try to generate the component name at all, but would instead find it from the default export, either via a smart regex or AST parsing. | bug,svelte,sveltekit,docgen | low | Critical |
2,666,586,010 | rust | How should we handle matrix ABIs? | Some CPU architectures have developed "matrix extensions". These are sometimes equivalent to "vectors, but bigger" in terms of how the ABI should be handled (reusing the same architectural state, thus having similar concerns). But not always! They may use entirely different architectural state, usually entirely "caller-save" (i.e. always "volatile" or "call-clobbered").
## AArch64
Scalable Matrix Extensions
- https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/arm-scalable-matrix-extension-introduction
- https://github.com/rust-lang/rust/issues/133146
## PowerPC
MMA
- https://github.com/rust-lang/rust/issues/131800#issuecomment-2418346013
## x86
AMX
- https://github.com/rust-lang/rust/issues/126622
- introduces the `amx_tile` type, AKA `x86_amx` or `__tile1024i`
## References
- https://github.com/rust-lang/rust/issues/131800 | O-x86_64,T-compiler,O-PowerPC,O-AArch64,A-ABI,E-needs-investigation | low | Major |
2,666,588,995 | rust | How should we handle dynamic vector ABIs? | I would say the primary unresolved question of https://github.com/rust-lang/rust/issues/131800 is this edge case. I'm opening this marker issue for now and will explain in more detail later, but basically: we understand most of the factors here, but we need to decide what we actually want to do.
## AArch64
- Scalable Vector Extension
- Scalable Matrix Extension: also see https://github.com/rust-lang/rust/issues/133144
## RISCV
- "V" extension: https://github.com/riscvarchive/riscv-v-spec/blob/master/v-spec.adoc | T-compiler,A-SIMD,O-riscv,O-AArch64,A-ABI,E-needs-design | low | Minor |
2,666,678,913 | PowerToys | [Hosts File Editor] Breaks sections created by other applications, which leads to duplicate entries (e.g. Docker or Tailscale) | ### Describe the problem
0.86.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Hosts File Editor
### Steps to reproduce
1. Install Docker or Tailscale.
2. Check the content of the hosts file - there will be sections delimited by comments:
```
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
172.30.162.20 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# End of section
# TailscaleHostsSectionStart
# This section contains MagicDNS entries for Tailscale.
# Do not edit this section manually.
100.xxx.xxx.xxx something.tailXXXX.ts.net.
...
# TailscaleHostsSectionEnd
```
3. Open Hosts File Editor and add/change some custom entries
4. Open the hosts file again
5. Trigger Docker or Tailscale to update its records in the hosts file
### ✔️ Expected Behavior
Sections are preserved, no duplicates are created after step 6
### ❌ Actual Behavior
All comments are moved to the top or the bottom of the file.
In most cases, after the final step you will see that Docker/Tailscale has added new records to the hosts file, since it was unable to locate its old records within its section.
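The breakage can be illustrated with a small, library-free Python sketch (the marker comments come from the example above; `hoist_comments` only mimics the reported editor behaviour and is not the actual PowerToys code):

```python
import re

HOSTS = """\
# Added by Docker Desktop
172.30.162.20 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# End of section
"""

def docker_section_body(text):
    # What Docker (conceptually) does: look for its own marker comments
    # and inspect the entries between them.
    m = re.search(r"^# Added by Docker Desktop$(.*?)^# End of section$",
                  text, re.M | re.S)
    return m.group(1) if m else None

def hoist_comments(text):
    # Mimics the reported behaviour: all comments end up at the top of
    # the file, detached from the entries they delimited.
    lines = text.splitlines()
    comments = [l for l in lines if l.startswith("#")]
    entries = [l for l in lines if not l.startswith("#")]
    return "\n".join(comments + entries) + "\n"

assert "gateway.docker.internal" in docker_section_body(HOSTS)
broken = hoist_comments(HOSTS)
# The section is now empty, so Docker would append its entries again,
# producing the duplicates described above.
assert "gateway.docker.internal" not in docker_section_body(broken)
```

The same reasoning applies to the Tailscale `TailscaleHostsSectionStart`/`TailscaleHostsSectionEnd` markers.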
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,666,758,217 | langchain | ChatHuggingFace with AgentExecutor (legacy) or create_react_agent (langgraph) tool outputs NEVER passed back to model | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi,
I’m a HuggingFace PRO user and I’m encountering an issue where I’m unable to use the agent (either legacy or langgraph) with tools, along with the default HuggingFace endpoints API. Any assistance or insights into resolving this bug would be greatly appreciated!
### Sample code from langchain doc
```python
model_id = "meta-llama/Llama-3.1-70B-Instruct"
#model_id = "microsoft/Phi-3-mini-4k-instruct"
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
repo_id=model_id,
task="text-generation",
max_new_tokens=512,
temperature=0.1,
)
chat_model = ChatHuggingFace(llm=llm)
from langchain_core.tools import tool
@tool
def add(a: int, b: int) -> int:
"""Adds a and b.
Args:
a: first int
b: second int
"""
return a + b
@tool
def multiply(a: int, b: int) -> int:
"""Multiplies a and b.
Args:
a: first int
b: second int
"""
return a * b
tools = [add, multiply]
query = "What is 3 * 12?"
from langchain import hub
prompt = hub.pull("hwchase17/openai-functions-agent")
from langchain.agents import create_tool_calling_agent
agent = create_tool_calling_agent(chat_model, tools, prompt)
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, max_iterations=3, verbose=True)
agent_executor.invoke({"input": query})
```
### Output
```console
> Entering new AgentExecutor chain...
Invoking: `multiply` with `{'a': 3, 'b': 12}`
36
Invoking: `multiply` with `{'a': 3, 'b': 12}`
36
Invoking: `multiply` with `{'a': 3, 'b': 12}`
36
> Finished chain.
{'input': 'What is 3 * 12?', 'output': 'Agent stopped due to max iterations.'}
```
### Langgraph agent
```python
from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent
system_message = SystemMessage(content="You are a helpful assistant.")
langgraph_agent_executor = create_react_agent(
chat_model, tools, state_modifier=system_message
)
query = "What is 3 * 12?"
langgraph_agent_executor.invoke({"messages": [("user", query)]})
```
### Output
```console
---------------------------------------------------------------------------
GraphRecursionError Traceback (most recent call last)
<ipython-input-105-655a70a8ca89> in <cell line: 11>()
9
10
---> 11 langgraph_agent_executor.invoke({"messages": [("user", query)]})
1 frames
/usr/local/lib/python3.10/dist-packages/langgraph/pregel/__init__.py in stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1630 error_code=ErrorCode.GRAPH_RECURSION_LIMIT,
1631 )
-> 1632 raise GraphRecursionError(msg)
1633 # set final channel values as run output
1634 run_manager.on_chain_end(loop.output)
GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the `recursion_limit` config key.
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT
```
> Note 1
> It works with an agent chain + tools and the deprecated AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION:
```python
(...) # Other tool definition
from langchain.agents import initialize_agent, AgentType
agent_chain = initialize_agent(tool_decorators, chat_model, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION)
messages = [
SystemMessage(content="You're a helpful assistant."),
HumanMessage(
content="What is the price of 1 cappuccino please?"
),
]
agent_chain.invoke(messages)
```
```console
{'input': [SystemMessage(content="You're a helpful assistant.", additional_kwargs={}, response_metadata={}), HumanMessage(content='What is the price of 1 cappuccino please?', additional_kwargs={}, response_metadata={})], 'output': 'A cappuccino costs 4.75.'}
```
> Note 2
> It works with ChatHuggingFace and the `bind_tools` method:
```python
(...)
chat_with_tools = chat_model.bind_tools(tools)
query = "What is 3 * 12?"
from langchain_core.messages import HumanMessage, ToolMessage
messages = [HumanMessage(query)]
ai_msg = chat_with_tools.invoke(messages)
messages.append(ai_msg)
for tool_call in ai_msg.tool_calls:
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
tool_output = selected_tool.invoke(tool_call["args"])
messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
chat_model.invoke(messages).content
```
```console
3 * 12 = 36
```
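The manual loop in Note 2 is the standard tool-dispatch pattern: resolve each requested tool by name, call it with the model-supplied arguments, and append the result as a tool message. A dependency-free sketch of just that dispatch step (the stub tools and the fake `tool_calls` list are illustrative, not real model output):

```python
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def dispatch(tool_calls):
    # Mirrors the loop in Note 2: look up the tool by (lowercased) name,
    # invoke it with the model-provided args, collect (id, output) pairs.
    results = []
    for call in tool_calls:
        tool = TOOLS[call["name"].lower()]
        results.append((call["id"], tool(**call["args"])))
    return results

fake_calls = [{"name": "Multiply", "id": "call_1", "args": {"a": 3, "b": 12}}]
print(dispatch(fake_calls))  # [('call_1', 36)]
```

This is exactly the step that the react AgentExecutor should perform automatically — the report below is that it never happens.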
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I'm trying to use AgentExecutor (legacy) with tools and ChatHuggingFace from langchain-huggingface
* I'm trying to use react AgentExecutor (langgraph) with tools and ChatHuggingFace from langchain-huggingface
* I expect to have the same result as with AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION (it works with that!)
* Tool outputs are NEVER passed back to the model
### System Info
langchain==0.3.7
langchain-community==0.3.7
langchain-core==0.3.17
langchain-huggingface==0.1.2
langgraph==0.2.50
langgraph-checkpoint==2.0.4
langgraph-sdk==0.1.36 | 🤖:bug | low | Critical |
2,666,764,215 | godot | Exported properties of parent class not showing on derived class in editor | ### Tested versions
4.3 modules
### System information
Windows 10, godot 4.3, RTX3070TI
### Issue description
Class A, class B inherits A:
Class A's exported properties are not showing on Class B in the editor (C++ modules)
### Steps to reproduce
Create a class A.
Create a class B that inherits from A.
Export a variable in A.
Export a variable in B.
Build, go to the editor; Class B doesn't have Class A's exported var
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/17792970/MRP.zip)
| bug,topic:editor,needs testing | low | Minor |
2,666,770,568 | rust | rustdoc skips first line of some list items, and incorrect clippy warning |
I tried this code (minimal reproduction `lib.rs`):
```rust
/// - [`SDL_PROP_WINDOW_CREATE_COCOA_WINDOW_POINTER`]: the
/// `(__unsafe_unretained)` NSWindow associated with the window, if you want
/// to wrap an existing window.
/// - [`SDL_PROP_WINDOW_CREATE_COCOA_VIEW_POINTER`]: the `(__unsafe_unretained)`
/// NSView associated with the window, defaults to `[window contentView]`
pub fn a() {}
/// - [`SDL_PROP_RENDERER_MAX_TEXTURE_SIZE_NUMBER`]: the maximum texture width
/// and height
/// - [`SDL_PROP_RENDERER_TEXTURE_FORMATS_POINTER`]: a (const [`SDL_PixelFormat`] *)
/// array of pixel formats, terminated with [`SDL_PIXELFORMAT_UNKNOWN`],
/// representing the available texture formats for this renderer.
pub fn b() {}
pub const SDL_PROP_WINDOW_CREATE_COCOA_WINDOW_POINTER: () = ();
pub const SDL_PROP_WINDOW_CREATE_COCOA_VIEW_POINTER: () = ();
pub const SDL_PROP_RENDERER_MAX_TEXTURE_SIZE_NUMBER: () = ();
pub const SDL_PROP_RENDERER_TEXTURE_FORMATS_POINTER: () = ();
pub const SDL_PIXELFORMAT_UNKNOWN: () = ();
#[allow(non_camel_case_types)]
pub type SDL_PixelFormat = ();
```
When I run `cargo doc` on this, there are no warnings or errors, but the output is wrong. The first line of the first list item of `a` is missing, and the first line of the second list item of `b` is also missing.


If I run `cargo clippy` on the file it complains:
```
warning: doc list item without indentation
--> src/lib.rs:3:5
|
3 | /// to wrap an existing window.
| ^^
|
= help: if this is supposed to be its own paragraph, add a blank line
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#doc_lazy_continuation
= note: `#[warn(clippy::doc_lazy_continuation)]` on by default
help: indent this line
|
3 | /// to wrap an existing window.
| +++++++++++++++++++++++++++++++++++++++++++++++++++++++
warning: doc list item without indentation
--> src/lib.rs:12:5
|
12 | /// representing the available texture formats for this renderer.
| ^^
|
= help: if this is supposed to be its own paragraph, add a blank line
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#doc_lazy_continuation
help: indent this line
|
12 | /// representing the available texture formats for this renderer.
| +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
warning: `rustdocbug` (lib) generated 2 warnings
```
But this makes no sense. Those lines have the correct indentation, the same as the previous lines. It seems to think a new sublist is being created at the end of the first lines, but I don't see why.
### Meta
`rustc --version --verbose`:
```
% rustc --version --verbose
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: aarch64-apple-darwin
release: 1.82.0
LLVM version: 19.1.1
% rustc +nightly --version --verbose
rustc 1.84.0-nightly (798fb83f7 2024-10-16)
binary: rustc
commit-hash: 798fb83f7d24e31b16acca113496f39ff168c143
commit-date: 2024-10-16
host: aarch64-apple-darwin
release: 1.84.0-nightly
LLVM version: 19.1.1
```
| T-rustdoc,A-intra-doc-links,C-external-bug | low | Critical |
2,666,790,240 | ant-design | Tabs: dragging a tab backward resets the scrollbar when using the official draggable-tab example | ### Reproduction link
[](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-klwtg4)
### Steps to reproduce
1. Copy the draggable-tab example from the antd 5 Tabs documentation.
2. Modify the third tab's children so that a scrollbar appears.
3. Drag the third tab toward the front; the scrollbar resets.

### What is expected?
The scrollbar position should not reset.
### What is actually happening?
The scrollbar resets.
| Environment | Info |
| --- | --- |
| antd | 5.22.1 |
| React | ^18.3.1 |
| System | windows10 |
| Browser | Chrome v130.0.6723.119 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,improvement,🧶 Low Priority | low | Minor |
2,666,797,267 | opencv | opencv4.9.0 Huawei CANN support error: CANN check failed in function 'forward' | ### System Information

### Detailed description
After building OpenCV 4.9.0 with CANN support, the following error frequently (though not always) occurs when running YOLOv8 inference:
[ INFO:0@51.135] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Add_1@NaryEltwise@361 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Add_2@NaryEltwise@362 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer /model.22/Constant_11_output_0@Const@363 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Div_1@NaryEltwise@364 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Sub_1@NaryEltwise@365 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Concat_4@Concat@366 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer /model.22/Constant_12_output_0@Const@367 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Mul_2@NaryEltwise@368 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Sigmoid@Sigmoid@369 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer onnx_node!/model.22/Concat_5@Concat@370 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:204 initBackend DNN/CANN: converting layer output0@Identity@371 to CANN operator
[ INFO:0@51.136] global net_cann.cpp:225 initBackend DNN/CANN: done converting layers to CANN operators
[ INFO:0@51.136] global net_cann.cpp:228 initBackend DNN/CANN: building ge::Graph
[ INFO:0@51.139] global net_cann.cpp:233 initBackend DNN/CANN: done building ge::Graph
[ INFO:0@51.139] global net_cann.cpp:236 initBackend DNN/CANN: converting ge::Graph to OM buffer
[ERROR:0@51.215] global net_cann.cpp:311 compileCannGraph CANN graph check failed, ret = 4294967295
OpenCV(4.9.0) Error: Unspecified error (CANN graph check failed) in compileCannGraph, file /root/data/opencv-4.9.0/modules/dnn/src/net_cann.cpp, line 311
compileCannGraph OpenCV(4.9.0) /root/data/opencv-4.9.0/modules/dnn/src/net_cann.cpp:311: error: (-2:Unspecified error) CANN graph check failed in function 'compileCannGraph'
Model infer Failed!
--- >> img path: /ubuntu_test/zz_bj_meter_read_value/images/108_20240514_result_result_result.jpg
[ERROR:0@51.556] global op_cann.cpp:211 forward CANN check failed, ret = 507899
OpenCV(4.9.0) Error: Unspecified error (CANN check failed) in forward, file /root/data/opencv-4.9.0/modules/dnn/src/op_cann.cpp, line 211
forward OpenCV(4.9.0) /root/data/opencv-4.9.0/modules/dnn/src/op_cann.cpp:211: error: (-2:Unspecified error) CANN check failed in function 'forward'
### Steps to reproduce
Build command:
cmake -D BUILD_opencv_world=ON -D CMAKE_BUILD_TYPE=Debug -D OPENCV_GENERATE_PKGCONFIG=ON -D CMAKE_INSTALL_PREFIX=/root/data/opencv-4.9.0/install -D OPENCV_EXTRA_MODULES_PATH=/root/data/opencv-4.9.0/opencv_contrib-4.9.0/modules -D BUILD_opencv_wechat_qrcode=ON -D WITH_CANN=ON -D BUILD_DOCS=ON -D BUILD_EXAMPLES=ON -D INSTALL_C_EXAMPLES=ON -D WITH_GSTREAMER=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_opencv_python3=OFF -D WITH_LAPACK=ON -D WITH_EIGEN=ON -D WITH_OPENGL=ON -D WITH_FFMPEG=ON -D WITH_FREETYPE=ON -D BUILD_opencv_freetype=ON -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=OFF -D BUILD_opencv_gapi=OFF ..
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,666,835,883 | transformers | IsADirectoryError when training with tqdm enabled for trainer | ### System Info
Error info:
```python
**IsADirectoryError**: [Errno 21] Is a directory: '\n <div>\n \n <progress value=\'2\' max=\'108\' style=\'width:300px; height:20px; vertical-align: middle;\'></progress>\n [ 2/108 : < :, Epoch 0.04/4]\n </div>\n <table border="1" class="dataframe">\n <thead>\n <tr style="text-align: left;">\n <th>Step</th>\n <th>Training Loss</th>\n <th>Validation Loss</th>\n </tr>\n </thead>\n <tbody>\n </tbody>\n</table><p>'
```
Code:
```
training_args = transformers.TrainingArguments(
num_train_epochs=4, # Number of training epochs
per_device_train_batch_size=batch_size, # Batch size for training
per_device_eval_batch_size=batch_size, # Batch size for evaluation
gradient_accumulation_steps=2, # Number of steps to accumulate gradients before updating
gradient_checkpointing=True, # Enable gradient checkpointing to save memory
do_eval=True, # Perform evaluation during training
save_total_limit=2, # Limit the total number of saved checkpoints
evaluation_strategy="steps", # Evaluation strategy to use (here, at each specified number of steps)
save_strategy="steps", # Save checkpoints at each specified number of steps
save_steps=10, # Number of steps between each checkpoint save
eval_steps=10, # Number of steps between each evaluation
max_grad_norm=1, # Maximum gradient norm for clipping
warmup_ratio=0.1, # Warmup ratio for learning rate schedule
weight_decay=0.001, # Regularization technique to prevent overfitting
# fp16=True, # Enable mixed precision training with fp16 (enable it if Ampere architecture is unavailable)
bf16=True, # Enable mixed precision training with bf16
logging_steps=10, # Number of steps between each log
output_dir="outputs", # Directory to save the model outputs and checkpoints
optim="adamw_torch", # Optimizer to use (AdamW with PyTorch)
learning_rate=5e-5, # Learning rate for the optimizer
lr_scheduler_type="linear", # Learning rate scheduler type: constant
load_best_model_at_end=True, # Load the best model found during training at the end
metric_for_best_model="rouge", # Metric used to determine the best model
greater_is_better=True, # Indicates if a higher metric score is better
push_to_hub=False, # Whether to push the model to Hugging Face Hub
run_name="finetuning", # Name of the run for experiment tracking
report_to="wandb" # For experiment tracking (login to Weights & Biases needed)
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
Env info:
Jupyter version:
```
!jupyter --version
IPython : 8.27.0
ipykernel : 6.29.5
ipywidgets : 7.7.1
jupyter_client : 7.4.9
jupyter_core : 5.7.2
jupyter_server : 2.14.2
jupyterlab : 4.0.11
nbclient : 0.10.0
nbconvert : 7.16.4
nbformat : 5.10.4
notebook : 6.5.7
qtconsole : 5.6.0
traitlets : 5.14.3
```
Python: 3.10.11
jupyter lab: 4.0.11
transformers: 4.45.2
Detailed errors:
```
IsADirectoryError Traceback (most recent call last)
Cell In[28], line 1
----> 1 trainer.train()
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/trainer.py:2052, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2050 hf_hub_utils.enable_progress_bars()
2051 else:
-> 2052 return inner_training_loop(
2053 args=args,
2054 resume_from_checkpoint=resume_from_checkpoint,
2055 trial=trial,
2056 ignore_keys_for_eval=ignore_keys_for_eval,
2057 )
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/trainer.py:2465, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2463 self.state.global_step += 1
2464 self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
-> 2465 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
2467 self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
2468 else:
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/trainer_callback.py:494, in CallbackHandler.on_step_end(self, args, state, control)
493 def on_step_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl):
--> 494 return self.call_event("on_step_end", args, state, control)
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/trainer_callback.py:516, in CallbackHandler.call_event(self, event, args, state, control, **kwargs)
514 def call_event(self, event, args, state, control, **kwargs):
515 for callback in self.callbacks:
--> 516 result = getattr(callback, event)(
517 args,
518 state,
519 control,
520 model=self.model,
521 tokenizer=self.tokenizer,
522 optimizer=self.optimizer,
523 lr_scheduler=self.lr_scheduler,
524 train_dataloader=self.train_dataloader,
525 eval_dataloader=self.eval_dataloader,
526 **kwargs,
527 )
528 # A Callback can skip the return of `control` if it doesn't change it.
529 if result is not None:
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/utils/notebook.py:307, in NotebookProgressCallback.on_step_end(self, args, state, control, **kwargs)
305 def on_step_end(self, args, state, control, **kwargs):
306 epoch = int(state.epoch) if int(state.epoch) == state.epoch else f"{state.epoch:.2f}"
--> 307 self.training_tracker.update(
308 state.global_step + 1,
309 comment=f"Epoch {epoch}/{state.num_train_epochs}",
310 force_update=self._force_next_update,
311 )
312 self._force_next_update = False
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/utils/notebook.py:143, in NotebookProgressBar.update(self, value, force_update, comment)
141 self.first_calls = self.warmup
142 self.wait_for = 1
--> 143 self.update_bar(value)
144 elif value <= self.last_value and not force_update:
145 return
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/utils/notebook.py:188, in NotebookProgressBar.update_bar(self, value, comment)
185 self.label += f", {1/self.average_time_per_item:.2f} it/s"
187 self.label += "]" if self.comment is None or len(self.comment) == 0 else f", {self.comment}]"
--> 188 self.display()
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/transformers/utils/notebook.py:229, in NotebookTrainingTracker.display(self)
227 self.html_code += self.child_bar.html_code
228 if self.output is None:
--> 229 self.output = disp.display(disp.HTML(self.html_code), display_id=True)
230 else:
231 self.output.update(disp.HTML(self.html_code))
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/IPython/core/display.py:432, in HTML.__init__(self, data, url, filename, metadata)
430 if warn():
431 warnings.warn("Consider using IPython.display.IFrame instead")
--> 432 super(HTML, self).__init__(data=data, url=url, filename=filename, metadata=metadata)
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/IPython/core/display.py:327, in DisplayObject.__init__(self, data, url, filename, metadata)
324 elif self.metadata is None:
325 self.metadata = {}
--> 327 self.reload()
328 self._check_data()
File /anaconda/envs/azureml_py38_PT_TF/lib/python3.10/site-packages/IPython/core/display.py:353, in DisplayObject.reload(self)
351 if self.filename is not None:
352 encoding = None if "b" in self._read_flags else "utf-8"
--> 353 with open(self.filename, self._read_flags, encoding=encoding) as f:
354 self.data = f.read()
355 elif self.url is not None:
356 # Deferred import
IsADirectoryError: [Errno 21] Is a directory: '\n <div>\n \n <progress value=\'2\' max=\'108\' style=\'width:300px; height:20px; vertical-align: middle;\'></progress>\n [ 2/108 : < :, Epoch 0.04/4]\n </div>\n <table border="1" class="dataframe">\n <thead>\n <tr style="text-align: left;">\n <th>Step</th>\n <th>Training Loss</th>\n <th>Validation Loss</th>\n </tr>\n </thead>\n <tbody>\n </tbody>\n</table><p>'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This can be reproduced by the following code:
```
import time
import transformers
from transformers.utils.notebook import NotebookProgressBar
pbar = NotebookProgressBar(100)
for val in range(100):
pbar.update(val)
time.sleep(0.07)
pbar.update(100)
```
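The traceback shows `NotebookProgressCallback` handing its HTML to IPython's display machinery, which (on this older ipywidgets/notebook stack) ends up treating the HTML string as a file path inside `DisplayObject.reload`. Until that's fixed, two hedged workarounds — both use standard `Trainer`/`TrainingArguments` knobs, though whether they sidestep this exact IPython error on this environment is an assumption:

```python
# Option 1: disable the rich notebook progress bar entirely
# (Trainer then falls back to plain printed logs).
training_args = transformers.TrainingArguments(
    output_dir="outputs",
    disable_tqdm=True,
)

# Option 2: keep progress reporting elsewhere but drop the notebook callback
# (assumes `trainer` is the Trainer instance from the report above).
from transformers.utils.notebook import NotebookProgressCallback
trainer.remove_callback(NotebookProgressCallback)
```

Upgrading `ipywidgets`/`notebook` is the other common fix, since the failure happens inside IPython's display code rather than in the Trainer itself.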
### Expected behavior
Training with progress bar being updated:

| Good First Issue,bug | low | Critical |
2,666,837,865 | PowerToys | Keyboard Manager not working after enabling in PowerToys | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update, GitHub
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Open PowerToys Settings.
Enable "Keyboard Manager".
Set up a key remapping (e.g., remap "End" to "Delete").
Press the remapped key (e.g., "End").
### ✔️ Expected Behavior
The key should respond according to the remapped configuration (e.g., pressing "End" should behave like "Delete").

### ❌ Actual Behavior
The remapped key does not respond as configured. Instead, it triggers its original function (e.g., pressing "End" still behaves like "End" rather than "Delete").
### Other Software


| Issue-Bug,Needs-Triage | low | Minor |
2,666,838,157 | stable-diffusion-webui | [Feature Request]: Added notification sound | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Not do, does. Not submitting this as a pull request; it's more of an FYI in case anyone wants the feature and how to do it.
Does:
Upon completion of image generation, or, completion of loading a checkpoint, make a notification sound.
### Proposed workflow
shared_total_tqdm.py:
```
import winsound
...
line 36 +
sound_file = "C:\\Windows\\Media\\notify.wav" # Path to the sound file
winsound.PlaySound(sound_file, winsound.SND_FILENAME)
```
sd_models.py:
same as above, + line 348 after
` timer.record("load weights from disk")`
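One caveat with the snippets above: `winsound` is Windows-only, so on macOS/Linux the import itself fails. A hedged cross-platform variant (the `notify()` helper name is mine, and the non-Windows fallback just emits the terminal bell):

```python
import sys

def notify():
    """Play a completion sound; degrade gracefully off Windows."""
    if sys.platform == "win32":
        import winsound  # stdlib, but only available on Windows
        winsound.PlaySound(r"C:\Windows\Media\notify.wav", winsound.SND_FILENAME)
    else:
        # ASCII bell: most terminals beep or flash the window on this
        print("\a", end="", flush=True)

notify()
```

The same `notify()` call can then be dropped into both `shared_total_tqdm.py` and `sd_models.py` instead of calling `winsound` directly.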
### Additional information
That little notification sound is a very small change but a huge QoL improvement, imo. Merge if you like/want/whatever. | enhancement | low | Minor |
2,666,898,598 | ollama | Feature suggestions and development compilation environment issues | Wish:
1. Setting an env var like `avx=0` should make Ollama automatically try the NVIDIA GPU even on CPUs without AVX.
2. On this repository page, pressing `.` should open a complete development environment to modify code, compile, download files, and run tests. Configuring this development environment today is very complicated and difficult.
Good luck to you
-------
Setting the env var `OLLAMA_HOST=0.0.0.0` already controls the listening address. Many home users have old computers that are still usable. The motherboards of these old computers do not support AVX, but for gaming they have an NVIDIA GPU, such as a 3060.
So, in the future, could an environment variable be added as a switch, e.g. `avx=0`, so that the GPU is used even without AVX?
I read the issues and saw many people enthusiastically sharing how to modify the logic and compile it themselves. This is particularly unfriendly for people who don't do cross-platform Go development, because they get stuck at the environment-preparation step: even if they can change the source correctly, the complicated environment keeps them from compiling it. And such people will set up this build environment once and never need it again.
It would be great if official builds could support GPUs on non-AVX machines when the user sets an environment variable.
Alternatively, a ready-made development/compilation environment would help: open GitHub, press `.`, start modifying the code, compile, then download and test the built file. | feature request | low | Minor |
2,666,916,537 | ui | [bug]: sidebar.openMobile doesn't get reset on browser width resize > mobile layout | ### Describe the bug
If the sidebar is opened in the mobile layout, `openMobile` is true. When the browser is resized above the mobile width, the sidebar closes but `openMobile` is still set to true.
### Affected component/components
Sidebar
### How to reproduce
1. Open the sidebar at a mobile screen size
2. Resize the window above the mobile screen size breakpoint
3. Observe that the sidebar closes while `openMobile` is still true.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Edge Latest
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,667,001,046 | PowerToys | FancyZones resets space around zone without user interaction | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones, FancyZones Editor
### Steps to reproduce
Open FancyZones Editor.
Turn off the "Space around zones" setting for each grid in use.
Restart the system (I think this is what's happening–because it works correctly for a little while)
The "Space around zones" will be turned back on.
### ✔️ Expected Behavior
I expected the setting to remain at its last value (turned off, in this case).
### ❌ Actual Behavior
The "Space around zones" will turn itself back on.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,667,014,377 | godot | AudioEffectCapture and AudioStreamGenerator producing slowed audio since after 4.4-dev3 | ### Tested versions
- Reproducible in v4.4.dev.custom_build [5efd124ca]
- Not reproducible in v4.4.dev3.official [f4af8201b]
### System information
Godot v4.4.dev (5efd124ca) - macOS 15.1.0 - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - AMD Radeon Pro 5500 XT OpenGL Engine - Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz (16 threads)
### Issue description
I have a setup where I route all of the output from a bus into an AudioEffectCapture, then populate an AudioStreamGenerator's buffer with its output. This allows me to play a bus's sound from a given AudioStreamPlayer2D/3D.
```gdscript
extends AudioStreamPlayer2D
@onready var playback: AudioStreamGeneratorPlayback
var capture_effect: AudioEffectCapture
func _ready():
play()
playback = get_stream_playback()
var bus_index = AudioServer.get_bus_index("Capturer")
var num_effects = AudioServer.get_bus_effect_count(bus_index)
capture_effect = AudioServer.get_bus_effect(bus_index, num_effects - 1)
func _process(delta):
var frames_available = capture_effect.get_frames_available()
if playback.can_push_buffer(frames_available):
var buf = capture_effect.get_buffer(frames_available)
capture_effect.clear_buffer()
playback.push_buffer(buf)
```
(I had implemented this in C++ with GDExtension for speed's sake, but ported it over to GDScript to confirm the issue persisted with it as well.)
In 4.4-dev3, this sounds pretty correct:
https://github.com/user-attachments/assets/1206ae3c-dace-4b71-8b68-0ad3d4294f3b
But in 4.4 on the master branch, it is slightly pitched down and crackles more:
https://github.com/user-attachments/assets/1e305e88-32cb-4a87-9e26-2d438f3105fd
### Steps to reproduce
MRPs are provided below for 4.4-dev3 and the master branch. They should be identical with the possible exception of structural project format changes made since dev3.
To test them, simply open and run `scene.tscn` (which is also the projects' main scene). When on 4.4-dev3 you should hear the music normally, and when on a version built from the master branch you should hear it slowed down, as seen in the videos above.
### Minimal reproduction project (MRP)
4.4-dev3 compatible project (where it sounds as intended): [lagging-audio-forwarding-4.4dev3.zip](https://github.com/user-attachments/files/17794352/lagging-audio-forwarding-4.4dev3.zip)
v4.4.dev.custom_build [5efd124ca] compatible project (where it sounds slowed and crackles): [lagging-audio-forwarding-4.4master.zip](https://github.com/user-attachments/files/17794359/lagging-audio-forwarding-4.4master.zip)
| bug,topic:audio,regression | low | Major |
2,667,259,652 | pytorch | [cudnn_frontend] Error: No execution plans support the graph when using CUDNN_ATTENTION in SDPA | ### 🐛 Describe the bug
```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
class MyModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor) -> torch.Tensor:
return torch.nn.functional.scaled_dot_product_attention(query, key, value)
with torch.inference_mode(), sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
model = MyModule().eval().cuda().half()
inputs = [
torch.rand(2, 4, 8, 16, dtype=torch.half, device="cuda"),
torch.rand(2, 4, 8, 16, dtype=torch.half, device="cuda"),
torch.rand(2, 4, 8, 16, dtype=torch.half, device="cuda"),
]
print(model(*inputs))
```
```py
holywu@HOLYWU:~$ CUDNN_FRONTEND_LOG_FILE=frontendlog.txt CUDNN_FRONTEND_LOG_INFO=1 CUDNN_LOGLEVEL_DBG=3 CUDNN_LOGDEST_DBG=backendlog.txt python3 sdpa.py
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
File "/home/holywu/sdpa.py", line 22, in <module>
print(model(*inputs))
^^^^^^^^^^^^^^
File "/home/holywu/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/holywu/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/holywu/sdpa.py", line 10, in forward
return torch.nn.functional.scaled_dot_product_attention(query, key, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.
```
[backendlog.txt](https://github.com/user-attachments/files/17795208/backendlog.txt)
[frontendlog.txt](https://github.com/user-attachments/files/17795209/frontendlog.txt)
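For reference, every SDPA backend computes the same math, so a failing CUDNN path can be checked against a plain implementation. A minimal NumPy sketch of softmax(QK^T / sqrt(d)) @ V with no masking or dropout (the function name and shapes here are illustrative):

```python
import numpy as np

def sdpa_reference(q, k, v):
    """q, k, v: arrays of shape (batch, heads, seq, head_dim)."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(d)
    # numerically stable softmax over the key dimension
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = k = v = np.random.rand(2, 4, 8, 16).astype(np.float32)
out = sdpa_reference(q, k, v)
print(out.shape)  # (2, 4, 8, 16)
```

Separately, `sdpa_kernel` accepts a list of backends, e.g. `sdpa_kernel([SDPBackend.CUDNN_ATTENTION, SDPBackend.MATH])`, which lets PyTorch fall through to the math path instead of raising — worth trying as a stopgap, though whether it avoids this specific cuDNN graph error is an assumption.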
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241117+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Ti
Nvidia driver version: 566.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
BogoMIPS: 4991.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 20 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241117+cu124
[pip3] torch_tensorrt==2.6.0.dev20241117+cu124
[pip3] torchvision==0.20.0.dev20241117+cu124
[conda] Could not collect
cc @csarofeen @ptrblck @xwang233 @eqy @msaroufim @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: cudnn,module: cuda,triaged,module: sdpa | low | Critical |
2,667,275,360 | node | test_runner: support mock file system | ### What is the problem this feature will solve?
Currently, tests that read and write files must touch the real file system, which is unsafe.
For example, files may be left behind on disk when a test case fails.
### What is the feature you are proposing to solve the problem?
Under the mock namespace, APIs are exposed to provide a mock file system.
```js
mock.fileSystem.enable({ });
```
It could also provide some options, e.g.:
Override specific files (by default, unmocked paths fall back to the real file system)
```js
mock.fileSystem.enable({
files: {
'/path/to/file': new Uint8Array([0])
}
});
// it will read from the virtual file system first if the file exists there
// if not found, it then reads from the real file system
fs.readFileSync('/path/to/file')
```
Override the whole file system
```js
mock.fileSystem.enable({
files: {
'/path/to/file': new Uint8Array([0])
},
override: true // override the whole file system
});
// This file exists in the real file system
// but is not defined in the virtual file system, so it should throw an error
fs.readFileSync('/etc/proc')
```
### What alternatives have you considered?
[mock-fs](https://www.npmjs.com/package/mock-fs) | fs,feature request,test_runner | low | Critical |
2,667,306,068 | godot | Mouse jumps to random positions upon moving, when mode is set to confined and if warp mouse is used | ### Tested versions
Reproducible in 4.3 while using Macos and Windows, have not used in later/earlier versions
### System information
MACOS Sonoma 14 - Godot 4.3 - Vulkan 1.2.283 - Forward+
### Issue description
As said in the title, setting the mouse mode to confined/confined hidden while using warp mouse commands in the main loop causes the mouse to jump to a random position when movement is detected.
In my code, the cursor sprite has different behaviors that depend on what the player is doing (Sliding, taking knockback, scanning enemies, etc.) and it moves to a position relative to the player/enemy while these actions occur.
Upon finishing these actions, if the mouse mode is confined, the first frame of mouse movement causes it to jump to a random position on the screen (usually close by, but still very noticeable).
Note that it does **NOT** jump to these positions **immediately after** finishing the action (when warp mouse is called). You can see in the test where I leave the mouse confined that it is still on the cursor sprite **UNTIL I try to move the mouse**, which is when it "teleports". I also checked the mouse global position and viewport position, and they also do not change **UNTIL I try to move the mouse.**
This only happens in confined/confined hidden; letting the mouse roam free doesn't reproduce the issue, although it's still annoying when the game is set to windowed mode. This is not dependent on the viewport either, as the problem persists in windowed, fullscreen, borderless, etc.
https://github.com/user-attachments/assets/6288d0e3-f515-4277-94d0-77df3d261ac8
### Steps to reproduce
Here's the mandatory code to reproduce it on a base Sprite2D with a parent class Characterbody2D. There are some variables in this code that belong to the Characterbody, which are in bold.
An explanation of some of the variables:
-**Knockback/Paralyzed**: Mouse movement gets restricted and follows the sprite, in the code below. Player gets pushed back and looks at a position: **enemypos**
-**Interacting**: Similar to knockback and paralyzed, but instead the player stays still and looks at an object of interest: **interact_target**
-**Sliding**: A slide/dodge across the floor for about 2 seconds which restricts mouse movement, in the code below. Moves the sprite in front of the player and warps the mouse to the sprite.
These variables aren't mandatory, but hopefully they give better insight into why this is happening, although I doubt they are the issue; I mainly suspect the mouse mode or the warp_mouse function.
This is the only script that alters mouse/cursor movement in the whole project:
extends Sprite2D

@onready var Player: CharacterBody2D = get_parent() #Some characterbody2d

func _ready():
    Input.set_mouse_mode(Input.MOUSE_MODE_HIDDEN)

func _physics_process(delta):
    if Player.**knockback** or Player.**isparalyzed**:
        self.position = self.position.lerp(Player.position + (Player.position.direction_to(Player.**enemypos**) * 200), delta * 10) #Moves the sprite slightly in front of the player
        Input.warp_mouse(get_viewport_transform() * global_position) #Warps the mouse to the sprite position
    elif Player.**interacting**:
        self.position = self.position.lerp(Player.position + (Player.position.direction_to(Player.**interact_target**.global_position) * 200), delta * 10) #Moves the sprite slightly in front of the player
        Input.warp_mouse(get_viewport_transform() * global_position) #Warps the mouse to the sprite position
    elif !Player.**knockback** and !Player.**interacting**:
        if Player.**sliding**:
            self.position = self.position.lerp(Player.position + (Player.position.direction_to(Player.slidelook) * 200), delta * 10) #Moves the sprite slightly in front of the player
            Input.warp_mouse(get_viewport_transform() * global_position) #Warps the mouse to the sprite position
        elif !Player.**sliding**:
            self.position = self.position.lerp((get_global_mouse_position() + Vector2(0,-1)), delta * 30) #If the player isn't performing any special action, mouse moves freeform and sprite follows the mouse with a slight offset, so the mouse's tip is on the center of the sprite
            if canprocess and Input.is_action_pressed("special"):
                Input.warp_mouse(get_viewport_transform() * global_position) #If currently scanning an enemy, moves the mouse to the enemy currently being scanned, separate function not included here that moves the sprite to enemy position as well
### Minimal reproduction project (MRP)
N/A | bug,topic:input | low | Minor |
2,667,337,997 | tensorflow | Regular expression matches also directory in name | https://github.com/tensorflow/tensorflow/blame/5d1bf95155485aa137a13b72fbf3bd3e83b2f544/tensorflow/lite/CMakeLists.txt#L602
I found a problem with TensorFlow Lite library compilation when I put my project into a directory named:
`~/workspace/final_test_6.6.36$`
I found that "test_.*" also matches the directory name in my directory path:
FILTER "(.*_test_util_internal|test_.*|.*_ops_wrapper)\\.(cc|h)"
So files in the kernels directory are not compiled, and the library then fails to link. | stat:awaiting tensorflower,type:bug,comp:lite | medium | Minor |
2,667,436,367 | PowerToys | Requesting audio sync feature | ### Description of the new feature / enhancement
Hi PowerToys Team,
First off, I just want to say thank you for all the amazing work you’re doing with PowerToys. It’s one of my go-to tools, and I’ve been loving how it keeps getting better with each update!
I have a feature request that I think could make PowerToys even more awesome: the ability to sync audio across different devices. Basically, something like AudioRelay, where you can play the same audio on multiple devices at once. It would be super helpful in setups where you want to hear audio across different devices, like when you're working on a PC but also want to share audio with your phone or tablet.
I really think this would be a great addition, and it aligns well with PowerToys' focus on improving productivity and making things more convenient.
Thanks so much for considering this request, and I’m excited to see what’s next for PowerToys
### Scenario when this would be used?
.
### Supporting information
. | Needs-Triage | low | Minor |
2,667,505,935 | langchain | NotImplementedError: Provider cohere model does not support chat. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import boto3

# (imports inferred from the code below; adjust to your installed langchain packages)
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_aws import ChatBedrock
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.vectorstores import Chroma
text = "One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG. This tutorial will show how to build a simple Q&A application over a text data source. Along the way we’ll go over a typical Q&A architecture and highlight additional resources for more advanced Q&A techniques. We’ll also see how LangSmith can help us trace and understand our application. LangSmith will become increasingly helpful as our application grows in complexity."
bedrock_runtime = boto3.client(service_name= "bedrock-runtime", region_name= "us-east-1")
llm = ChatBedrock(client=bedrock_runtime,
model_id="cohere.command-r-plus-v1:0",
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=20, chunk_overlap=5)
splits = text_splitter.split_text(text)
vectorstore = Chroma.from_texts(texts=splits, embedding=BedrockEmbeddings(model_id="amazon.titan-embed-text-v1", region_name=""))
retriever = vectorstore.as_retriever()
prompt_template=PromptTemplate(
template=""" Use the following context: {context} to answer the question : {question}""",
input_variables=["context", "question"])
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={
"prompt": prompt_template,
},
chain_type="stuff",
)
qa_chain.invoke({"query": "What is LLM"})
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[18], line 30
     16 prompt_template=PromptTemplate(
     17     template=""" Use the following context: {context} to answer the question : {question}""",
     18     input_variables=["context", "question"])
     20 qa_chain = RetrievalQA.from_chain_type(
     21     llm=llm,
     22     retriever=retriever,
   (...)
     27     chain_type="stuff",
     28 )
---> 30 qa_chain.invoke({"query": "What is LLM"})

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
    164 except BaseException as e:
    165     run_manager.on_chain_error(e)
--> 166     raise e
    167 run_manager.on_chain_end(outputs)
    169 if include_run_info:

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    153 try:
    154     self._validate_inputs(inputs)
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    161 final_outputs: Dict[str, Any] = self.prep_outputs(
    162     inputs, outputs, return_only_outputs
    163 )
    164 except BaseException as e:

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py:145, in BaseRetrievalQA._call(self, inputs, run_manager)
    143 else:
    144     docs = self._get_docs(question)  # type: ignore[call-arg]
--> 145 answer = self.combine_documents_chain.run(
    146     input_documents=docs, question=question, callbacks=_run_manager.get_child()
    147 )
    149 if self.return_source_documents:
    150     return {self.output_key: answer, "source_documents": docs}

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:180, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    178 warned = True
    179 emit_warning()
--> 180 return wrapped(*args, **kwargs)

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:605, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    600 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    601     _output_key
    602 ]
    604 if kwargs and not args:
--> 605 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    606     _output_key
    607 ]
    609 if not kwargs and not args:
    610     raise ValueError(
    611         "`run` supported with either positional arguments or keyword arguments,"
    612         " but none were provided."
    613     )

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:180, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    178 warned = True
    179 emit_warning()
--> 180 return wrapped(*args, **kwargs)

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:383, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    351 """Execute the chain.
    352
    353 Args:
   (...)
    374     `Chain.output_keys`.
    375 """
    376 config = {
    377     "callbacks": callbacks,
    378     "tags": tags,
    379     "metadata": metadata,
    380     "run_name": run_name,
    381 }
--> 383 return self.invoke(
    384     inputs,
    385     cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
    386     return_only_outputs=return_only_outputs,
    387     include_run_info=include_run_info,
    388 )

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
    164 except BaseException as e:
    165     run_manager.on_chain_error(e)
--> 166     raise e
    167 run_manager.on_chain_end(outputs)
    169 if include_run_info:

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    153 try:
    154     self._validate_inputs(inputs)
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    161 final_outputs: Dict[str, Any] = self.prep_outputs(
    162     inputs, outputs, return_only_outputs
    163 )
    164 except BaseException as e:

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py:137, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
    135 # Other keys are assumed to be needed for LLM prediction
    136 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 137 output, extra_return_dict = self.combine_docs(
    138     docs, callbacks=_run_manager.get_child(), **other_keys
    139 )
    140 extra_return_dict[self.output_key] = output
    141 return extra_return_dict

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py:244, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
    242 inputs = self._get_inputs(docs, **kwargs)
    243 # Call predict on the LLM.
--> 244 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:316, in LLMChain.predict(self, callbacks, **kwargs)
    301 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
    302     """Format prompt with kwargs and pass to LLM.
    303
    304     Args:
   (...)
    314         completion = llm.predict(adjective="funny")
    315     """
--> 316 return self(kwargs, callbacks=callbacks)[self.output_key]

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:180, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    178 warned = True
    179 emit_warning()
--> 180 return wrapped(*args, **kwargs)

File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:383, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    351 """Execute the chain.
    352
    353 Args:
   (...)
    374     `Chain.output_keys`.
    375 """
    376 config = {
    377     "callbacks": callbacks,
    378     "tags": tags,
    379     "metadata": metadata,
    380     "run_name": run_name,
    381 }
--> 383 return self.invoke(
[384](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:384) inputs,
[385](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:385) cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
[386](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:386) return_only_outputs=return_only_outputs,
[387](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:387) include_run_info=include_run_info,
[388](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:388) )
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
[164](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:164) except BaseException as e:
[165](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:165) run_manager.on_chain_error(e)
--> [166](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:166) raise e
[167](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:167) run_manager.on_chain_end(outputs)
[169](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:169) if include_run_info:
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
[153](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:153) try:
[154](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:154) self._validate_inputs(inputs)
[155](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:155) outputs = (
--> [156](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:156) self._call(inputs, run_manager=run_manager)
[157](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:157) if new_arg_supported
[158](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:158) else self._call(inputs)
[159](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:159) )
[161](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:161) final_outputs: Dict[str, Any] = self.prep_outputs(
[162](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:162) inputs, outputs, return_only_outputs
[163](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:163) )
[164](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/base.py:164) except BaseException as e:
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:126, in LLMChain._call(self, inputs, run_manager)
[121](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:121) def _call(
[122](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:122) self,
[123](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:123) inputs: Dict[str, Any],
[124](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:124) run_manager: Optional[CallbackManagerForChainRun] = None,
[125](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:125) ) -> Dict[str, str]:
--> [126](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:126) response = self.generate([inputs], run_manager=run_manager)
[127](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:127) return self.create_outputs(response)[0]
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:138, in LLMChain.generate(self, input_list, run_manager)
[136](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:136) callbacks = run_manager.get_child() if run_manager else None
[137](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:137) if isinstance(self.llm, BaseLanguageModel):
--> [138](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:138) return self.llm.generate_prompt(
[139](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:139) prompts,
[140](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:140) stop,
[141](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:141) callbacks=callbacks,
[142](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:142) **self.llm_kwargs,
[143](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:143) )
[144](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:144) else:
[145](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:145) results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
[146](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:146) cast(List, prompts), {"callbacks": callbacks}
[147](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain/chains/llm.py:147) )
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:777, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
[769](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:769) def generate_prompt(
[770](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:770) self,
[771](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:771) prompts: List[PromptValue],
(...)
[774](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:774) **kwargs: Any,
[775](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:775) ) -> LLMResult:
[776](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:776) prompt_messages = [p.to_messages() for p in prompts]
--> [777](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:777) return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:634, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
[632](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:632) if run_managers:
[633](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:633) run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> [634](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:634) raise e
[635](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:635) flattened_outputs = [
[636](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:636) LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
[637](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:637) for res in results
[638](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:638) ]
[639](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:639) llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:624, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
[621](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:621) for i, m in enumerate(messages):
[622](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:622) try:
[623](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:623) results.append(
--> [624](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:624) self._generate_with_cache(
[625](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:625) m,
[626](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:626) stop=stop,
[627](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:627) run_manager=run_managers[i] if run_managers else None,
[628](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:628) **kwargs,
[629](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:629) )
[630](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:630) )
[631](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:631) except BaseException as e:
[632](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:632) if run_managers:
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:846, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
[844](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:844) else:
[845](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:845) if inspect.signature(self._generate).parameters.get("run_manager"):
--> [846](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:846) result = self._generate(
[847](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:847) messages, stop=stop, run_manager=run_manager, **kwargs
[848](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:848) )
[849](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:849) else:
[850](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:850) result = self._generate(messages, stop=stop, **kwargs)
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:523, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[521](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:521) system = self.system_prompt_with_tools
[522](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:522) else:
--> [523](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:523) prompt = ChatPromptAdapter.convert_messages_to_prompt(
[524](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:524) provider=provider, messages=messages, model=self._get_model()
[525](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:525) )
[527](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:527) if stop:
[528](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:528) params["stop_sequences"] = stop
File ~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:359, in ChatPromptAdapter.convert_messages_to_prompt(cls, provider, messages, model)
[353](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:353) prompt = convert_messages_to_prompt_anthropic(
[354](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:354) messages=messages,
[355](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:355) human_prompt="\n\nUser:",
[356](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:356) ai_prompt="\n\nBot:",
[357](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:357) )
[358](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:358) else:
--> [359](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:359) raise NotImplementedError(
[360](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:360) f"Provider {provider} model does not support chat."
[361](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:361) )
[362](https://file+.vscode-resource.vscode-cdn.net/Users/anirudh.shrinivason/Downloads/~/miniconda3/envs/pal_test/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock.py:362) return prompt
NotImplementedError: Provider cohere model does not support chat.
```
### Description
I would like to use the cohere models on bedrock. I am able to run the below code properly:
```python
import boto3

session = boto3.client(
    service_name='bedrock-runtime',
    aws_access_key_id='',
    aws_secret_access_key='',
    region_name='us-east-1'
)

from langchain_aws import ChatBedrock

params = {
    "client": session,
    "region_name": "us-east-1",
    "model_id": "cohere.command-r-plus-v1:0",
    "beta_use_converse_api": True
}
model = ChatBedrock(**params)
response = model.invoke("What is Cohere!")
```
So, I am not sure why I am receiving the "Provider cohere model does not support chat." error, when chat is supported for cohere models on Bedrock.
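For context, the traceback shows the failure coming from the legacy (non-Converse) prompt adapter, which only knows a chat prompt format for certain providers. A simplified, hypothetical sketch of that dispatch (not the actual `langchain_aws` code) illustrates why the cohere model hits `NotImplementedError` on this path, while the working snippet above bypasses it via `beta_use_converse_api=True`:

```python
def convert_messages_to_prompt(provider: str, messages: list[tuple[str, str]]) -> str:
    # Hypothetical simplification of the adapter logic seen in the traceback:
    # only providers with a known legacy chat format get a prompt; all other
    # providers raise, which is the error reported above for "cohere".
    if provider == "anthropic":
        return "".join(f"\n\n{role}: {text}" for role, text in messages)
    raise NotImplementedError(f"Provider {provider} model does not support chat.")
```

This suggests the failing call goes through the legacy prompt path rather than the Converse API path used by the working snippet.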
### System Info
aiohappyeyeballs==2.3.4
aiohttp==3.10.0
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
appnope==0.1.4
asttokens==2.4.1
attrs==24.1.0
beautifulsoup4==4.12.3
black==24.8.0
boto3==1.34.149
botocore==1.34.149
cachetools==5.5.0
certifi==2024.7.4
cfgv==3.4.0
charset-normalizer==3.3.2
click==8.1.7
cohere==5.11.3
comm==0.2.2
dataclasses-json==0.6.7
debugpy==1.8.5
decorator==5.1.1
Deprecated==1.2.14
dirtyjson==1.0.8
distlib==0.3.8
distro==1.9.0
executing==2.1.0
fastavro==1.9.5
filelock==3.15.4
frozenlist==1.4.1
fsspec==2024.6.1
google-api-core==2.23.0
google-api-python-client==2.153.0
google-auth==2.36.0
google-auth-httplib2==0.2.0
google-cloud-core==2.4.1
google-cloud-storage==2.18.2
google-crc32c==1.6.0
google-resumable-media==2.7.2
googleapis-common-protos==1.66.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httplib2==0.22.0
httpx==0.27.0
httpx-sse==0.4.0
huggingface-hub==0.24.2
identify==2.6.0
idna==3.7
iniconfig==2.0.0
ipykernel==6.29.5
ipython==8.27.0
ipywidgets==8.1.5
jedi==0.19.1
jmespath==1.0.1
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.2
jupyter_core==5.7.2
jupyterlab_widgets==3.0.13
langchain==0.2.15
langchain-aws==0.1.17
langchain-cohere==0.2.3
langchain-community==0.2.15
langchain-core==0.2.37
langchain-experimental==0.0.64
langchain-text-splitters==0.2.2
langsmith==0.1.110
markdown-it-py==3.0.0
markdown_pdf==1.3
marshmallow==3.21.3
matplotlib-inline==0.1.7
mdurl==0.1.2
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.3
nltk==3.8.1
nodeenv==1.9.1
numpy==1.26.4
openai==1.38.0
orjson==3.10.7
packaging==24.1
pandas==2.2.2
parameterized==0.9.0
parso==0.8.4
pathspec==0.12.1
pexpect==4.9.0
pillow==10.4.0
platformdirs==4.2.2
pluggy==1.5.0
pre-commit==3.8.0
prompt_toolkit==3.0.47
proto-plus==1.25.0
protobuf==5.28.3
psutil==6.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pyasn1==0.6.1
pyasn1_modules==0.4.1
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
PyMuPDF==1.24.2
PyMuPDFb==1.24.1
pyparsing==3.2.0
pypdf==5.1.0
pytest==8.2.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
pyzmq==26.2.0
regex==2024.7.24
requests==2.32.3
rsa==4.9
s3transfer==0.10.2
six==1.16.0
sniffio==1.3.1
soupsieve==2.6
SQLAlchemy==2.0.31
stack-data==0.6.3
tabulate==0.9.0
tenacity==8.5.0
tiktoken==0.7.0
tokenizers==0.19.1
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
types-requests==2.32.0.20240712
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.1
uritemplate==4.1.1
urllib3==2.2.2
virtualenv==20.26.3
wcwidth==0.2.13
widgetsnbextension==4.0.13
wikipedia==1.4.0
wrapt==1.16.0
yarl==1.9.4 | 🤖:bug | low | Critical |
2,667,506,428 | next.js | `<Link />` and `redirect` navigation causes route handler being called twice | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/t66xm6
### To Reproduce
1. Start the application in development (`next dev`)
2. Navigate to `/`
3. Click “Link to /route”, “Link to /route without prefetch” or “redirect to /route” to see the current behavior of `route.js` handler
4. Click “Link to /page” or “redirect to /page” to see the behavior of `page.js` **for comparison**
### Current vs. Expected behavior
#### Current behavior
The `GET` handler is called twice per click (`route` is logged twice):
- clicking a `<Link />`: the first request has an `_rsc` query string, the second does not
- server action `redirect`: the first request has an `rsc: 1` HTTP header, the second does not
#### Expected behavior
`GET` route handler should be called only once (similar to current behavior of `page.js`) since it's not a React component and there's no need to prefetch
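One way to tell the two calls apart in the reproduction's server logs is to check for the prefetch markers described above. A hypothetical helper for that (illustrative only, not a Next.js API):

```javascript
// Distinguish the RSC/prefetch-style request from the plain one, based on the
// markers observed in this issue: an `_rsc` query param (Link navigation) or
// an `rsc: 1` header (server-action redirect).
function isRscPrefetch(url, headers) {
  const hasRscParam = new URL(url).searchParams.has("_rsc");
  const hasRscHeader = headers.get("rsc") === "1";
  return hasRscParam || hasRscHeader;
}
```

Logging this flag inside the `route.js` handler makes the duplicate pair visible: the first invocation returns `true`, the second `false`.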
### Provide environment information
```bash
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.15 // Latest available version is detected (15.0.4-canary.15).
eslint-config-next: N/A
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
Might relate to #57257, but that one is about `page.js` instead of the `route.js` handler.
I've tested with [v14.2.0-canary.48](https://github.com/vercel/next.js/releases/tag/v14.2.0-canary.48), where that issue is resolved, and the behavior is the same as on the latest canary version.
| bug,Navigation | low | Minor |
2,667,574,106 | deno | @grpc/grpc-js cannot reconnect after server restart | Hello,
I am using `@grpc/grpc-js` in version `1.12.2`.
I am reporting a bug related to this issue on the owner's repo: https://github.com/grpc/grpc-node/issues/2853
The bug does not occur when using Node, only with Deno.
The steps to reproduce: make one successful RPC call, then restart the server and try to make another call.
Any idea why this is happening?
Thanks...
Version: Deno 2.0.6
| bug,node compat | low | Critical |
2,667,592,534 | PowerToys | Enable/ Disable Fancy Zones with hotkey | ### Description of the new feature / enhancement
I would like to be able to enable/ disable the Fancy Zones module through a hotkey.
### Scenario when this would be used?
This is useful when software does not work well with Fancy Zones for its pop-up windows. In that case, the initial windows can be positioned using Fancy Zones, then I would disable it, so I do not get conflicting pop-up positions. Also, it would allow me to switch between Fancy Zones Windows key positioning and the default Windows one.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,667,618,765 | PowerToys | New+ supports generating file names from variables | ### Description of the new feature / enhancement
For example, dates, time stamps, etc.
### Scenario when this would be used?
Since it is a template, the new file will usually need to be renamed with a date or timestamp suffix to differentiate versions; generating that automatically would be very convenient!
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,667,625,440 | vscode | Regex replace matches without consuming and leaks groups between matches |
Type: <b>Bug</b>
1. Create a new directory
2. Create a text file in it with the contents `abcde=fghijkl=mnopq`
3. Go to the search tab
4. Enable regex
5. Enter replacement `$1 = $2`
6. Enter search term `([^= ])(?:=)([^\n= ])`
When doing this, the replacements end up like this:

I would expect the result to be `abcde = fghijkl = mnopq`.
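The overlap comes from the second capture group consuming the character after `=`, so back-to-back separators (as in `a=b=c`) lose every second match; a lookahead leaves that character unconsumed. A minimal Node sketch of the two patterns — note the leaked `$1`/`$2` artifacts in the screenshot are the editor bug itself, separate from this regex behavior:

```javascript
const input = "a=b=c";

// Pattern from the report: the char after '=' is consumed, so after matching
// "a=b" the scan resumes at the second '=', which can no longer match.
const naive = input.replace(/([^= ])(?:=)([^\n= ])/g, "$1 = $2");
// naive === "a = b=c"  (second '=' is missed)

// Lookahead variant: assert the following char without consuming it.
const fixed = input.replace(/([^= ])=(?=[^\n= ])/g, "$1 = ");
// fixed === "a = b = c"
```

This also explains why VS Code's result differs per occurrence: each replacement restarts matching after the consumed character.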
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Linux x64 6.4.0-150600.23.25-default
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 1700X Eight-Core Processor (16 x 1996)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 2, 2|
|Memory (System)|62.73GB (35.48GB free)|
|Process Argv|--disable-extensions . --crash-reporter-id 3e86f7c6-6835-4d7d-9a0f-6ac8343df962|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|plasma5|
|XDG_CURRENT_DESKTOP|KDE|
|XDG_SESSION_DESKTOP|KDE|
|XDG_SESSION_TYPE|x11|
</details>
Extensions disabled
<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
vscrpc:30673769
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
impr_priority:31102340
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31134641
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31181875
```
</details>
<!-- generated by issue reporter --> | search,under-discussion | low | Critical |
2,667,634,746 | tauri | [bug] [v2] [android] [windows] Nuxt.js-based Android app - Cannot redefine property & tauri.localhost | ### Describe the bug
The result is an Android emulator showing a white screen and a few errors; the desktop app works correctly.
11-18 01:50:55.088 20184 20184 E Tauri/Console: File: - Line 138 - Msg: Uncaught TypeError: Cannot redefine property: postMessage
11-18 01:50:55.088 20184 20184 E Tauri/Console: File: - Line 2 - Msg: Uncaught TypeError: Cannot redefine property: metadata
11-18 01:50:55.089 20184 20184 E Tauri/Console: File: - Line 25 - Msg: Uncaught TypeError: Cannot redefine property: __TAURI_PATTERN__
11-18 01:50:55.089 20184 20184 E Tauri/Console: File: - Line 5 - Msg: Uncaught TypeError: Cannot redefine property: path
I also found that on the Android device the tauri.localhost URL does not work with Nuxt and returns 500, since the URLs that resources are loaded from are incorrect; manually entering localhost:3000 moves us further into the process.
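For context, the "Cannot redefine property" messages are exactly what `Object.defineProperty` throws when a non-configurable property is defined twice — consistent with an init script being injected more than once. A minimal illustration (not Tauri's actual code):

```javascript
const target = {};
// First injection defines the property as non-configurable,
// like the globals (postMessage, metadata, ...) in the logs above.
Object.defineProperty(target, "postMessage", {
  value: () => {},
  configurable: false,
});
try {
  // A second injection tries to define it again with a new value:
  Object.defineProperty(target, "postMessage", { value: () => {} });
} catch (e) {
  console.log(e instanceof TypeError); // true
  console.log(e.message); // Cannot redefine property: postMessage
}
```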
### Reproduction
https://github.com/gkkirilov/mobile-test
1. Clone the repository
2. `npm install`
3. `npx tauri android dev`
4. Open the web inspector for Android from chrome://inspect/#devices
5. Change the URL to localhost:3000
6. See the errors in the console
### Expected behavior
To show the content of the app on the android emulator
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.26100 x86_64 (X64)
✔ WebView2: 130.0.2849.80
✔ MSVC:
- Visual Studio Build Tools 2017
- Visual Studio Build Tools 2019
- Visual Studio Build Tools 2022
- Visual Studio Community 2022
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.11.1
- yarn: 1.22.21
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.1.0
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:3000/
- framework: Vue.js (Nuxt)
- bundler: Webpack
```
### Stack trace
_No response_
### Additional context
Will test on a Mac in the next hour to see if the result is the same | type: bug,status: needs triage | low | Critical |
2,667,752,827 | go | cmd/compile: some code in DSE relies on the order of values | ### Go version
go version devel go1.24-3ca78afb3b Mon Nov 18 04:56:52 2024 +0000 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GOAMD64='v1'
GOARCH='amd64'
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOOS='linux'
GOVERSION='devel go1.24-3ca78afb3b Mon Nov 18 04:56:52 2024 +0000'
```
### What did you do?
Compile the following code, the first time with no `-gcflags`, the second with `-gcflags='-d=ssa/check/on'`, then compare the assembly code.
```go
func foo(v uint64) (b [8]byte) {
b[0] = byte(v)
b[1] = byte(v >> 8)
b[2] = byte(v >> 16)
b[3] = byte(v >> 24)
b[4] = byte(v >> 32)
b[5] = byte(v >> 40)
b[6] = byte(v >> 48)
b[7] = byte(v >> 56)
return b
}
```
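As an aside, the eight byte stores in `foo` amount to a little-endian encoding of `v`, which `encoding/binary` can confirm — the zeroing flagged below is redundant precisely because every byte of the return value is overwritten:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// foo is the function from the report: it stores each byte of v into b,
// lowest byte first, i.e. a little-endian encoding.
func foo(v uint64) (b [8]byte) {
	b[0] = byte(v)
	b[1] = byte(v >> 8)
	b[2] = byte(v >> 16)
	b[3] = byte(v >> 24)
	b[4] = byte(v >> 32)
	b[5] = byte(v >> 40)
	b[6] = byte(v >> 48)
	b[7] = byte(v >> 56)
	return b
}

func main() {
	v := uint64(0x1122334455667788)
	got := foo(v)
	var want [8]byte
	binary.LittleEndian.PutUint64(want[:], v)
	fmt.Println(got == want) // true
}
```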
### What did you see happen?
The assembly code generated without `-gcflags`:
```
foo.go:5 0x464fc0 88442408 MOVB AL, 0x8(SP)
foo.go:6 0x464fc4 4889c1 MOVQ AX, CX
foo.go:6 0x464fc7 48c1e808 SHRQ $0x8, AX
foo.go:6 0x464fcb 88442409 MOVB AL, 0x9(SP)
foo.go:7 0x464fcf 4889c8 MOVQ CX, AX
foo.go:7 0x464fd2 48c1e910 SHRQ $0x10, CX
foo.go:7 0x464fd6 884c240a MOVB CL, 0xa(SP)
foo.go:8 0x464fda 4889c1 MOVQ AX, CX
foo.go:8 0x464fdd 48c1e818 SHRQ $0x18, AX
foo.go:8 0x464fe1 8844240b MOVB AL, 0xb(SP)
foo.go:9 0x464fe5 4889c8 MOVQ CX, AX
foo.go:9 0x464fe8 48c1e920 SHRQ $0x20, CX
foo.go:9 0x464fec 884c240c MOVB CL, 0xc(SP)
foo.go:10 0x464ff0 4889c1 MOVQ AX, CX
foo.go:10 0x464ff3 48c1e828 SHRQ $0x28, AX
foo.go:10 0x464ff7 8844240d MOVB AL, 0xd(SP)
foo.go:11 0x464ffb 4889c8 MOVQ CX, AX
foo.go:11 0x464ffe 48c1e930 SHRQ $0x30, CX
foo.go:11 0x465002 884c240e MOVB CL, 0xe(SP)
foo.go:12 0x465006 48c1e838 SHRQ $0x38, AX
foo.go:12 0x46500a 8844240f MOVB AL, 0xf(SP)
foo.go:13 0x46500e c3 RET
```
And the assembly code generated with `-gcflags='-d=ssa/check/on'`:
```
foo.go:4 0x464fc0 48c744240800000000 MOVQ $0x0, 0x8(SP) // the redundant zeroing
foo.go:5 0x464fc9 88442408 MOVB AL, 0x8(SP)
foo.go:6 0x464fcd 4889c1 MOVQ AX, CX
foo.go:6 0x464fd0 48c1e808 SHRQ $0x8, AX
foo.go:6 0x464fd4 88442409 MOVB AL, 0x9(SP)
foo.go:7 0x464fd8 4889c8 MOVQ CX, AX
foo.go:7 0x464fdb 48c1e910 SHRQ $0x10, CX
foo.go:7 0x464fdf 884c240a MOVB CL, 0xa(SP)
foo.go:8 0x464fe3 4889c1 MOVQ AX, CX
foo.go:8 0x464fe6 48c1e818 SHRQ $0x18, AX
foo.go:8 0x464fea 8844240b MOVB AL, 0xb(SP)
foo.go:9 0x464fee 4889c8 MOVQ CX, AX
foo.go:9 0x464ff1 48c1e920 SHRQ $0x20, CX
foo.go:9 0x464ff5 884c240c MOVB CL, 0xc(SP)
foo.go:10 0x464ff9 4889c1 MOVQ AX, CX
foo.go:10 0x464ffc 48c1e828 SHRQ $0x28, AX
foo.go:10 0x465000 8844240d MOVB AL, 0xd(SP)
foo.go:11 0x465004 4889c8 MOVQ CX, AX
foo.go:11 0x465007 48c1e930 SHRQ $0x30, CX
foo.go:11 0x46500b 884c240e MOVB CL, 0xe(SP)
foo.go:12 0x46500f 48c1e838 SHRQ $0x38, AX
foo.go:12 0x465013 8844240f MOVB AL, 0xf(SP)
foo.go:13 0x465017 c3 RET
```
It seems that the redundant zeroing cannot be removed when compiling with `-gcflags='-d=ssa/check/on'`.
### What did you expect to see?
The redundant zeroing can be removed even when compiling with `-gcflags='-d=ssa/check/on'`. | NeedsInvestigation,compiler/runtime | low | Minor |
2,667,772,132 | langchain | with_structured_output not working with OpenAI ChatLiteLLM | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code
```python
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage
from pydantic import BaseModel, Field
import os
os.environ["OPENAI_API_KEY"] = "xxx"
class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")

model = ChatLiteLLM(model="gpt-4o")
structured_llm = model.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
```
will raise
```
BadRequestError: litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Invalid value: 'any'. Supported values are: 'none', 'auto', and 'required'.", 'type': 'invalid_request_error', 'param': 'tool_choice', 'code': 'invalid_value'}}
```
### Error Message and Stack Trace (if applicable)
```
[/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py](https://localhost:8080/#) in completion(self, model_response, timeout, optional_params, logging_obj, model, messages, print_verbose, api_key, api_base, acompletion, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider, drop_params)
789 headers, response = (
--> 790 self.make_sync_openai_chat_completion_request(
791 openai_client=openai_client,
[/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py](https://localhost:8080/#) in make_sync_openai_chat_completion_request(self, openai_client, data, timeout)
650 else:
--> 651 raise e
652
[/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py](https://localhost:8080/#) in make_sync_openai_chat_completion_request(self, openai_client, data, timeout)
632 try:
--> 633 raw_response = openai_client.chat.completions.with_raw_response.create(
634 **data, timeout=timeout
[/usr/local/lib/python3.10/dist-packages/openai/_legacy_response.py](https://localhost:8080/#) in wrapped(*args, **kwargs)
355
--> 356 return cast(LegacyAPIResponse[R], func(*args, **kwargs))
357
[/usr/local/lib/python3.10/dist-packages/openai/_utils/_utils.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
274 raise TypeError(msg)
--> 275 return func(*args, **kwargs)
276
[/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py](https://localhost:8080/#) in create(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
828 validate_response_format(response_format)
--> 829 return self._post(
830 "/chat/completions",
[/usr/local/lib/python3.10/dist-packages/openai/_base_client.py](https://localhost:8080/#) in post(self, path, cast_to, body, options, files, stream, stream_cls)
1277 )
-> 1278 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
1279
[/usr/local/lib/python3.10/dist-packages/openai/_base_client.py](https://localhost:8080/#) in request(self, cast_to, options, remaining_retries, stream, stream_cls)
954
--> 955 return self._request(
956 cast_to=cast_to,
[/usr/local/lib/python3.10/dist-packages/openai/_base_client.py](https://localhost:8080/#) in _request(self, cast_to, options, retries_taken, stream, stream_cls)
1058 log.debug("Re-raising status error")
-> 1059 raise self._make_status_error_from_response(err.response) from None
1060
BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'any'. Supported values are: 'none', 'auto', and 'required'.", 'type': 'invalid_request_error', 'param': 'tool_choice', 'code': 'invalid_value'}}
During handling of the above exception, another exception occurred:
OpenAIError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/litellm/main.py](https://localhost:8080/#) in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, prediction, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
1605 )
-> 1606 raise e
1607
[/usr/local/lib/python3.10/dist-packages/litellm/main.py](https://localhost:8080/#) in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, prediction, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
1578 else:
-> 1579 response = openai_chat_completions.completion(
1580 model=model,
[/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py](https://localhost:8080/#) in completion(self, model_response, timeout, optional_params, logging_obj, model, messages, print_verbose, api_key, api_base, acompletion, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider, drop_params)
863 error_headers = getattr(error_response, "headers", None)
--> 864 raise OpenAIError(
865 status_code=status_code, message=error_text, headers=error_headers
OpenAIError: Error code: 400 - {'error': {'message': "Invalid value: 'any'. Supported values are: 'none', 'auto', and 'required'.", 'type': 'invalid_request_error', 'param': 'tool_choice', 'code': 'invalid_value'}}
During handling of the above exception, another exception occurred:
BadRequestError Traceback (most recent call last)
[<ipython-input-3-181fee518d8c>](https://localhost:8080/#) in <cell line: 14>()
12 model = ChatLiteLLM(model="gpt-4o")
13 structured_llm = model.with_structured_output(Joke)
---> 14 structured_llm.invoke("Tell me a joke about cats")
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
3020 context.run(_set_config_context, config)
3021 if i == 0:
-> 3022 input = context.run(step.invoke, input, config, **kwargs)
3023 else:
3024 input = context.run(step.invoke, input, config)
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
5352 **kwargs: Optional[Any],
5353 ) -> Output:
-> 5354 return self.bound.invoke(
5355 input,
5356 self._merge_configs(config),
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in invoke(self, input, config, stop, **kwargs)
284 return cast(
285 ChatGeneration,
--> 286 self.generate_prompt(
287 [self._convert_input(input)],
288 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate_prompt(self, prompts, stop, callbacks, **kwargs)
784 ) -> LLMResult:
785 prompt_messages = [p.to_messages() for p in prompts]
--> 786 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
787
788 async def agenerate_prompt(
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
641 if run_managers:
642 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643 raise e
644 flattened_outputs = [
645 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
631 try:
632 results.append(
--> 633 self._generate_with_cache(
634 m,
635 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in _generate_with_cache(self, messages, stop, run_manager, **kwargs)
849 else:
850 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851 result = self._generate(
852 messages, stop=stop, run_manager=run_manager, **kwargs
853 )
[/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py](https://localhost:8080/#) in _generate(self, messages, stop, run_manager, stream, **kwargs)
357 message_dicts, params = self._create_message_dicts(messages, stop)
358 params = {**params, **kwargs}
--> 359 response = self.completion_with_retry(
360 messages=message_dicts, run_manager=run_manager, **params
361 )
[/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py](https://localhost:8080/#) in completion_with_retry(self, run_manager, **kwargs)
290 return self.client.completion(**kwargs)
291
--> 292 return _completion_with_retry(**kwargs)
293
294 @pre_init
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in wrapped_f(*args, **kw)
334 copy = self.copy()
335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)
337
338 def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn:
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in __call__(self, fn, *args, **kwargs)
473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
474 while True:
--> 475 do = self.iter(retry_state=retry_state)
476 if isinstance(do, DoAttempt):
477 try:
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in iter(self, retry_state)
374 result = None
375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
377 return result
378
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in <lambda>(rs)
396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398 self._add_action_func(lambda rs: rs.outcome.result())
399 return
400
[/usr/lib/python3.10/concurrent/futures/_base.py](https://localhost:8080/#) in result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
452
453 self._condition.wait(timeout)
[/usr/lib/python3.10/concurrent/futures/_base.py](https://localhost:8080/#) in __get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in __call__(self, fn, *args, **kwargs)
476 if isinstance(do, DoAttempt):
477 try:
--> 478 result = fn(*args, **kwargs)
479 except BaseException: # noqa: B902
480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
[/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py](https://localhost:8080/#) in _completion_with_retry(**kwargs)
288 @retry_decorator
289 def _completion_with_retry(**kwargs: Any) -> Any:
--> 290 return self.client.completion(**kwargs)
291
292 return _completion_with_retry(**kwargs)
[/usr/local/lib/python3.10/dist-packages/litellm/utils.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
958 e, traceback_exception, start_time, end_time
959 ) # DO NOT MAKE THREADED - router retry fallback relies on this!
--> 960 raise e
961
962 @wraps(original_function)
[/usr/local/lib/python3.10/dist-packages/litellm/utils.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
847 print_verbose(f"Error while checking max token limit: {str(e)}")
848 # MODEL CALL
--> 849 result = original_function(*args, **kwargs)
850 end_time = datetime.datetime.now()
851 if "stream" in kwargs and kwargs["stream"] is True:
[/usr/local/lib/python3.10/dist-packages/litellm/main.py](https://localhost:8080/#) in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, prediction, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
3058 except Exception as e:
3059 ## Map to OpenAI Exception
-> 3060 raise exception_type(
3061 model=model,
3062 custom_llm_provider=custom_llm_provider,
[/usr/local/lib/python3.10/dist-packages/litellm/litellm_core_utils/exception_mapping_utils.py](https://localhost:8080/#) in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
2134 if exception_mapping_worked:
2135 setattr(e, "litellm_response_headers", litellm_response_headers)
-> 2136 raise e
2137 else:
2138 for error_type in litellm.LITELLM_EXCEPTION_TYPES:
[/usr/local/lib/python3.10/dist-packages/litellm/litellm_core_utils/exception_mapping_utils.py](https://localhost:8080/#) in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
280 ):
281 exception_mapping_worked = True
--> 282 raise BadRequestError(
283 message=f"{exception_provider} - {message}",
284 llm_provider=custom_llm_provider,
BadRequestError: litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Invalid value: 'any'. Supported values are: 'none', 'auto', and 'required'.", 'type': 'invalid_request_error', 'param': 'tool_choice', 'code': 'invalid_value'}}
```
### Description
I am trying to use `with_structured_output` with ChatLiteLLM for OpenAI models. However, it throws an exception. I believe the error is coming from this **[line](https://github.com/langchain-ai/langchain/blob/76e210a34964aa264bc49aa2b583d725694caf8d/libs/core/langchain_core/language_models/chat_models.py#L1238).** My expectation is that it should work and the model should return structured output.
I have also tried Anthropic models with ChatLiteLLM and they work.
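Assuming the root cause is the `tool_choice` value, a minimal sketch (with a hypothetical helper name) of the kind of provider-aware mapping that would avoid the 400 — OpenAI only accepts `'none'`, `'auto'`, and `'required'`, so `'any'` needs translating before the request is sent:

```python
def normalize_tool_choice(tool_choice: str, provider: str) -> str:
    """Map a generic tool_choice value to one the provider supports.

    OpenAI rejects "any"; its closest equivalent is "required".
    Anthropic accepts "any" as-is, which matches the observed behavior.
    """
    if provider == "openai" and tool_choice == "any":
        return "required"
    return tool_choice


print(normalize_tool_choice("any", "openai"))     # required
print(normalize_tool_choice("any", "anthropic"))  # any
```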
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.17
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.142
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | investigate | low | Critical |
2,667,773,321 | next.js | bug: swc external helper breaking webpack's dependency resolution in next 15 | ### Link to the code that reproduces this issue
https://github.com/rishabh3112/next-15-swc-external-helpers-bug
### To Reproduce
1. Clone reproduction repo
2. `npm install`
3. `npm run dev`
### Current vs. Expected behavior
It should work as it did in Next 14.
#### Current: After webpack parsing with external helper (Next 15):
```
__webpack_require__.r(__webpack_exports__);
/* harmony import */ var _swc_helpers_type_of__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! @swc/helpers/_/_type_of */ "./node_modules/@swc/helpers/esm/_type_of.js");
(function(root, factory) {
if (typeof define === "function" && __webpack_require__.amdO) {
define([
"exports"
], function(exports1) {
root.Backbone = factory(root, exports1);
});
} else if (typeof exports !== "undefined") {
factory(root, exports);
} else {
root.Backbone = factory(root, {});
}
})(window, function(root, Backbone) {
// Removed actual backbone code for minimal reproduction
Backbone.Modal = function() {
var something = root.something;
// So that _type_of swc helper import gets added to the code
if ((typeof something === "undefined" ? "undefined" : (0,_swc_helpers_type_of__WEBPACK_IMPORTED_MODULE_0__._)(something)) === "object") {
return "object";
}
return "don't know";
};
return Backbone;
});
```
SWC playground link: [link](https://play.swc.rs/?version=1.9.2&code=H4sIAAAAAAAAA31RMW7DMBDb8wrWQ%2BoARfKAwEv3LulYFIEjnRM1ts6Q5bpB4L9XsqRAS7vYwJFH8qiyGbWwijVKw2xf0NTCsrltcF8BqkFpbz1xA0mN0oSqqlCknQLrdQS2dSfDDuKk%2FCjop2djh%2BLTyT5s4jCRAe%2B7fa3F9cTeICWIeRJ9v7Dn5T%2BD2oHydJGFJx9v1CGBLJLJ35JRKtD%2BT3KP5qt5U05KS57yswInLQfj3Q4H6vibJJzSWLc4JXHBktCwQae06hxiqDcsx0XNrSah7RtLh1aZUzpKsB4sBu7IXpQ%2BO86S%2FzEIjbkM7wx7qS2Ovqyja2uYBC7U9mSgOt8FzuTKq6V0Sa1nh4CLQNZy5uV75tMXCVtkL0l2NPoBxCdb5ZBk%2FWxx1Twt8Ow%2FEUsn713B%2B1%2BA%2B%2BJUmAIAAA%3D%3D&config=H4sIAAAAAAAAA22PSw7CMAxE95wi8potLDgBGw4RBbcKyk%2B2K7WqeneSNCkgsYtn3mTs9aQUvNjATa35mYekiZGOOSu8BNFzVkCWhGzIJoFzd4WLNWjHWKVtd0A0jSglhXxpOLgYGTveNG%2BDHZbvQhN9ImT%2BBQuqw%2Bjwbx3OghS0u6NLSCUqNBWkAuDjc6rJdmW5ZN%2FtCh%2Bob3K0guVHT9b%2FtjeNDDPTMQEAAA%3D%3D)
#### Expected: After webpack parsing without external helper (Next 14):
```
var __WEBPACK_AMD_DEFINE_ARRAY__, __WEBPACK_AMD_DEFINE_RESULT__;(function(root, factory) {
if (true) {
!(__WEBPACK_AMD_DEFINE_ARRAY__ = [
exports
], __WEBPACK_AMD_DEFINE_RESULT__ = (function(exports1) {
root.Backbone = factory(root, exports1);
}).apply(exports, __WEBPACK_AMD_DEFINE_ARRAY__),
__WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__));
} else {}
})(window, function(root, Backbone) {
// Removed actual backbone code for minimal reproduction
Backbone.Modal = function() {
var something = root.something;
// So that _type_of swc helper import gets added to the code
if (typeof something === "object") {
return "object";
}
return "don't know";
};
return Backbone;
});
```
SWC playground link: [link](https://play.swc.rs/?version=1.5.7&code=H4sIAAAAAAAAA31RMW7DMBDb8wrWQ%2BoARfKAwEv3LulYFIEjnRM1ts6Q5bpB4L9XsqRAS7vYwJFH8qiyGbWwijVKw2xf0NTCsrltcF8BqkFpbz1xA0mN0oSqqlCknQLrdQS2dSfDDuKk%2FCjop2djh%2BLTyT5s4jCRAe%2B7fa3F9cTeICWIeRJ9v7Dn5T%2BD2oHydJGFJx9v1CGBLJLJ35JRKtD%2BT3KP5qt5U05KS57yswInLQfj3Q4H6vibJJzSWLc4JXHBktCwQae06hxiqDcsx0XNrSah7RtLh1aZUzpKsB4sBu7IXpQ%2BO86S%2FzEIjbkM7wx7qS2Ovqyja2uYBC7U9mSgOt8FzuTKq6V0Sa1nh4CLQNZy5uV75tMXCVtkL0l2NPoBxCdb5ZBk%2FWxx1Twt8Ow%2FEUsn713B%2B1%2BA%2B%2BJUmAIAAA%3D%3D&config=H4sIAAAAAAAAA22PSw7CMAxE95yi8potLDgBGw5hBRelyk%2B2K7WqeneStCkgsYtn3mTs5dR1MIiBW7fkZx4SshAfc1ZkDopTVoCMRzFsk8K5uYMUq0cnVKV1c0CRX6Q1JZcdBxejUMN3zdtg%2B%2Fm70ESfmER%2BwYJieDn6W0eTEgd0d3KJuESVx4JUAHx8jjW5X6lzom23K3ygtsnRClYeLVn%2FW98mTB8tMQEAAA%3D%3D)
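To see why the injected ESM helper import matters — an illustrative simplification, not webpack's actual internals — the UMD wrapper picks a module system by probing its surroundings. Once webpack parses the file as ESM, neither AMD's `define` nor CommonJS's `exports` is supplied, so the wrapper falls through to a different branch than in Next 14's CommonJS parse:

```javascript
// `env` stands in for the globals the UMD wrapper can see after bundling.
function pickBranch(env) {
  if (typeof env.define === "function" && env.define.amd) return "amd";
  if (typeof env.exports !== "undefined") return "commonjs";
  return "global";
}

// Next 14 (CommonJS parse): webpack supplies `exports`.
console.log(pickBranch({ exports: {} })); // commonjs

// Next 15 with the injected ESM helper: the module is parsed as ESM,
// so no `exports` object exists.
console.log(pickBranch({})); // global
```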
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 21.6.0: Wed Apr 24 06:05:14 PDT 2024; root:xnu-8020.240.18.708.4~1/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 18.18.2
npm: 9.8.1
Yarn: 1.22.22
pnpm: 9.12.2
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
SWC, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Other (Deployed)
### Additional context
It should work as it did in Next 14.
Willing to contribute.
2,667,794,350 | kubernetes | DRA: DRAResourceClaimDeviceStatus API cleanup | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/pull/128240 added claim.status.devices. Some cleanup of the API would be useful.
- Fix inconsistency:
```console
$ diff -c pkg/apis/resource/types.go <(sed -e 's;`json:",inline.*;// inline;' -e 's;metav1.TypeMeta // inline;metav1.TypeMeta;' -e 's; `json.*;;' -e 's/v1\./core./' -e's/metacore./metav1./' -e 's/package v1beta1/package resource/' staging/src/k8s.io/api/resource/v1beta1/types.go | grep -v -e '+genclient' -e '+k8s:prerelease-lifecycle-gen' )
*** pkg/apis/resource/types.go 2024-11-18 10:12:21.388362592 +0100
--- /dev/fd/63 2024-11-18 10:17:52.573838086 +0100
...
***************
*** 1020,1027 ****
// If the device has been configured according to the class and claim
// config references, the `Ready` condition should be True.
//
- // Must not contain more than 8 entries.
- //
// +optional
// +listType=map
// +listMapKey=type
--- 1020,1025 ----
***************
*** 1059,1066 ****
// associated subnet mask.
// e.g.: "192.0.2.5/24" for IPv4 and "2001:db8::5/64" for IPv6.
//
- // Must not contain more than 16 entries.
- //
// +optional
// +listType=atomic
IPs []string
--- 1057,1062 ----
```
- Add named constants for size limits (the maximum number of entries of `IPs`, but also everything else checked by validation).
- Verify that `claim.status.devices.data` really is omitted when not set. It probably needs to be changed to `Data *runtime.RawExtension`.
/assign @LionelJouin
/sig node
/wg device-management
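For the named-constants point, a sketch of what this could look like — the names here are hypothetical; the real constants would live next to the API types and be referenced by validation:

```go
package main

import "fmt"

// Hypothetical constant names for the validation limits mentioned above.
const (
	// allocatedDeviceStatusMaxConditions caps the `conditions` entries
	// ("Must not contain more than 8 entries" in the dropped doc comment).
	allocatedDeviceStatusMaxConditions = 8
	// networkDeviceDataMaxIPs caps the `ips` entries
	// ("Must not contain more than 16 entries" in the dropped doc comment).
	networkDeviceDataMaxIPs = 16
)

func main() {
	fmt.Println(allocatedDeviceStatusMaxConditions, networkDeviceDataMaxIPs)
}
```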
### Why is this needed?
Consistency.
| sig/node,kind/feature,needs-triage,wg/device-management | low | Major |
2,667,798,557 | godot | Freeing a node that has a running Tween or an awaited coroutine method causes a memory leak | ### Tested versions
- Reproducible in 4.4.dev4
### System information
Windows 11, Vulkan API 1.3.280 - Forward+ - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 3060 Ti
### Issue description
If a Tween is running but its bound Node is freed midway, it causes a memory leak.
If a Node's coroutine is awaited but the node is freed midway, it causes a memory leak.
Here is an example.

A diff between two outputs of `get_all_objects_id`, which requires a custom build of Godot:

In foo.tscn's script foo.gd:
```
extends Node2D


func test():
	printt("foo test 1")
	var tween = create_tween()
	tween.tween_interval(1)
	await tween.finished
	printt("foo test 2")
	return 1
```
In main.tscn's script main.gd:
```
extends Node2D

const FOO = preload("res://foo.tscn")

var result


func _ready() -> void:
	# Object count increases at a rate of 1 Object per 5 seconds.
	var tween = create_tween()
	tween.set_loops(-1)
	tween.tween_callback(go).set_delay(5)

	# Object count increases at a rate of 3 Objects per 5 seconds.
	#var tween = create_tween()
	#tween.set_loops(-1)
	#tween.tween_callback(go2).set_delay(5)

	# These lines print all object ids; they need a custom build of Godot.
	# They can stay commented if you don't need to know which Object is leaking.
	#var tween2 = create_tween()
	#tween2.set_loops(-1)
	#tween2.tween_callback(get_all_objects_id).set_delay(5.5)


## Observe the Object count increasing at a rate of 1 Object per 5 seconds.
## The leaking object is a Tween created by foo.
func go() -> void:
	var foo = FOO.instantiate()
	add_child(foo)
	# 0.5 seconds later, free foo to interrupt foo's awaiting.
	# The Object count keeps increasing:
	# this leaks 1 Tween Object.
	var tween = create_tween()
	tween.tween_callback(Callable(foo, "queue_free")).set_delay(0.5)
	# Let foo await a 1-second timer's timeout.
	foo.test()


## Observe the Object count increasing at a rate of 3 Objects per 5 seconds.
## The leaking objects are 1 Tween created by foo and 2 GDScriptFunctionState.
func go2() -> void:
	var foo = FOO.instantiate()
	add_child(foo)
	# 0.5 seconds later, free foo to interrupt foo's awaiting.
	# The Object count keeps increasing:
	# this leaks 1 Tween and 2 GDScriptFunctionState Objects.
	var tween = create_tween()
	tween.tween_callback(Callable(foo, "queue_free")).set_delay(0.5)
	# Let foo await a 1-second timer's timeout.
	result = await foo.test()
```
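As a side note (not part of the MRP): without a custom build, the overall Object count can also be polled from GDScript via the Performance singleton — it shows how many objects exist, though not which ones are leaking:

```gdscript
func _process(_delta: float) -> void:
	# Tracks the same number as the editor's Monitor Object count graph.
	print(Performance.get_monitor(Performance.OBJECT_COUNT))
```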
C++ modification if you want to know which object is leaking:
In object.h:
```
TypedArray<uint64_t> get_all_objects_id();
```
In object.cpp:
```
// Note: this patch uses GDScript symbols, so object.cpp will likely also need
// #include "modules/gdscript/gdscript.h".

void Object::notify_property_list_changed() {
	emit_signal(CoreStringName(property_list_changed));
}

static String _get_element_type(Variant::Type builtin_type, const StringName &native_type, const Ref<Script> &script_type) {
	if (script_type.is_valid() && script_type->is_valid()) {
		return GDScript::debug_get_script_name(script_type);
	} else if (native_type != StringName()) {
		return native_type.operator String();
	} else {
		return Variant::get_type_name(builtin_type);
	}
}

static String _get_var_type(Object *p_var) {
	String basestr;
	Object *bobj = p_var;
	if (bobj->is_class_ptr(GDScriptNativeClass::get_class_ptr_static())) {
		basestr = Object::cast_to<GDScriptNativeClass>(bobj)->get_name();
	} else {
		basestr = bobj->get_class();
		if (bobj->get_script_instance()) {
			basestr += " (" + GDScript::debug_get_script_name(bobj->get_script_instance()->get_script()) + ")";
		}
	}
	return basestr;
}

static TypedArray<uint64_t> all_objects_id;

static void _get_obj_id(Object *p_obj) {
	all_objects_id.push_back(p_obj->get_instance_id());
	String basestr = _get_var_type(p_obj);
	print_line("xxxx uitos:" + uitos(p_obj->get_instance_id()) + " itos:" + itos(p_obj->get_instance_id()) + " " + basestr);
}

TypedArray<uint64_t> Object::get_all_objects_id() {
	all_objects_id.clear();
	ObjectDB::debug_objects(_get_obj_id);
	return all_objects_id;
}

void Object::_bind_methods() {
	ClassDB::bind_method(D_METHOD("get_all_objects_id"), &Object::get_all_objects_id);
	// ... existing bindings unchanged ...

### Steps to reproduce
1. Run the MRP and watch the Object count figure in the Monitor.
2. Comment out the 3 lines using `tween_callback(go)`, uncomment the 3 lines using `tween_callback(go2)`, and re-run.
### Minimal reproduction project (MRP)
[test_gdscriptfunctionstate.zip](https://github.com/user-attachments/files/17797845/test_gdscriptfunctionstate.zip)
| bug,topic:core | low | Critical |
2,667,813,509 | react-native | when upgrading to react native version 0.76.1 ios/Pods/RCT-Folly/folly/functional/Invoke.h:22:10: 'boost/preprocessor/control/expr_iif.hpp' file not found 22 | #include <boost/preprocessor/control/expr_iif.hpp>. when run react-native run-ios | ### Description
When upgrading to React Native 0.76.1 the build fails; it seems some files are missing from certain Pod dependencies.
### Steps to reproduce
Upgrade React Native from 0.72.4 to 0.76.1 using the same steps as in https://react-native-community.github.io/upgrade-helper/?from=0.72.4&to=0.76.1
### React Native Version
0.76.1
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1
CPU: (11) arm64 Apple M3 Pro
Memory: 121.94 MB / 18.00 GB
Shell:
version: 3.2.57
path: /bin/bash
Binaries:
Node:
version: 18.20.4
path: ~/.nvm/versions/node/v18.20.4/bin/node
Yarn:
version: 1.22.22
path: /usr/local/bin/yarn
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v18.20.4/bin/npm
Watchman:
version: 2024.11.11.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/tamimiejada/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: 2021.3 AI-213.7172.25.2113.9123335
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /Users/tamimiejada/.jenv/shims/javac
Ruby:
version: 3.1.2
path: /Users/tamimiejada/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
[React-perflogger] Compiling FuseboxTracer.cpp
❌ path-to-my-project/ios/Pods/RCT-Folly/folly/functional/Invoke.h:22:10: 'boost/preprocessor/control/expr_iif.hpp' file not found
22 | #include <boost/preprocessor/control/expr_iif.hpp>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error Failed to build ios project. "xcodebuild" exited with error code '65'. To debug build logs further, consider building your app with Xcode.app, by opening 'myProject.xcworkspace'.
```
### Reproducer
https://github.com/facebook/react-native
### Screenshots and Videos
_No response_ | Platform: iOS,Needs: Repro,Newer Patch Available,Needs: Attention | low | Critical |
2,667,896,854 | vscode | Updates not being installed automatically |
Type: <b>Bug</b>
I am having to manually install all the updates.
VS Code version: Code 1.95.1 (65edc4939843c90c34d61f4ce11704f09d3e5cb6, 2024-10-31T05:14:54.222Z)
OS version: Linux x64 6.8.0-48-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 4700U with Radeon Graphics (8 x 3891)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|4, 4, 4|
|Memory (System)|30.75GB (10.95GB free)|
|Process Argv|--crash-reporter-id c456e7cc-403b-4b0c-be5f-c2666a60695d|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|x11|
</details><details><summary>Extensions (9)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-intelephense-client|bme|1.12.6
browserstack-vscode|bro|1.2.4
githistory|don|0.6.20
gitlens|eam|16.0.1
vscode-mysql|for|0.5.0
mysql-syntax|jak|1.3.1
php-cs-fixer|jun|0.3.21
vscode-docker|ms-|1.29.3
remote-containers|ms-|0.388.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl2:31139839
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31181875
```
</details>
<!-- generated by issue reporter --> | bug,install-update | low | Critical |
2,667,919,771 | vscode | SCM - Add margin in file tree | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Please add more margin (5px, for example) in the file tree,
because without a margin the entries sit too close to the vertical line, which is not very aesthetic.
 | bug,ux,scm | low | Minor |
2,667,966,010 | godot | `AudioStreamInteractive` breaks when modified while already playing | ### Tested versions
Reproducible in:
- v4.3.stable [506d6e4]
- v4.4.dev [5efd124ca]
### System information
Nobara Linux 40 (KDE Plasma) on Wayland
### Issue description
`AudioStreamInteractive` can be modified at runtime by increasing or decreasing `clip_count` and using `set_clip_stream()`.
I ran into two problems specifically while the stream is already being played:
- Setting the stream of a clip that already exists - even if it is not playing - will cause any transition to this clip to now be silent.
- Adding a new clip by modifying `clip_count` and using `set_clip_stream()` *works* but it ends up causing this error when switching to the new clip:
```
E 0:00:02:0052 _queue: Condition "states[p_to_clip_index].playback.is_null()" is true.
<C++ Source> modules/interactive_music/audio_stream_interactive.cpp:605 @ _queue()
```
Presumably this is because a clip added this way is not initialized until `start()` in audio_stream_interactive.cpp is called.
This may be a documentation issue or intended behavior; however, there are certain checks in place for when `clip_count` is changed.
For example: `version` being incremented and used to stop all streams if mismatched, and the stream being stopped if `clip_count` was decreased.
This can be worked around by restarting the stream and getting a new playback instance, but this may not always be desirable.
### Steps to reproduce
1. Open the MRP
2. Start the game
3. Press '1' to add a new clip (Error)
Press '2' to add a new clip and restart (No error)
Press '3' to replace clip 2's stream and switch to it (Silent)
Press '4' to switch to clip 2 only (Not silent)
### Minimal reproduction project (MRP)
[MRP](https://github.com/user-attachments/files/17798478/Archive.zip)
| bug,topic:audio | low | Critical |
2,668,010,253 | next.js | URL.parse() not polyfilled | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/determined-knuth-87352t?file=%2Fapp%2Fpage.tsx%3A3%2C16
### To Reproduce
1. Run or build the application
2. Get error
### Current vs. Expected behavior
Expected result is no error
Actual result is a runtime/buildtime error
> npm run dev
```
GET / 500 in 4185ms
⨯ app/page.tsx (3:28) @ parse
⨯ TypeError: URL.parse is not a function
at Home (./app/page.tsx:9:30)
at AsyncLocalStorage.run (node:async_hooks:346:14)
at stringify (<anonymous>)
at AsyncLocalStorage.run (node:async_hooks:346:14)
at AsyncResource.runInAsyncScope (node:async_hooks:206:9)
digest: "779954570"
1 | /** Add your relevant code here for the issue to reproduce */
2 | export default function Home() {
> 3 | const { hostname } = URL.parse("https://vercel.com");
| ^
4 |
5 | return <h1>{ hostname }</h1>;
6 | }
```
> npm run build
```
Failed to compile.
./app/page.tsx:3:28
Type error: Property 'parse' does not exist on type '{ new (url: string | URL, base?: string | URL): URL; prototype: URL; canParse(url: string | URL, base?: string): boolean; createObjectURL(obj: Blob | MediaSource): string; revokeObjectURL(url: string): void; }'.
1 | /** Add your relevant code here for the issue to reproduce */
2 | export default function Home() {
> 3 | const { hostname } = URL.parse("https://vercel.com");
| ^
4 |
5 | return <h1>{ hostname }</h1>;
6 | }
```
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.17 // Latest available version is detected (15.0.4-canary.17).
eslint-config-next: N/A
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed)
### Additional context
`URL.parse()` is a static method on the `URL` class supported on all major platforms, but the Next.js polyfill clobbers it on both server and client - only the `canParse()` method is included, even though core-js has a polyfill for `parse()` as well.
https://github.com/vercel/next.js/blob/c50109b4468bb274c29dfc0eb5aaa906736dc4d9/packages/next-polyfill-nomodule/src/index.js#L52
Additionally, **Safari does not support this method until v18** (released two months ago). Found this out when I started to get runtime errors on my phone on a Vercel production deployment.
Node v20 LTS - https://nodejs.org/docs/latest-v20.x/api/url.html#urlparseinput-base
MDN - https://developer.mozilla.org/en-US/docs/Web/API/URL/parse_static
Can I Use? - https://caniuse.com/mdn-api_url_parse_static
It's been added in Node v22 and backported to LTS. Here's an example from my dev environment that shows it working:
```
nes@vm:~$ node --version
v20.18.0
nes@vm:~$ node
Welcome to Node.js v20.18.0.
Type ".help" for more information.
> URL.parse("https://vercel.com")
URL {
href: 'https://vercel.com/',
origin: 'https://vercel.com',
protocol: 'https:',
username: '',
password: '',
host: 'vercel.com',
hostname: 'vercel.com',
port: '',
pathname: '/',
search: '',
searchParams: URLSearchParams {},
hash: ''
}
``` | bug,Runtime,linear: next | low | Critical |
2,668,032,496 | langchain | DOC: Add support for custom user agent in the tutorial for openai | ### URL
https://python.langchain.com/docs/integrations/llms/openai/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is no explicit section in this documentation about how to override the default OpenAI client to set custom user-agent information. This is problematic because, from an analytics perspective, LangChain shows up nowhere in our logs. Purely from analytics data, we cannot see whether our customers are using LangChain.
### Idea or request for content:
Either add a default langchain custom user agent that says something like:
`User-Agent: langchain/v1.2.3 openai/python v1.2.3`
that would be helpful to show that LangChain is used, and this could be verified.
Alternatively, document a way to add a custom User-Agent. That would be weaker, because it assumes every customer will add this header, but it is better than nothing :) | 🤖:docs | low | Minor |
2,668,109,328 | material-ui | Allow overriding props in styled | ### Summary
When creating reusable components, it is often very helpful to pass default props to the component.
Adding `props` to `interface MuiStyledOptions` will allow the users to set these props.
### Examples
One use case:
Creating a reusable outlined button with extra css.
Instead of doing:
```
export const FullWidthOutlinedButton = styled(Button)(({ theme }) => ({
width: '100%',
marginTop: theme.spacing(5),
}))
```
and always remembering to add the variant when using the component, we can do this:
```
export const FullWidthOutlinedButton = styled(Button, {props: {variant: "outlined"}})(({ theme }) => ({
width: '100%',
marginTop: theme.spacing(5),
}))
```
I know the css here is generic but the possibilities are endless
### Motivation
Improving DX when creating reusable components, and minimizing human error
**Search keywords**: props defaults custom | new feature,package: system | low | Critical |
2,668,139,113 | PowerToys | [New+] Only up to 16 context items are shown. Remaining are missing. | ### Microsoft PowerToys version
0.86.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
New+
### Steps to reproduce
Extract template files [Template.zip] in template folder and open New+ context menu.
[Templates.zip](https://github.com/user-attachments/files/17798980/Templates.zip)
[PowerToysReport_2024-11-18-11-39-03.zip](https://github.com/user-attachments/files/17798950/PowerToysReport_2024-11-18-11-39-03.zip)
### ✔️ Expected Behavior
Showing all items that reside in the template folder.

### ❌ Actual Behavior
There are up to 16 items shown. If 20 items are in the template folder, 4 are not shown in the context menu depending on how the items in the template folder are initially sorted.

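The observed cap can be modeled in a few lines (the `MENU_CAP` constant below reflects the behavior seen here, not a documented limit, and the helper is purely illustrative):

```python
# Hypothetical model of the observed behavior: only the first 16 entries,
# in whatever order the shell enumerates them, make it into the New+ menu.
MENU_CAP = 16  # observed limit, not a documented constant

def shown_items(folder_entries):
    return folder_entries[:MENU_CAP]

templates = [f"Template {i:02d}" for i in range(1, 21)]  # 20 templates
menu = shown_items(templates)
assert len(menu) == 16
assert "Template 17" not in menu  # 4 items are silently dropped
```

Which 4 items are dropped depends only on the enumeration order, matching the report that the missing entries vary with how the template folder is initially sorted.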
### Other Software
[Folcolor 1.2.0](https://github.com/kweatherman/Folcolor) (not directly interacting with the software) | Issue-Bug,Needs-Triage | low | Minor |
2,668,164,808 | kubernetes | Conflicting topologySpreadConstraints, podManagementPolicy: OrderedReady and PVCs can lead to unschedulable pods in StatefulSets | ### What happened?
I honestly don't know if this is a real bug or not, since it seems like the configuration conflicts with itself. I thought I'd file a bug and see what sig-scheduling says, since the behaviour isn't obvious to the user until the situation occurs.
With a StatefulSet it's possible to end up in a situation (after an outage) where some of its Pods are unable to schedule, if the StatefulSet has the following rules:
1. 5 replicas
2. A volumeClaimTemplates
3. Across 3 zones
4. `podManagementPolicy: OrderedReady`
5. A topologySpreadConstraints matching `topologyKey: topology.kubernetes.io/zone` with `maxSkew: 1`
Example StatefulSet below.
When deploying a StatefulSet with this configuration, if 2 Pods are lost (one in the single-pod zone and another Pod with a lower number), then the StatefulSet gets stuck and can't replace those Pods due to a conflict of scheduling rules.
Let me try explain with some diagrams.
---
After applying the example StatefulSet, the Pods land in a configuration as such:

(Note, each Pod also has a related PVC with it)
This makes sense with all the rules configured and all Pods are healthy.
When `pod-1` and `pod-0` are deleted (due to node failures) we end up with this situation:

The StatefulSet controller will then create `pod-0` (since `OrderedReady` is configured), but the scheduler will try to place it in Zone 2 (since we have a topologySpreadConstraints zone policy with maxSkew 1).
However, the PVC for `pod-0` is in Zone 1, meaning that the Pod can never be scheduled. This stops the StatefulSet controller from continuing to `pod-1`.
It seems that the only way to fix this is to temporarily set `maxSkew` to 2 for the topologySpreadConstraints zone policy, or to delete the PVC in Zone 1
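The deadlock can be sketched with a few lines of Python modeling just the zone-skew check (the zone counts follow the diagrams above; this is an illustration of the constraint, not scheduler code):

```python
# Minimal sketch (hypothetical helper, not kube-scheduler code): checks whether
# placing one more pod in a zone keeps the spread within maxSkew.
def skew_ok(zone_counts, target_zone, max_skew=1):
    counts = dict(zone_counts)
    counts[target_zone] += 1
    return max(counts.values()) - min(counts.values()) <= max_skew

# After losing pod-1 (single-pod zone) and pod-0: Zone1=1, Zone2=0, Zone3=2.
counts = {"zone-1": 1, "zone-2": 0, "zone-3": 2}
pvc_zone = "zone-1"                    # pod-0's PVC is pinned to Zone 1
assert skew_ok(counts, "zone-2")       # the spread constraint wants Zone 2...
assert not skew_ok(counts, pvc_zone)   # ...but the PVC's zone would exceed maxSkew
```

So the only zone the PVC allows is exactly the zone the spread constraint forbids, and `OrderedReady` prevents the controller from moving on to `pod-1`.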
### What did you expect to happen?
It's difficult to say what I expect to happen, because the only way to schedule a Pod is to override a rule that I set as the operator.
Either schedule `pod-0` in the correct zone (bypassing the topologySpreadConstraints zone policy),
or schedule `pod-1` first (bypassing `OrderedReady`).
Maybe this situation is an exception for `OrderedReady`, since the StatefulSet isn't a new one and is only recovering from a failure.
### How can we reproduce it (as minimally and precisely as possible)?
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: testing
spec:
persistentVolumeClaimRetentionPolicy:
whenDeleted: Retain
whenScaled: Retain
podManagementPolicy: OrderedReady
replicas: 5
revisionHistoryLimit: 4
selector:
matchLabels:
app.kubernetes.io/name: testing
serviceName: ""
template:
metadata:
labels:
app.kubernetes.io/name: testing
spec:
automountServiceAccountToken: true
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /mount
name: testing-pvc
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
topologySpreadConstraints:
- labelSelector:
matchLabels:
app.kubernetes.io/name: testing
maxSkew: 1
nodeTaintsPolicy: Honor
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
- labelSelector:
matchLabels:
app.kubernetes.io/name: testing
maxSkew: 1
nodeTaintsPolicy: Honor
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
name: testing-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
volumeMode: Filesystem
status:
phase: Pending
```
### Anything else we need to know?
Some context of the situation we're in:
1. We're using Ordered Ready, since the [vault-operator](https://github.com/bank-vaults/vault-operator/blob/v1.22.3/pkg/controller/vault/vault_controller.go#L1344-L1347) we use has that hardcoded
2. We want 5 replicas of Vault, due to the [recommendation from Hashicorp](https://developer.hashicorp.com/vault/docs/internals/integrated-storage#quorum-size-and-failure-tolerance).
I imagine if this isn't a Kubernetes bug, I should ask the vault-operator project to allow configurable `podManagementPolicy`, to avoid this situation from happening, however, I assume other users have similar constraints, and somehow fixing it in Kubernetes may be useful.
Some changes we considered:
1. Increasing replicas to 6. This should solve the problem, but conflicts with Hashicorp's suggestion of [running 5 replicas](https://developer.hashicorp.com/vault/docs/internals/integrated-storage#quorum-size-and-failure-tolerance).
2. Change maxSkew to 2. This actually helps un-stuck the pods, but on initial deployment it won't guarantee even spread across all AZs.
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.30.2
```
</details>
### Cloud provider
<details>
AWS via kOps
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
/sig scheduling | kind/bug,sig/scheduling,needs-triage | low | Critical |
2,668,334,606 | angular | The documentation for ng serve proxy is incomplete | ### Describe the problem that you experienced
This part of the documentation https://angular.dev/tools/cli/serve#proxying-to-a-backend-server is incomplete or maybe incorrect. It indicates that the Webpack dev server is used for development and links to the Webpack documentation. However, newer versions of Angular seem to use Vite together with the esbuild builder. Because of this, our proxy config did not work, since we used Webpack-specific syntax (namely a bare '*'). Only after discovering it is actually using Vite could we fix the problem, but we lost almost half a day on it. As such, it would be very helpful to explain that Vite is used with esbuild (which is anyway the default for new projects) and provide links to its proxy config.
### Enter the URL of the topic with the problem
https://angular.dev/tools/cli/serve#proxying-to-a-backend-server
### Describe what you were looking for in the documentation
Trying to figure out why proxying is not working.
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
Update the documentation to explain that it is using Vite, along with appropriate links to Vite's proxy documentation.
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | help wanted,good first issue,area: docs | low | Critical |
2,668,353,509 | pytorch | [export][quant] AttributeError: 'FunctionalTensor' object has no attribute '_quantized_linear_op' | ### 🐛 Describe the bug
I'm trying to export a model to use w8/a8 quantisation.
In my example, I'm using BERT. However running `torch.export` fails with `AttributeError: 'FunctionalTensor' object has no attribute '_quantized_linear_op'`
A traceback is here:
<details><summary>Traceback</summary>
<p>
```
Traceback (most recent call last):
File "/app/examples/generate_bert_int8.py", line 57, in <module>
exported = export(quantized_model, example_inputs, strict=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/__init__.py", line 174, in export
return _export(
^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 945, in wrapper
raise e
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 928, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/exported_program.py", line 89, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 1455, in _export
aten_export_artifact = export_func(
^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 1317, in _non_strict_export
aten_export_artifact = _export_to_aten_ir(
^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 583, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 1268, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1131, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1350, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 562, in create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 163, in inner
flat_f_outs = f(*flat_f_args)
^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 178, in flat_fn
tree_out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 748, in functional_call
out = mod(*args[params_len:], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/export/_trace.py", line 1255, in forward
tree_out = self._export_root(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 1668, in forward
outputs = self.bert(
^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 1142, in forward
encoder_outputs = self.encoder(
^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 695, in forward
layer_outputs = layer_module(
^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 585, in forward
self_attention_outputs = self.attention(
^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 515, in forward
self_outputs = self.self(
^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 265, in forward
mixed_query_layer = self.query(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 117, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/utils.py", line 374, in _dispatch__torch_function__
return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/utils.py", line 357, in wrapper
return func(f, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/quantization/linear_activation_quantized_tensor.py", line 102, in _
return weight_tensor._quantized_linear_op(input_tensor, weight_tensor, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/quantization/linear_activation_quantized_tensor.py", line 73, in _quantized_linear_op
return torch.nn.functional.linear(aqt, original_weight_tensor, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/utils.py", line 374, in _dispatch__torch_function__
return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/utils.py", line 357, in wrapper
return func(f, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torchao/dtypes/affine_quantized_tensor.py", line 1754, in _
return weight_tensor._quantized_linear_op(input_tensor, weight_tensor, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'FunctionalTensor' object has no attribute '_quantized_linear_op'
```
</p>
</details>
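The last frames of the traceback reduce to a tensor-subclass dispatch pattern: a handler written for the quantized weight subclass instead receives a functionalized wrapper during export, and the attribute lookup fails. A plain-Python sketch of that failure mode (the classes below stand in for torch's tensor types and are purely illustrative):

```python
# Plain-Python stand-ins (not torch code) for the failing dispatch: the
# linear handler assumes the weight still exposes the subclass method
# _quantized_linear_op, but export wraps it in a FunctionalTensor-like
# object that does not forward unknown attributes to the wrapped tensor.
class QuantizedWeight:
    def _quantized_linear_op(self, x):
        return "quantized linear"

class FunctionalWrapper:
    """Stands in for FunctionalTensor: wraps, doesn't forward attributes."""
    def __init__(self, inner):
        self.inner = inner

def linear_handler(x, weight):
    # Handler written against QuantizedWeight; breaks on the wrapper.
    return weight._quantized_linear_op(x)

assert linear_handler(0, QuantizedWeight()) == "quantized linear"

err = None
try:
    linear_handler(0, FunctionalWrapper(QuantizedWeight()))
except AttributeError as e:
    err = e
assert "_quantized_linear_op" in str(err)  # mirrors the reported error
```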
My code is here:
```python
#!/usr/bin/env python3
import torch
from transformers import (
TorchAoConfig,
AutoTokenizer,
AutoModelForSequenceClassification,
)
from torch.export import export
import torch.utils.benchmark as benchmark
# Load BERT tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_1 = "Who was Jim Henson?"
text_2 = "Jim Henson was a puppeteer"
# Tokenize inputs
inputs = tokenizer(text_1, text_2, return_tensors="pt", padding=True, truncation=True)
# Load BERT model with quantization configuration
quantization_config = TorchAoConfig("int8_dynamic_activation_int8_weight")
quantized_model = AutoModelForSequenceClassification.from_pretrained(
"bert-base-uncased", quantization_config=quantization_config
)
# Perform inference
with torch.no_grad():
outputs = quantized_model(**inputs)
# Print logits
logits = outputs.logits
print(f"Logits: {logits}")
# Benchmark the performance
def benchmark_fn(f, *args, **kwargs):
# Manual warmup
for _ in range(5):
f(*args, **kwargs)
t0 = benchmark.Timer(
stmt="f(*args, **kwargs)",
globals={"args": args, "kwargs": kwargs, "f": f},
num_threads=torch.get_num_threads(),
)
return f"{(t0.blocked_autorange().mean):.3f} seconds"
print(
"Quantized model inference time:",
benchmark_fn(quantized_model, **inputs),
)
example_inputs = (inputs["input_ids"], inputs["attention_mask"])
exported = export(quantized_model, example_inputs, strict=False)
```
### Versions
```
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.1
[pip3] torch-xla==2.4.0
[pip3] torchao==0.6.1
[pip3] triton==3.0.0
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |