id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,551,805,813 | pytorch | Aborted (core dumped) in `torch.cuda.caching_allocator_delete` | ### 🐛 Describe the bug
`torch.cuda.caching_allocator_delete` fails to release the given address and instead crashes the process.
You may need to run the code on Colab several times, with the GPU available, to trigger the crash.
minimal example:
```python
import torch

device = torch.device("cuda")
tensor = torch.randn(10, dtype=torch.float64, device=device)
mem_ptr = tensor.data_ptr()
torch.cuda.caching_allocator_delete(mem_ptr)
```
output:
```
Aborted (core dumped)
```
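The crash stems from passing `caching_allocator_delete` an address it never handed out: `tensor.data_ptr()` points into a block owned by the tensor, while the documentation says `caching_allocator_delete` is for pointers returned by `torch.cuda.caching_allocator_alloc`. A pure-Python toy analogue (not PyTorch internals) of the invariant a safer allocator could enforce instead of aborting:

```python
# Toy analogue, not PyTorch code: an allocator that rejects pointers
# it did not hand out, instead of crashing the process.
class ToyCachingAllocator:
    def __init__(self):
        self._live = set()
        self._next = 0x1000  # fake address counter

    def alloc(self, size):
        ptr = self._next
        self._next += size
        self._live.add(ptr)
        return ptr

    def delete(self, ptr):
        if ptr not in self._live:
            raise ValueError(f"pointer {ptr:#x} was not returned by alloc()")
        self._live.discard(ptr)

allocator = ToyCachingAllocator()
p = allocator.alloc(1024)
allocator.delete(p)           # fine: p came from alloc()
try:
    allocator.delete(0xDEAD)  # like passing tensor.data_ptr(): not ours
except ValueError as e:
    print(e)
```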
Furthermore, an INTERNAL ASSERT with the message "please report a bug to PyTorch" is triggered when `torch.cuda.caching_allocator_alloc` is called with a non-existent GPU device index; this output text should arguably be a normal argument-validation error instead.
minimal example:
```python
import torch

device = torch.device("cuda")
tensor = torch.zeros(10, device=device)
mem_ptr = tensor.data_ptr()
torch.cuda.caching_allocator_alloc(1, 10)  # device index 10 does not exist
```
output:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-3-e4a0b322c4ed>](https://localhost:8080/#) in <cell line: 5>()
3 tensor = torch.zeros(10, device=device)
4 mem_ptr = tensor.data_ptr()
----> 5 torch.cuda.caching_allocator_alloc(1, 10)
1 frames
[/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py](https://localhost:8080/#) in current_stream(device)
916 """
917 _lazy_init()
--> 918 streamdata = torch._C._cuda_getCurrentStream(
919 _get_device_index(device, optional=True)
920 )
RuntimeError: device_index >= 0 && device_index < num_gpus INTERNAL ASSERT FAILED at "../c10/cuda/CUDAStream.cpp":247, please report a bug to PyTorch.
```
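The INTERNAL ASSERT above fires deep inside `CUDAStream.cpp` before any user-facing argument check runs. A hypothetical sketch (not PyTorch's actual code) of the friendlier validation the report is asking for:

```python
def check_device_index(device_index, num_gpus):
    """Validate a CUDA device index before dispatching (illustrative only)."""
    if not 0 <= device_index < num_gpus:
        raise ValueError(
            f"invalid CUDA device index {device_index}; expected a value in "
            f"[0, {num_gpus}) for {num_gpus} visible GPU(s)"
        )
    return device_index

print(check_device_index(0, 1))  # valid on a single-GPU machine
```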
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim | module: cuda,triaged,module: CUDACachingAllocator | low | Critical |
2,551,832,924 | pytorch | Segmentation fault (core dumped) in `torch.profiler.profile` | ### 🐛 Describe the bug
Under specific inputs, `torch.profiler.profile` triggers a crash: a shape-mismatch `RuntimeError` is raised inside the profiled region, and the process then segfaults during profiler teardown.
minimal example:
```python
import torch
import torch.profiler

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d(kernel_size=2, stride=2),
    torch.nn.Flatten(),
    # In-features do not match the 224x224 input, so forward() raises
    torch.nn.Linear(64 * 16 * 16, 10),
)
inputs = torch.randn(5, 3, 224, 224)

# Initialize profiler with CPU and CUDA activities
prof = torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
    with_stack=True,
)

# Start profiling
prof.start()
output = model(inputs)  # raises RuntimeError, so prof.stop() is never reached
prof.stop()
prof.export_chrome_trace("profiling_trace.json")
```
output:
```
Traceback (most recent call last):
File "/home/work/mannul/pytorch/torch.profiler.profile.py", line 20, in <module>
output = model(inputs)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/nn/modules/container.py", line 219, in forward
input = module(input)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 117, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5x802816 and 16384x10)
[W927 09:56:14.945580741 profiler_python.cpp:825] Warning: `PythonTracer::stop()` was not called. (function ~PythonTracer)
Segmentation fault (core dumped)
```
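Note that the `RuntimeError` escapes before `prof.stop()` runs, which is why `PythonTracer` warns that `stop()` was never called and then crashes during destruction. `torch.profiler.profile` also supports the context-manager protocol, which guarantees the stop path even when the profiled code raises. A minimal pure-Python sketch (a toy class, not the real profiler) of why `with` gives that guarantee:

```python
class ToyProfiler:
    """Stand-in for a profiler; only tracks whether cleanup ran."""
    def __init__(self):
        self.stopped = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.stopped = True  # cleanup runs even if the body raised
        return False         # do not swallow the exception

prof = ToyProfiler()
try:
    with prof:
        raise RuntimeError("mat1 and mat2 shapes cannot be multiplied")
except RuntimeError:
    pass
print(prof.stopped)  # → True
```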
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @robieta @chaekit @aaronenyeshi @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | high priority,triage review,module: crash,oncall: profiler | low | Critical |
2,551,844,697 | pytorch | Aborted (core dumped) in `torch.smm`/`torch.hspmm`/`torch.hsmm`/`torch.sspaddmm` | ### 🐛 Describe the bug
`torch.smm`/`torch.hspmm`/`torch.hsmm`/`torch.sspaddmm` trigger a crash when given sparse inputs with out-of-bound indices.
minimal example:
https://colab.research.google.com/drive/1Rrhc-NvKzRbhNYbJvlXWeP6S8syLBTdD?usp=sharing
output:
```
munmap_chunk(): invalid pointer
Aborted (core dumped)
```
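The Colab notebook is not reproduced inline here, but the failure mode is sparse inputs whose indices fall outside the declared size. A hypothetical sketch (not PyTorch's actual code) of the bounds check whose absence lets the ops corrupt memory and hit `munmap_chunk(): invalid pointer`:

```python
def check_sparse_indices(indices, size):
    """indices: per-dimension lists of coordinates; size: the dense shape."""
    for dim, coords in enumerate(indices):
        for idx in coords:
            if not 0 <= idx < size[dim]:
                raise IndexError(
                    f"index {idx} out of bounds for dimension {dim} "
                    f"with size {size[dim]}"
                )

check_sparse_indices([[0, 1], [2, 0]], (2, 3))  # in bounds: passes silently
```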
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | high priority,module: sparse,module: crash,triaged | low | Critical |
2,551,878,828 | pytorch | false INTERNAL ASSERT FAILED in `torch.empty`/`torch.ones` | ### 🐛 Describe the bug
`torch.empty`/`torch.ones` raise a false INTERNAL ASSERT FAILED when the dtype is a quantized type (qint), accompanied by the message: "please report a bug to PyTorch."
minimal example:
```python
import torch

size = 10000
storage = torch.empty(size, dtype=torch.qint8)
storage = torch.ones(size, dtype=torch.qint32)  # will also trigger
print(storage)  # printing calls qscheme(), which hits the assert
```
output:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-3-9aa014a798b7>](https://localhost:8080/#) in <cell line: 4>()
2 size = 10000
3 storage = torch.empty(size, dtype=torch.qint8)
----> 4 print(storage)
2 frames
[/usr/local/lib/python3.10/dist-packages/torch/_tensor_str.py](https://localhost:8080/#) in _str_intern(inp, tensor_contents)
550 if not has_default_dtype:
551 suffixes.append("dtype=" + str(self.dtype))
--> 552 suffixes.append("quantization_scheme=" + str(self.qscheme()))
553 if (
554 self.qscheme() == torch.per_tensor_affine
RuntimeError: false INTERNAL ASSERT FAILED at "../aten/src/ATen/quantized/Quantizer.cpp":445, please report a bug to PyTorch. cannot call qscheme on UnknownQuantizer
```
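The assert only fires once `print` calls `qscheme()`: the factories succeed but attach an `UnknownQuantizer`. A hypothetical sketch of the earlier, clearer check the report is implicitly asking for (illustrative only, not PyTorch's validation; the documented route for creating quantized tensors is `torch.quantize_per_tensor`):

```python
def check_factory_dtype(dtype_name):
    """Reject quantized dtypes in plain factories with a clear message
    (illustrative only, not PyTorch's actual validation)."""
    if dtype_name.startswith(("qint", "quint")):
        raise TypeError(
            f"torch.empty/torch.ones cannot create '{dtype_name}' tensors "
            f"directly; build a float tensor and quantize it instead"
        )
    return dtype_name

print(check_factory_dtype("float32"))  # accepted unchanged
```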
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization | low | Critical |
2,551,883,890 | pytorch | false INTERNAL ASSERT FAILED in `torch.jit.set_fusion_strategy` | ### 🐛 Describe the bug
`torch.jit.set_fusion_strategy` raises a false INTERNAL ASSERT FAILED when the strategy name is invalid, accompanied by the message: "please report a bug to PyTorch."
minimal example:
```python
import torch

model_or_tensor = torch.rand(1, 2)
# Create a list of tuples with an invalid strategy name ('none')
invalid_strategy = [('none', 0)]
torch.jit.set_fusion_strategy(invalid_strategy)
model = torch.jit.load('traced_bert.pt', map_location=torch.device('cpu'))
```
output:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-5-23ca6d806bae>](https://localhost:8080/#) in <cell line: 6>()
4 # Create a list of tuples with a valid but invalid strategy ('none')
5 invalid_strategy = [('none', 0)]
----> 6 torch.jit.set_fusion_strategy(invalid_strategy)
7 model = torch.jit.load('traced_bert.pt', map_location=torch.device('cpu'))
[/usr/local/lib/python3.10/dist-packages/torch/jit/_fuser.py](https://localhost:8080/#) in set_fusion_strategy(strategy)
159 apis for specific fusers.
160 """
--> 161 return torch._C._jit_set_fusion_strategy(strategy)
RuntimeError: false INTERNAL ASSERT FAILED at "../torch/csrc/jit/python/init.cpp":874, please report a bug to PyTorch. FusionBehavior only supported 'STATIC' or 'DYNAMIC', got: none
```
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,551,892,792 | godot | Can't preview FPS on the animation panel when editor language is set to ko (Korean) | ### Tested versions
- Reproducible in 4.3.stable, latest master [506d6e4]
### System information
* Godot v4.4.dev (506d6e427) - Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1050 (NVIDIA; 31.0.15.3623) - Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz (6 threads)
* Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1050 (NVIDIA; 31.0.15.3623) - Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz (6 Threads)
### Issue description
If the editor language is set to "ko" (Korean), the FPS value (labeled 프레임) on the animation panel is not visible while the field is inactive.
It only becomes visible when you click the field.
#### latest master [506d6e4]
https://github.com/user-attachments/assets/f48df47b-6179-4125-b103-bbb860270a50
#### v4.3.stable
https://github.com/user-attachments/assets/852b005c-4355-4b2f-992f-192aab2aaa44
### Steps to reproduce
Open the animation panel.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,551,907,671 | pytorch | Thread safety issue with torch.compile() | See https://dev-discuss.pytorch.org/t/impact-of-multithreading-and-local-caching-on-torch-compile/2498 for a crash when two threads call torch.compile().
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @rec @bdhirsh | high priority,triaged,months,oncall: pt2,module: inductor,module: dynamo,module: pt2-dispatcher | low | Critical |
2,551,951,630 | kubernetes | CronJob executed twice: once before and once at the scheduled time | ### What happened?
While using Kubernetes CronJob for scheduling tasks, we encountered an issue where a specific CronJob was executed earlier than its scheduled time. The CronJob ran approximately 6 hours before the intended schedule and then executed again at the correct scheduled time, resulting in the job running twice. This issue was observed in a Kubernetes 1.25.5 environment.
### What did you expect to happen?
We expected the CronJob to execute exactly once at the scheduled time. All CronJobs should run according to their defined schedules without any duplication.
### How can we reproduce it (as minimally and precisely as possible)?
The issue occurred unexpectedly in a CronJob that was previously functioning correctly, and we haven't identified a way to reproduce it reliably. The problem has only happened once so far and has not recurred, suggesting an intermittent issue. The environment details are as follows:
- The affected CronJob is scheduled to run daily.
- There are 26 CronJobs in the cluster, 12 of them in the affected namespace.
- Some of the CronJobs in the same namespace are scheduled to run every minute or hourly.
### Anything else we need to know?
We attempted to identify the issue by checking the kube-controller-manager logs but did not find any anomalies.
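Not part of the original report, but a hedged mitigation sketch while the root cause is unknown: the standard `batch/v1` CronJob fields `concurrencyPolicy` and `startingDeadlineSeconds` limit the damage from spurious or duplicate triggers, and the job itself should be idempotent. The name, image, and schedule below are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-daily-job        # placeholder name
spec:
  schedule: "0 3 * * *"          # placeholder: daily at 03:00
  concurrencyPolicy: Forbid      # do not start a run while one is active
  startingDeadlineSeconds: 300   # skip runs that are more than 5 min late
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: example/task:latest  # placeholder image
```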
### Kubernetes version
<details>
```console
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.25.5
WARNING: version difference between client (1.31) and server (1.25) exceeds the supported minor version skew of +/-1
```
</details>
### Cloud provider
<details>
on-prem
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
</details>
### Install tools
<details>
cluster-api
</details>
### Container runtime (CRI) and version (if applicable)
<details>
containerd 1.6.8
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
calico v3.24.1
</details>
| kind/bug,sig/apps,lifecycle/rotten,needs-triage | low | Critical |
2,551,953,725 | godot | Potential use after free on GDScript callable methods; methods do not capture self object when used as callables. | ### Tested versions
- Reproducible in 4.4.dev2, 4.3.stable, 4.2.2.stable, 4.1.4.stable, and 4.0.4.stable
### System information
Godot v4.3.stable (77dcf97d8) - NixOS #1-NixOS SMP PREEMPT_DYNAMIC Wed Sep 18 17:24:10 UTC 2024 - X11 - GLES3 (Compatibility)
### Issue description
Consider the following code:
```gdscript
extends Node
class Test:
func print_hi():
print("hi")
func other() -> Callable:
return Test.new().print_hi
func _ready():
other().call()
```
The expected result would be that it prints "hi" to the console; however, this instead pushes the following error:
> Attempt to call function 'null::print_hi (Callable)' on a null instance.
Calling `print_hi()` from within `other()` works fine, but the self object seems to be cleaned up after the callable is returned, resulting in calling it on a null instance. I believe the callable should capture its self object when bound like this.
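A CPython analogue of the suspected mechanism (an assumption about Godot's internals, not source analysis): the GDScript `Callable` appears to reference its object without keeping the `RefCounted` instance alive, much like binding a method through a weak reference:

```python
import weakref

class Test:
    def print_hi(self):
        return "hi"

def other():
    # Analogous to `return Test.new().print_hi`: if the callable holds only
    # a weak reference, the instance dies when this function returns.
    obj = Test()
    return weakref.WeakMethod(obj.print_hi)

wm = other()
print(wm())  # → None in CPython: the Test instance was already collected
```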
### Steps to reproduce
1. Create a new function.
2. Create new object inside of that function and return a bound callable method from it.
3. Call the function from some other code, and call `.call()` or `.callv()` on the returned callable.
4. Observe the editor error, which says that the initial object is null.
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/17158845/test.zip)
| bug,topic:gdscript,needs testing | low | Critical |
2,552,026,287 | opencv | Test_Model.Segmentation fails with the new DNN engine | Reference: https://github.com/opencv/opencv/pull/26056
### System Information
Platform: any
### Detailed description
```
[ RUN ] Test_Model.Segmentation/0, where GetParam() = OCV/CPU
/home/ci/opencv/modules/dnn/test/test_common.impl.hpp:83: Failure
Expected: (normL1) <= (l1), actual: 0.0603638 vs 0
|ref| = 7
/home/ci/opencv/modules/dnn/test/test_common.impl.hpp:86: Failure
Expected: (normInf) <= (lInf), actual: 7 vs 0
|ref| = 7
[ FAILED ] Test_Model.Segmentation/0, where GetParam() = OCV/CPU (447 ms)
```
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,552,028,995 | opencv | Test_Model.TextDetectionByDB fails with the new engine | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
```
[ RUN ] Test_Model.TextDetectionByDB/0, where GetParam() = OCV/CPU
unknown file: Failure
C++ exception with description "cannot create std::vector larger than max_size()" thrown in the test body.
[ FAILED ] Test_Model.TextDetectionByDB/0, where GetParam() = OCV/CPU (113 ms)
```
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,552,037,188 | transformers | Add support for TimesFM | ### Model description
**TimesFM** (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- Research Paper: https://arxiv.org/abs/2310.10688
- Authors: [Abhimanyu Das](https://arxiv.org/search/cs?searchtype=author&query=Das,+A), [Weihao Kong](https://arxiv.org/search/cs?searchtype=author&query=Kong,+W), [Rajat Sen](https://arxiv.org/search/cs?searchtype=author&query=Sen,+R), [Yichen Zhou](https://arxiv.org/search/cs?searchtype=author&query=Zhou,+Y)
- Implementation: [google-research/timesfm](https://github.com/google-research/timesfm)
- The linked repository contains implementations in `jax` as well as `pytorch`. To implement this in `huggingface`, the `pytorch`-specific code can be found at [src/timesfm/pytorch_patched_decoder.py](https://github.com/google-research/timesfm/blob/master/src/timesfm/pytorch_patched_decoder.py)
- Models Weights: [google/timesfm-1.0-200m-pytorch](https://huggingface.co/google/timesfm-1.0-200m-pytorch)
- Although weights are provided in the repository, some config files are still missing and need to be completed to ensure smooth loading of the weights. | New model,Time Series | low | Minor |
2,552,047,500 | flutter | [go_router_builder] RouteExtension._fromState should be public to prevent non-const widgets from previous routes from rebuilding | ### Use case
https://github.com/flutter/flutter/issues/144511
We have the issue `Non-const widgets from previous routes are rebuilding even though new route was pushed.` and
> Since go_router is using pages api, there isn't a way for navigator to know whether page content has changed or not without rebuilding the page.
### Proposal
We can work around this by making `pageBuilder` always return a `const` widget and using `RouteExtension._fromState` to get route parameters instead of passing them through in `pageBuilder`.
But the current `RouteExtension._fromState` is private, so I always need to write a wrapper, which is unnecessary. So shouldn't `RouteExtension._fromState` be public?
**Old code, which leads to the issue: the page is always rebuilt when routing to another page because the `buildPage` result is not const.**
```
@TypedGoRoute<HomeTabRoute>(path: '/home')
class HomeTabRoute extends GoRouteData {
  const HomeTabRoute({
    this.fromLogin = false,
  });

  final bool fromLogin;

  @override
  Page buildPage(BuildContext context, GoRouterState state) {
    return NoTransitionPage(
      child: HomeTabPage(fromLogin: fromLogin),
    );
  }
}
```
**New code working as expected, but it requires writing unnecessary code**
```
@TypedGoRoute<HomeTabRoute>(path: '/home')
class HomeTabRoute extends GoRouteData {
  const HomeTabRoute({
    this.fromLogin = false,
  });

  final bool fromLogin;

  // this is unnecessary
  static HomeTabRoute fromState(GoRouterState state) =>
      $HomeTabRouteExtension._fromState(state);

  @override
  Page buildPage(BuildContext context, GoRouterState state) {
    return const NoTransitionPage(
      child: HomeTabPage(),
    );
  }
}

class HomeTabPage extends StatefulHookConsumerWidget {
  const HomeTabPage({super.key});

  @override
  ConsumerState<HomeTabPage> createState() => _HomeTabPageState();
}

class _HomeTabPageState extends ConsumerState<HomeTabPage> {
  // get parameters instead of passing them through the constructor
  late final _route = HomeTabRoute.fromState(GoRouterState.of(context));

  @override
  Widget build(BuildContext context) {
    return Text(_route.fromLogin.toString());
  }
}
``` | c: new feature,package,c: proposal,P3,p: go_router_builder,team-go_router,triaged-go_router | low | Minor |
2,552,077,208 | opencv | Misleading "'Release' build type is used by default" message | ### System Information
OpenCV version: 4.11-pre, 5.0-pre
Operating System: macOS or Windows
Compiler & compiler version: whatever VS or Xcode supplies
Python version: N/A
### Detailed description
The first thing developers see when trying to configure OpenCV with default parameters using CMake is the following message:
```
'Release' build type is used by default. Use CMAKE_BUILD_TYPE to specify build type (Release or Debug)
```
This is true on Linux, and on macOS when the 'Unix Makefiles' generator is used, but it is not true on Windows or macOS when the Visual Studio or Xcode generators, respectively, are used. In the projects CMake generates for those IDEs, 'Debug' binaries are built by default. I think the generator type should be checked for Xcode or VS before printing this message.
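One possible fix (a sketch, assuming the message is printed from OpenCV's own CMake scripts) is to guard the message with `CMAKE_CONFIGURATION_TYPES`, which CMake defines for multi-config generators such as Visual Studio and Xcode:
```cmake
# Multi-config generators (Visual Studio, Xcode) set CMAKE_CONFIGURATION_TYPES
# and ignore CMAKE_BUILD_TYPE, so the message only applies to single-config ones.
if(NOT CMAKE_CONFIGURATION_TYPES AND NOT CMAKE_BUILD_TYPE)
  message(STATUS "'Release' build type is used by default. "
                 "Use CMAKE_BUILD_TYPE to specify build type (Release or Debug)")
endif()
```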
### Steps to reproduce
run CMake from command line or using GUI, specify "Xcode" or "Visual Studio ..." generator
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install | low | Critical |
2,552,077,782 | kubernetes | Allowing 'watch' clients to request watch bookmarks (or optionally increasing frequency of bookmarks) | ### What would you like to be added?
It should be possible for a client of the apiserver that is watching a resource type to 'request' a watch bookmark be sent immediately, similar to how etcd allows requesting progress of a watch (https://github.com/etcd-io/etcd/issues/9855).
Alternatively, having some kind of configurable 'higher frequency' (than the current default 1 minute) for bookmarks may be sufficient. This configuration wouldn't be needed for 99% of watch requests, so I'd propose exposing something like `?watchBookmarkFrequency=100ms&allowWatchBookmarks=true`. We'd need to expose some kind of configurable 'floor' for this value, and also look at whether we need to authorise its usage somehow too (once an analysis on scalability is completed and we've got an idea of the relative extra cost of higher-frequency bookmarks).
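A sketch of what such a watch request could look like with the proposed parameter (the `watchBookmarkFrequency` name is the proposal's, not an existing API field):
```
GET /api/v1/namespaces/default/pods?watch=true&allowWatchBookmarks=true&watchBookmarkFrequency=100ms
```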
### Why is this needed?
When building Kubernetes apiserver proxies, to support consistent reads being served from a proxy layer that maintains its own in-memory cache (i.e. another instantiation of cacher.Cacher), we need some way to either request on demand a watch bookmark, or otherwise 'bound' the amount of time the apiserver must wait for the next bookmark to some semi-predictable frequency.
This would allow a caching proxy to serve consistent reads from its own cache, with similar assurances to what the WatchList feature provides today (e.g. "we can serve a consistent read within 100ms+(overhead)").
More generally, this also brings a closer alignment between the behaviour of etcd (sending frequent(ish) bookmarks) and watch clients, which is advantageous for proxies as we begin to use/lean on this feature (and others) more within the cacher. | sig/api-machinery,kind/feature,triage/accepted | low | Major |
2,552,100,459 | excalidraw | Whitelist https://web.dev to be embedded | Im in process of presenting CLS issues, and i wanted to embed this video url https://web.dev/static/articles/cls/video/web-dev-assets/layout-instability-api/layout-instability2.webm
Could you whitelist it? | Embeddable | low | Minor |
2,552,112,616 | excalidraw | Add or remove selected objects to/from a specific(selected) group | - There's no way to remove a selected object from a group without ungrouping all the members(destroying the group) and grouping again with the new object. If the group is a child of another group, the freed objects will get in the parent and there's no way to create a child group out of some objects of an entire group.
- We can't add an object to an entire group unless grouping the object and the group together that creates a new bigger group involving both, that is not the answer.
- The best state is to have a hierarchical structure of the groups(or all the canvas like figma) to logically move objects between groups.
- Suggested solution:
1. defining a Group type with having its individual paragraph in the JSON.
2. assigning any object to only one group instead of an array of groups
3. defining a parent ID for groups
This structure improves the process of working with groups and gives us simpler and cleaner code.
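A minimal sketch of the proposed JSON shape (the field names here are illustrative, not Excalidraw's actual schema):
```json
{
  "groups": [
    { "id": "g-parent", "parentId": null },
    { "id": "g-child", "parentId": "g-parent" }
  ],
  "elements": [
    { "id": "rect-1", "type": "rectangle", "groupId": "g-child" },
    { "id": "text-1", "type": "text", "groupId": "g-parent" }
  ]
}
```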
| enhancement | low | Minor |
2,552,120,197 | godot | When initializing a packed scene, if the packed scene root has a script that chooses a parent on _init(), errors occur and can cause a segfault. | ### Tested versions
4.3.1.rc
### System information
Ubuntu 22.04.4 LTS 64-bit
### Issue description
I have a class_name Foo that chooses its parent via a method of an Autoload/Singleton. Foo.new() works fine. However, if I load a packed scene whose root is Foo (load("foo.tscn")), trying to remove the child from the parent creates a "Children name does not match parent name in hashtable" error.
Closing the window via the [x] button / quit via the system context menu causes a segfault.
Freeing the object will silently crash.
```
ERROR: Children name does not match parent name in hashtable, this is a bug.
at: remove_child (scene/main/node.cpp:1625)
Orphans:<Node3D#31423727074>
ERROR: Parameter "get_viewport()" is null.
at: _notification (scene/main/node.cpp:144)
ERROR: Children name does not match parent name in hashtable, this is a bug.
at: remove_child (scene/main/node.cpp:1625)
ERROR: Condition "data.parent" is true.
at: ~Node (scene/main/node.cpp:3847)
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.1.rc.custom_build (ff9bc0422349219b337b015643544a0454d4a7ee)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7c8191a42520] (??:0)
[2] Object::notification(int, bool) (/home/leonard/Git/godot/core/object/object.cpp:884)
[3] Object::_predelete() (/home/leonard/Git/godot/core/object/object.cpp:199)
[4] predelete_handler(Object*) (/home/leonard/Git/godot/core/object/object.cpp:2125)
[5] void memdelete<Node>(Node*) (/home/leonard/Git/godot/./core/os/memory.h:112)
[6] Node::_notification(int) (/home/leonard/Git/godot/scene/main/node.cpp:245)
[7] Node::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/main/node.h:50 (discriminator 14))
[8] Node3D::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/3d/node_3d.h:52)
[9] Object::notification(int, bool) (/home/leonard/Git/godot/core/object/object.cpp:890)
[10] Object::_predelete() (/home/leonard/Git/godot/core/object/object.cpp:199)
[11] predelete_handler(Object*) (/home/leonard/Git/godot/core/object/object.cpp:2125)
[12] void memdelete<Node>(Node*) (/home/leonard/Git/godot/./core/os/memory.h:112)
[13] Node::_notification(int) (/home/leonard/Git/godot/scene/main/node.cpp:245)
[14] Node::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/main/node.h:50 (discriminator 14))
[15] CanvasItem::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/main/canvas_item.h:45)
[16] Control::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/gui/control.h:48)
[17] Object::notification(int, bool) (/home/leonard/Git/godot/core/object/object.cpp:890)
[18] Object::_predelete() (/home/leonard/Git/godot/core/object/object.cpp:199)
[19] predelete_handler(Object*) (/home/leonard/Git/godot/core/object/object.cpp:2125)
[20] void memdelete<Node>(Node*) (/home/leonard/Git/godot/./core/os/memory.h:112)
[21] Node::_notification(int) (/home/leonard/Git/godot/scene/main/node.cpp:245)
[22] Node::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/main/node.h:50 (discriminator 14))
[23] Viewport::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/main/viewport.h:95)
[24] Window::_notificationv(int, bool) (/home/leonard/Git/godot/./scene/main/window.h:44)
[25] Object::notification(int, bool) (/home/leonard/Git/godot/core/object/object.cpp:890)
[26] Object::_predelete() (/home/leonard/Git/godot/core/object/object.cpp:199)
[27] predelete_handler(Object*) (/home/leonard/Git/godot/core/object/object.cpp:2125)
[28] void memdelete<Window>(Window*) (/home/leonard/Git/godot/./core/os/memory.h:112)
[29] SceneTree::finalize() (/home/leonard/Git/godot/scene/main/scene_tree.cpp:645)
[30] OS_LinuxBSD::run() (/home/leonard/Git/godot/platform/linuxbsd/os_linuxbsd.cpp:967)
[31] /home/leonard/Git/godot/bin/godot.linuxbsd.editor.dev.x86_64(main+0x190) [0x5f09e01af539] (/home/leonard/Git/godot/platform/linuxbsd/godot_linuxbsd.cpp:85)
[32] /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7c8191a29d90] (??:0)
[33] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7c8191a29e40] (??:0)
[34] /home/leonard/Git/godot/bin/godot.linuxbsd.editor.dev.x86_64(_start+0x25) [0x5f09e01af2e5] (??:?)
-- END OF BACKTRACE --
================================================================
```
### Steps to reproduce
I have attached an MRP.
The class_name Foo accesses a singleton to be able to be added to the tree.
The main script will load a packedscene of Foo on ready, which is when the issues should start.
### Minimal reproduction project (MRP)
[parent_crash.zip](https://github.com/user-attachments/files/17159971/parent_crash.zip)
| bug,topic:core,needs testing,crash | low | Critical |
2,552,156,735 | react | [DevTools Bug] Cannot add node "20468" because a node with that id is already in the Store. | ### Website or app
https://demo.ragflow.io/
### Repro steps
Login into the site and checkout components at any page. The error will pop up.
### How often does this bug happen?
Every time
### DevTools package (automated)
react-devtools-extensions
### DevTools version (automated)
5.3.1-ccb20cb88b
### Error message (automated)
Cannot add node "20468" because a node with that id is already in the Store.
### Error call stack (automated)
```text
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1172435
at v.emit (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1141877)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1143565
at bridgeListener (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1551564)
```
### Error component stack (automated)
_No response_
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=Cannot add node because a node with that id is already in the Store. in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | low | Critical |
2,552,160,810 | vscode | SCM Graph - allow multi-select for e.g. cherry-pick | Would really like to use the graph view to perform bulk operations, such as cherry-picking!

| feature-request,scm | low | Minor |
2,552,161,217 | godot | [3.x] `_input` reports `is_action_just_pressed()` multiple times per frame | ### Tested versions
Reproducible in 3.6 stable
Not reproducible in 3.5.3
### System information
All
### Issue description
If `_input()` is called multiple times per frame, then `is_action_just_pressed("some_action")` will return true every time until the next frame.
I'm not yet sure whether this affects other actions aside from mouse wheel.
UPDATE:
* It does not occur with regular mouse buttons for me, only mouse wheel
* Doesn't occur for key presses
It seems to occur with the mouse wheel because it creates a `press` followed by a `release` in rapid succession, whereas a regular mouse click is unlikely to press and release on the same frame.
I saw reported on reddit:
https://www.reddit.com/r/godot/comments/1fgi328/36_registers_mouse_wheel_event_more_than_once_35/
But wanted to create an issue for reference.
The change in behaviour is introduced by my PR #77040 which fixes #73339 .
This is a similar confusion as #80158 in master (which likely would also occur in 3.x).
### Steps to reproduce
```
func _input(_event):
    print("input frame: ", Engine.get_idle_frames())
    if Input.is_action_just_pressed("wheel_up"):
        print("wheel_up")
    if Input.is_action_just_pressed("wheel_down"):
        print("wheel_down")
```
Results in 3.6:
```
input frame: 6
input frame: 7
input frame: 42
wheel_down
input frame: 42
wheel_down
input frame: 57
wheel_down
input frame: 57
wheel_down
input frame: 197
input frame: 198
input frame: 199
```
Results in 3.5.3:
```
input frame: 6
input frame: 7
input frame: 42
wheel_down
input frame: 42
input frame: 57
wheel_down
input frame: 57
input frame: 197
input frame: 198
```
i.e. the `is_action_just_pressed()` would only occur once.
### Minimal reproduction project (MRP)
[Double_Input.zip](https://github.com/user-attachments/files/17160265/Double_Input.zip)
## Discussion
I am hesitating to call this a "bug", but it is a change in behaviour.
To some extent this has always been the case, because if multiple `_input()` calls come in on a single frame for any action that is just pressed, it will, in most cases, report the same action as true multiple times. That is correct, because any action pressed *IS* considered pressed throughout the frame.
What did happen previously, though, was an anomaly: on a mouse button press, the press created an input event with the action just pressed, and the release created an input event with the action just pressed set to false (on the same frame).
This resulted in "expected" behaviour for handling input via `_input()`, however created the bug #73339 . Fixing the bug can result in less "expected" behaviour via `_input()`, and can make input handling more complex in this case.
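Until a decision is made, user code can deduplicate per frame manually (a sketch; the guard variable `_wheel_frame` is an assumption for illustration):
```gdscript
var _wheel_frame := -1

func _input(_event):
    if Input.is_action_just_pressed("wheel_down"):
        if Engine.get_idle_frames() != _wheel_frame:
            _wheel_frame = Engine.get_idle_frames()
            print("wheel_down")  # fires once per frame, matching 3.5.3 behaviour
```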
To make life easier we could look at an option to simply auto-cancel the action once it has been read with `is_action_just_pressed()`, which would be friendly to both approaches to reading input. | discussion,documentation,topic:input,regression | low | Critical |
2,552,223,020 | rust | Update min supported musl to 1.2.5 | The current minimum supported musl version for the x86_64-unknown-linux-musl target is 1.2.3. The statx syscall is used to determine the file creation time. statx wasn't added to musl until version 1.2.5, so querying the file creation time leads to a `creation time is not available on this platform currently` error. For more background see the discussion.
- https://users.rust-lang.org/t/musl-and-file-creation-time/111559 | O-musl,T-libs | low | Critical |
2,552,244,699 | vscode | Test: filter file to test could be more readable | (screenshot attached)
I would put the file name first and description second, like we do in file picker:

| polish,testing,test-coverage | low | Minor |
2,552,265,413 | vscode | Editor Sticky Scroll doesn't rerender when token colouring comes in after widget has been rendered | I'm using the tree sitter colorization, which is currently slower on startup than the sticky scroll rendering. The Sticky Scroll widget does not seem to listen for tokenization colouring updates.
<img width="1301" alt="image" src="https://github.com/user-attachments/assets/6d1f089d-57e9-49b8-82c7-64f0f9c07cfa">
| bug,editor-sticky-scroll | low | Major |
2,552,410,765 | TypeScript | autoImportSpecifierExcludeRegexes needs a window refresh for changes to take effect | ### ๐ Search Terms
autoImportSpecifierExcludeRegexes
### ๐ Version & Regression Information
### โฏ Playground Link
_No response_
### ๐ป Code
.vscode/settings.json
```json
{
"<language>.preferences.autoImportSpecifierExcludeRegexes": ["<regex>"]
}
```
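For example (the regex here is hypothetical, just to show the shape of the setting):
```json
{
  "typescript.preferences.autoImportSpecifierExcludeRegexes": ["^lodash$"]
}
```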
### ๐ Actual behavior
Changes only take effect after a window reload.
### ๐ Expected behavior
Changes to take effect immediately, same way that `autoImportFileExcludePatterns` does.
### Additional information about the issue
I mentioned this earlier on another issue (https://github.com/microsoft/TypeScript/issues/35395#issuecomment-2371010892) and I initially thought that the update I got fixed the problem, turns out it was just because that forced the window to refresh, as I later encountered the same problem once I tried adding new regexes to it. | Needs Investigation | low | Major |
2,552,461,499 | vscode | Source Control Graph - "Go to file" opens full path and does not update explorer | Type: <b>Bug</b>
View a commit in the SCG, then, on the diffs tab, click the "Go to file" icon. The file opens correctly, but it uses the full path and the Explorer pane does not get updated with the file location.
VS Code version: Code 1.93.1 (Universal) (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|32.00GB (0.78GB free)|
|Process Argv|--crash-reporter-id f1725485-c9d4-4a1e-b3f7-bcbac394420d|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (12)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
prettier-vscode|esb|11.0.0
vscode-docker|ms-|1.29.3
remote-wsl|ms-|0.88.4
powershell|ms-|2024.2.2
vscode-thunder-client|ran|2.25.8
vscode-yaml|red|1.15.0
code-spell-checker|str|3.0.1
code-spell-checker-spanish|str|2.3.4
vscode-gradle|vsc|3.16.4
volar|Vue|2.1.6
</details>
<!-- generated by issue reporter -->

| feature-request,git,multi-diff-editor | medium | Critical |
2,552,471,288 | angular | Error while building the new docs in Windows 11 | ### Describe the problem that you experienced
I get the following error while trying to build adev.
### Enter the URL of the topic with the problem
_No response_
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```
[1,699 / 1,771] Action packages/platform-browser/platform-browser_docs_api.json; 6s local ... (15 actions running)
ERROR: C:/users/bampa/documents/github/angular/packages/platform-browser/testing/BUILD.bazel:28:18: Action packages/platform-browser/testing/platform-browser_testing_docs_api.json failed: (Exit 1): extract_api_to_json.bat failed: error executing command bazel-out\x64_windows-opt-exec-2B5CBBC6\bin\adev\shared-docs\pipeline\api-gen\extraction\extract_api_to_json.bat ... (remaining 2 arguments skipped)
[link_node_modules.js] An error has been reported: [Error: ENOENT: no such file or directory, unlink 'C:\Users\bampa\_bazel_bampa\b4ricovm\execroot\angular\bazel-out\x64_windows-opt-exec-2B5CBBC6\bin\adev\shared-docs\pipeline\api-gen\extraction\extract_api_to_json.bat.runfiles\angular\node_modules'] {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'unlink',
  path: 'C:\\Users\\bampa\\_bazel_bampa\\b4ricovm\\execroot\\angular\\bazel-out\\x64_windows-opt-exec-2B5CBBC6\\bin\\adev\\shared-docs\\pipeline\\api-gen\\extraction\\extract_api_to_json.bat.runfiles\\angular\\node_modules'
} Error: ENOENT: no such file or directory, unlink 'C:\Users\bampa\_bazel_bampa\b4ricovm\execroot\angular\bazel-out\x64_windows-opt-exec-2B5CBBC6\bin\adev\shared-docs\pipeline\api-gen\extraction\extract_api_to_json.bat.runfiles\angular\node_modules'
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much as detail as you can
_No response_ | area: docs-infra | low | Critical |
2,552,473,012 | transformers | Tuning generation_config in Trainer hyperparameter_search (Optuna backend) | ### Feature request
Adding generation configurations to the parameters that can be tuned in a `Trainer`.
### Motivation
When defining the Optuna hyper-parameter space, I would like to investigate whether or not different generation configurations can affect performance. For example, something as simple as: is beam search with groups better than standard beam search?
Example of implementation:
```python
def optuna_hp_space(trial):
    # Define default generation parameters
    generation_params = {
        "max_length": 512,
        "max_new_tokens": 512,
        "top_k": 20,
    }
    # Define the generation strategies and pick one with Optuna
    # REF: https://github.com/huggingface/transformers/blob/v4.44.2/src/transformers/generation/configuration_utils.py#L71
    generation_strategy_params = {
        "greedy": {"num_beams": 1, "do_sample": False},
        "contrastive_search": {"penalty_alpha": 0.1, "top_k": 10},
        "multinomial_sampling": {"num_beams": 1, "do_sample": True},
        "beam_search_decoding": {"num_beams": 5, "do_sample": False},
        "beam_search_multinomial_sampling": {"num_beams": 5, "do_sample": True},
        "diverse_beam_search_decoding": {"num_beams": 5, "num_beam_groups": 5, "diversity_penalty": 1.0},
    }
    gen_strategy = trial.suggest_categorical("generation_strategy", list(generation_strategy_params.keys()))
    generation_params.update(generation_strategy_params[gen_strategy])
    # Update the generation params with the temperature
    temperature = trial.suggest_float("temperature", 0.5, 1.1, log=False)
    generation_params["temperature"] = temperature
    # Instantiate a GenerationConfig object to pass to the Trainer arguments
    generation_config = GenerationConfig(**generation_params)
    # Setup learning rate warmup ratio
    warmup_ratio = trial.suggest_float("warmup_ratio", 0.0, 0.1, step=0.01)
    # Setup learning rate scheduler type and its fixed kwargs
    lr_scheduler_type = trial.suggest_categorical("lr_scheduler_type", ["cosine", "cosine_with_restarts", "reduce_lr_on_plateau"])  # "cosine_with_min_lr", "polynomial"
    if lr_scheduler_type == "cosine":
        lr_scheduler_kwargs = {}
    elif lr_scheduler_type == "cosine_with_restarts":
        lr_scheduler_kwargs = {"num_cycles": 5}
    elif lr_scheduler_type == "cosine_with_min_lr":
        lr_scheduler_kwargs = {"min_lr": 1e-6}
    elif lr_scheduler_type == "polynomial":
        lr_scheduler_kwargs = {"power": 1.0}
    elif lr_scheduler_type == "reduce_lr_on_plateau":
        lr_scheduler_kwargs = {"min_lr": 1e-6}
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-3, log=True),
        "lr_scheduler_type": lr_scheduler_type,
        "lr_scheduler_kwargs": lr_scheduler_kwargs,
        "warmup_ratio": warmup_ratio,
        # "generation_config": generation_params,  # <-- BREAKING: PASSING THE KWARGS
        # "generation_config": generation_config,  # <-- BREAKING: PASSING THE INSTANTIATED OBJECT
        # **{f"generation_{k}": v for k, v in generation_params.items()},  # <-- NOT BREAKING, BUT ORIGINAL VALUES ARE USED INSTEAD OF THESE
        **generation_params  # <-- NOT BREAKING, BUT ORIGINAL VALUES ARE USED INSTEAD OF THESE
    }
```
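As a stdlib-only sketch of the flat-key approach mentioned in the code comments above (the `generation_` prefix convention is an assumption for illustration, not a Trainer feature), the Optuna space can stay flat and be split back out before building a `GenerationConfig`:

```python
def split_hp_space(params, prefix="generation_"):
    """Split a flat trial dict into trainer args and generation kwargs.

    Keys starting with `prefix` are routed to the generation config;
    everything else is passed to the Trainer as usual.
    """
    gen_kwargs = {k[len(prefix):]: v for k, v in params.items() if k.startswith(prefix)}
    trainer_kwargs = {k: v for k, v in params.items() if not k.startswith(prefix)}
    return trainer_kwargs, gen_kwargs


trial_params = {
    "learning_rate": 3e-5,
    "warmup_ratio": 0.05,
    "generation_num_beams": 5,
    "generation_do_sample": False,
}
trainer_kwargs, gen_kwargs = split_hp_space(trial_params)
print(trainer_kwargs)  # {'learning_rate': 3e-05, 'warmup_ratio': 0.05}
print(gen_kwargs)      # {'num_beams': 5, 'do_sample': False}
```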
### Your contribution
Currently I'm experiencing the following error:
```log
Traceback (most recent call last):
File "/cephyr/users/ribes/Alvis/PROTAC-Splitter/src/train_model.py", line 18, in <module>
CLI([train_model, train_ppo_model])
File "/opt/conda/lib/python3.10/site-packages/jsonargparse/_cli.py", line 119, in CLI
return _run_component(component, init.get(subcommand))
File "/opt/conda/lib/python3.10/site-packages/jsonargparse/_cli.py", line 204, in _run_component
return component(**cfg)
File "/cephyr/users/ribes/Alvis/PROTAC-Splitter/protac_splitter/llms/training.py", line 277, in train_model
best_trials = trainer.hyperparameter_search(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3217, in hyperparameter_search
best_run = backend_obj.run(self, n_trials, direction, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/hyperparameter_search.py", line 72, in run
return run_hp_search_optuna(trainer, n_trials, direction, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 260, in run_hp_search_optuna
study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs, gc_after_trial=gc_after_trial)
File "/opt/conda/lib/python3.10/site-packages/optuna/study/study.py", line 475, in optimize
_optimize(
File "/opt/conda/lib/python3.10/site-packages/optuna/study/_optimize.py", line 63, in _optimize
_optimize_sequential(
File "/opt/conda/lib/python3.10/site-packages/optuna/study/_optimize.py", line 160, in _optimize_sequential
frozen_trial = _run_trial(study, func, catch)
File "/opt/conda/lib/python3.10/site-packages/optuna/study/_optimize.py", line 248, in _run_trial
raise func_err
File "/opt/conda/lib/python3.10/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
File "/opt/conda/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 247, in _objective
trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1889, in train
self._hp_search_setup(trial)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1517, in _hp_search_setup
value = type(old_attr)(value)
TypeError: GenerationConfig.__init__() takes 1 positional argument but 2 were given
```
Which makes me suspect that a _single_ `GenerationConfig` object is created _once for all trials_. This is "in contrast" to the model instantiation, which must be a `Callable`, as specified in the documentation for the `hyperparameter_search` method. | trainer,Feature request,Generation | low | Critical |
2,552,479,448 | ollama | Respect the Access-Control-Allow-Private-Network in Chrome | ### What is the issue?
I'm testing ollama from an environment hosted on repl.it running in the browser. I have a local version of ollama running with the `OLLAMA_HOST=*, https://48a38c67-3eda-41cf-804b-e04fba963d55-00-14tthqngapcgy.worf.replit.dev` (other variations result in the same error).
It looks like the new `Access-Control-Allow-Private-Network` is starting to be enforced when accessing the ollama server from a non-localhost origin.
Chrome: 129.0.6668.58 - Works
Chrome: 131.0.6742.0 - Fails with the error `Access to fetch at 'http://127.0.0.1:11434/api/generate' from origin 'https://48a38c67-3eda-41cf-804b-e04fba963d55-00-14tthqngapcgy.worf.replit.dev' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Private-Network' header was present in the preflight response for this private network request targeting the `local` address space.`
It looks like somewhere between these two versions, Chrome started requiring responses from locally hosted ollama servers to include the `Access-Control-Allow-Private-Network: true` HTTP header.
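For anyone hitting this: Chrome's Private Network Access check sends a preflight carrying `Access-Control-Request-Private-Network: true` and expects the matching allow header back. The sketch below is illustrative Python only (not ollama's actual Go server code) showing what such a preflight handler has to return:

```python
# Illustrative sketch: answer Chrome's Private Network Access preflight
# with the `Access-Control-Allow-Private-Network: true` header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PreflightHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin",
                         self.headers.get("Origin", "*"))
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        # The header newer Chrome requires before it lets a public origin
        # talk to a `local` address-space server such as 127.0.0.1:11434:
        if self.headers.get("Access-Control-Request-Private-Network") == "true":
            self.send_header("Access-Control-Allow-Private-Network", "true")
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass
```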
### OS
Linux, macOS, Windows
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12 | bug | low | Critical |
2,552,481,054 | vscode | Editor `z-index` challenges | We have seen a couple of bugs related to z-index and it time to (1) spell out all places and (2) assign values to features.
* minimap
* scrollbar slider
* sticky scroll
* zone widgets
* etc ...
| debt,editor-core | low | Critical |
2,552,528,073 | transformers | Request for a clear documentation for .generate() | ### Feature request
The `.generate()` function has a lot of parameters, for example `length_penalty` and `diversity_penalty`. However, the [documentation](https://huggingface.co/docs/transformers/v4.45.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) of this function does not document the full list of parameters, hiding them in `**kwargs`. It says that "For an overview of generation strategies and code examples, check out the [following guide](https://huggingface.co/docs/transformers/v4.45.1/en/generation_strategies)". However, this guide also does not document the full list of parameters. It only says that "For the complete list of the available parameters, refer to the [API documentation](https://huggingface.co/docs/transformers/v4.45.1/en/main_classes/text_generation.md)." However, that link is broken. So, could you please document all the available parameters?
UPD: I found the [full documentation](https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) through Google in an earlier version of Transformers. Why was it removed?
### Motivation
The full documentation for `.generate()` is either missing or hidden somewhere
### Your contribution
No, sorry, I can't write the documentation; on the contrary, I need it to understand how the library works. | Feature request | low | Critical |
2,552,543,970 | tauri | [bug] WebSocket HMR not working as expected with Nuxt | ### Describe the bug
The method described in this [document](https://v2.tauri.app/start/frontend/nuxt/) does not make HMR work on mobile devices.
This is because Nuxt does not use Vite's HMR config. Nuxt rewrites the Vite config and uses `ws://$host:$port/_nuxt/` to provide HMR:
```log
09-27 18:01:57.473 9019 9019 E Tauri/Console: File: http://tauri.localhost/_nuxt/@vite/client - Line 535 - Msg: WebSocket connection to 'ws://tauri.localhost/_nuxt/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
09-27 18:01:57.473 9019 9019 E Tauri/Console: File: http://tauri.localhost/_nuxt/@vite/client - Line 535 - Msg: Uncaught (in promise) SyntaxError: Failed to construct 'WebSocket': The URL 'ws://localhost:undefined/_nuxt/' is invalid.
```
It tries to connect to `localhost:undefined` and `tauri.localhost`, but this is incorrect.
I have tried all endpoints and none offer WebSocket support; this may require a direct connection to the host.
### Reproduction
From official document: https://v2.tauri.app/start/frontend/nuxt/
```bash
adb devices # connect a Android devices ...
pnpm tauri android dev
```
If this is not detailed enough, please let me know and I will provide more reproduction details.
### Expected behavior
No error.
### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
โ WebView2: 129.0.2792.52
โ MSVC: Visual Studio Community 2022
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.14.0
- pnpm: 9.11.0
- npm: 10.7.0
- bun: 1.1.28
[-] Packages
- tauri ๐ฆ: 2.0.0-rc.15
- tauri-build ๐ฆ: 2.0.0-rc.12
- wry ๐ฆ: 0.43.1
- tao ๐ฆ: 0.30.1
- @tauri-apps/api ๎: 2.0.0-rc.5
- @tauri-apps/cli ๎: 2.0.0-rc.16
[-] Plugins
- tauri-plugin-log ๐ฆ: 2.0.0-rc.2
- @tauri-apps/plugin-log ๎: not installed!
[-] App
- build-type: bundle
- CSP: connect-src ws://*
- frontendDist: ../dist
- devUrl: http://localhost:3000/
- framework: Vue.js (Nuxt)
- bundler: Webpack
```
### Stack trace
```text
09-27 18:02:41.207 9019 9019 E Tauri/Console: File: http://tauri.localhost/_nuxt/@vite/client - Line 535 - Msg: WebSocket connection to 'ws://tauri.localhost/_nuxt/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
09-27 18:02:41.209 9019 9019 E Tauri/Console: File: http://tauri.localhost/_nuxt/@vite/client - Line 535 - Msg: Uncaught (in promise) SyntaxError: Failed to construct 'WebSocket': The URL 'ws://localhost:undefined/_nuxt/' is invalid.
09-27 18:04:34.822 9019 9019 I HwViewRootImpl: removeInvalidNode jank list is null
09-27 18:04:37.867 9019 9019 I HwViewRootImpl: removeInvalidNode jank list is null
09-27 18:04:40.996 9019 9019 E Tauri/Console: File: http://tauri.localhost/__nuxt_devtools__/client/_nuxt/l4ouzdbv.js - Line 8 - Msg: Uncaught (in promise) Error: [birpc] timeout on calling "getOptions"
09-27 18:04:41.307 9019 9019 E Tauri/Console: File: http://tauri.localhost/__nuxt_devtools__/client/_nuxt/l4ouzdbv.js - Line 8 - Msg: Uncaught (in promise) Error: [birpc] timeout on calling "getModuleOptions"
09-27 18:04:41.309 9019 9019 E Tauri/Console: File: http://tauri.localhost/__nuxt_devtools__/client/_nuxt/l4ouzdbv.js - Line 8 - Msg: Uncaught (in promise) Error: [birpc] timeout on calling "getOptions"
09-27 18:04:41.312 9019 9019 E Tauri/Console: File: http://tauri.localhost/__nuxt_devtools__/client/_nuxt/l4ouzdbv.js - Line 8 - Msg: Uncaught (in promise) Error: [birpc] timeout on calling "telemetryEvent"
09-27 18:04:41.314 9019 9019 E Tauri/Console: File: http://tauri.localhost/__nuxt_devtools__/client/_nuxt/l4ouzdbv.js - Line 8 - Msg: Uncaught (in promise) Error: [birpc] timeout on calling "getOptions"
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,552,559,149 | pytorch | Pytorch picks wrong cuda version for building extensions | ### ๐ Describe the bug
I have on my arch linux system cuda 12.6
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:10:22_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0
CUDA_HOME=""
CUDA_PATH=/opt/cuda
```
However, I want to build a PyTorch extension with CUDA 11.8.
So, I create a conda environment with this env.yml
```env.yml
name: wrong-cuda
channels:
- nvidia/label/cuda-11.8.0
- pytorch
- conda-forge
dependencies:
- python 3.10.*
- cuda *
- pytorch-cuda 11.8.*
- pytorch 2.*
- torchvision >=0.17.0,<0.18
- gcc 11.*
- gxx >=11.4.0,<11.5
- setuptools >=75.1.0,<76
- numpy 1.26.*
- pip
```
and run:
```
conda env create --file env.yml;
conda activate wrong-cuda
```
Then I clone a simple repo containing a PyTorch extension and install it with pip:
```
git clone https://gitlab.inria.fr/bkerbl/simple-knn
pip install ./simple_knn
```
It returns
```
The detected CUDA version (12.6) mismatches the version that was used to compile
PyTorch (11.8). Please make sure to use the same CUDA versions.
```
Even if nvcc --version shows correctly 11.8 not 12.6
**The workaround:**
```export CUDA_HOME=$CONDA_PREFIX```
**However, the user should not have to set the CUDA_HOME environment variable to build this.**
PyTorch should use the CUDA toolkit reported by `nvcc --version`.
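For context, this is (to my understanding — a simplified sketch, not the verbatim implementation) the lookup order `torch.utils.cpp_extension` uses to locate the toolkit; note that the environment variables are consulted *before* `nvcc` on `PATH`:

```python
import os
import shutil

def find_cuda_home(env):
    """Simplified sketch of torch.utils.cpp_extension._find_cuda_home."""
    # 1. Explicit environment variables win (empty strings are ignored).
    home = env.get("CUDA_HOME") or env.get("CUDA_PATH")
    if home:
        return home
    # 2. Otherwise, derive the home from whichever `nvcc` is on PATH.
    nvcc = shutil.which("nvcc", path=env.get("PATH", ""))
    if nvcc:
        return os.path.dirname(os.path.dirname(nvcc))
    # 3. Finally, fall back to the default install location.
    return "/usr/local/cuda"
```

On the system above, `CUDA_HOME` is empty but `CUDA_PATH=/opt/cuda` is set, so step 1 already resolves to the system 12.6 toolkit and the conda environment's `nvcc` is never consulted — which is why `export CUDA_HOME=$CONDA_PREFIX` works around it.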
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: EndeavourOS Linux (x86_64)
GCC version: (conda-forge gcc 11.4.0-13) 11.4.0
Clang version: 18.1.8
CMake version: version 3.30.3
Libc version: glibc-2.40
Python version: 3.10.15 | packaged by conda-forge | (main, Sep 20 2024, 16:37:05) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.10.10-arch1-1-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3060
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7900X 12-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 81%
CPU max MHz: 5733,0000
CPU min MHz: 545,0000
BogoMIPS: 9385,66
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torchvision==0.17.2
[pip3] triton==2.2.0
[conda] blas 1.0 mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libopenvino-pytorch-frontend 2024.4.0 h5888daf_0 conda-forge
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] pytorch 2.2.2 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 2.2.0 py310 pytorch
[conda] torchvision 0.17.2 py310_cu118 pytorch
```
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim | module: cpp-extensions,module: cuda,triaged | low | Critical |
2,552,576,275 | godot | Debug Breakpoint are not triggered in Web Exports | ### Tested versions
- 4.4 dev2
### System information
MacOS , Windows 11
### Issue description
When running a Godot project from the editor targeting a web export, the game running in the browser successfully connects to the debugger, but breakpoint functionality does not work.
All other debugging features (Profiling, Monitors) are supported and functioning correctly.
However, breakpoints are never triggered.
### Steps to reproduce
- Open a Godot project.
- Set a breakpoint in any script.
- Enable Remote Debug from the Debug menu.
- Run the project from the editor with a web export target.
- Observe that the game connects to the debugger, but breakpoints are not triggered.
### Minimal reproduction project (MRP)
[WebDebug.zip](https://github.com/user-attachments/files/17162815/WebDebug.zip)
| bug,platform:web,topic:editor,topic:porting,needs testing | low | Critical |
2,552,609,346 | kubernetes | Sidecar containers can be interpreted as init containers | ### Scenario
You're given a manifest (or template, etc.) to run, you try it, and the app doesn't work the way you expect. When you investigate, you see that a long-lived sidecar container is being run as an init container, so the app never actually starts.
You wrongly conclude that the manifest (or template, etc) is wrong and that a sidecar it specifies should actually be defined as an app container. In fact, you have misunderstood that Kubernetes defines sidecars within a field named `initContainers`.
:information_source: At the time of writing, Kubernetes v1.31 was the most recent minor release of Kubernetes.
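For reference, a hypothetical manifest (names invented) illustrating the distinction: since the `SidecarContainers` feature (added in v1.28), a sidecar is an `initContainers` entry with `restartPolicy: Always`, and stripping that single field silently demotes it to a plain init container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  initContainers:
  - name: init-migrate        # classic init container: runs to completion first
    image: example.com/migrate
  - name: log-shipper         # sidecar: same list, distinguished only by...
    image: example.com/logship
    restartPolicy: Always     # ...this field; if an old server or a lossy
                              # webhook drops it, the container blocks startup
  containers:
  - name: app
    image: example.com/app
```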
### Challenge
I think older `kubectl` and older Kubernetes (no longer supported) may treat a new-style sidecar as an init container because older Kubernetes defaults to ignoring unrecognized fields.
Also, [poorly implemented] mutating admission webhooks can strip out the unknown fields even with the latest Kubernetes releases.
Before graduating to GA, perhaps we can ensure that this doesn't happen. I am not sure how but it would be great if somehow manifests that specify sidecars can't ever be misunderstood as manifests that specify init containers.
See https://github.com/kubernetes/website/pull/48101 and https://github.com/kubernetes/website/pull/48014
I am not sure how, but several people have become convinced that the docs are wrong and that the sample manifest needs fixing. It is possibly a localization snag, but I also suspect that the particular way we're changing an existing API doesn't help.
Relevant to https://github.com/kubernetes/enhancements/issues/753
/sig node | sig/node,needs-triage | medium | Minor |
2,552,649,791 | godot | Object with "Distance Fade" enabled does not cast shadows. | ### Tested versions
- 4.3
### System information
Win 10
### Issue description
The wall has Distance Fade enabled. It stops casting shadows.

https://github.com/godotengine/godot/issues/69329#issuecomment-1330895397
>To resolve this, use the Pixel Dither or Object Dither transparency mode for Distance Fade.
Pixel Dither does not resolve this.
Also, if this is intended, then the tooltip should say that the object will stop casting shadows.

But the object still casts shadows from direct light:

This makes me think that the light's camera is affected by the same shader and is just too close to the object, which means the light camera pass should use different settings or a different shader.
Changing the distance to a very small number fixes the shadows but, of course, defeats the purpose of the feature:

Another detail:
This is a GridMap, which means the material was assigned to the mesh's material slot and not to the Override Material slot of a MeshInstance (because GridMap has no Override Material slot, but it should!).

### Steps to reproduce
Enable Distance Fade on material and put light close to object.
### Minimal reproduction project (MRP)
N/A | bug,topic:rendering,needs testing,topic:3d | low | Minor |
2,552,679,058 | neovim | remote: "--server name" should connect to the "name" (as opposed to file/address) | ### Problem
`--listen foo` causes nvim to create a unix socket at `/run/user/<UID>/foo.<PID>.<COUNTER>`.
`--server foo` causes nvim to try to connect to a unix socket `foo` in the current directory.
This means that starting a server called 'foo' and then trying to connect to it doesn't work.
### Steps to reproduce
```sh
nvim --headless --listen foo &
nvim --remote-ui --server foo
# observe: connection refused error
```
### Expected behavior
Rather than erroring out, when the server name passed to `--server` doesn't contain any slashes: if there is a single matching socket in `/run/user/<UID>/<NAME>.*`, connect to that; otherwise, list the candidate servers and exit with an error.
This would improve the typical case (there's a single server called `foo`), and improve the (already erroring) invocation in other cases.
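A sketch of the proposed resolution logic (a hypothetical helper in Python purely for illustration — the real change would live in Nvim's C sources):

```python
import glob
import os

def resolve_server(name, runtime_dir):
    """Resolve a bare --server NAME against the nvim runtime directory."""
    if os.sep in name:
        # Looks like a path/address: use it verbatim (today's behavior).
        return name
    matches = sorted(glob.glob(os.path.join(runtime_dir, name + ".*")))
    if len(matches) == 1:
        # Unambiguous: connect to the single server with that name.
        return matches[0]
    # Zero or several candidates: list them and exit with an error.
    raise SystemExit(
        f"--server {name}: {len(matches)} candidate sockets: {matches}")
```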
### Nvim version (nvim -v)
NVIM v0.11.0-dev-842+ga9287dd88
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Arch Linux
### Terminal name/version
wezterm
### $TERM environment variable
xterm-256color
### Installation
`neovim-nightly-bin` AUR package (installs nightly github release) | enhancement,server | low | Critical |
2,552,689,905 | langchain | Bug: model Field Missing Error in Replicate Class Due to Misidentification in Validator | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import Replicate

replicate = Replicate(
    model="meta/meta-llama-3-405b-instruct",
    model_kwargs={"temperature": 0.7},
)
```
### Error Message and Stack Trace (if applicable)
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for Replicate
model
  Field required [type=missing, input_value=..., input_type=..., ...]
```
### Description
When initializing the `Replicate` class, the `model` field is incorrectly treated as an extra field and moved into `model_kwargs`, resulting in a validation error. This happens because of the way the `build_extra` validator processes field aliases.
Suggested Fix:
In the `build_extra` method, replace the use of `field.alias` with `field.name` to ensure proper recognition of all fields:

```python
@model_validator(mode="before")
@classmethod
def build_extra(cls, values: Dict[str, Any]) -> Any:
    all_required_field_names = {field.name for field in get_fields(cls).values()}
    # Remaining logic...
```
Or defining "model" as an alias in a Field can also solve the problem :
`class Replicate(LLM):
model: str = Field(..., alias="model")`
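The failure mode can be reproduced without pydantic at all. In this stripped-down stand-in (hypothetical field objects that only mirror the validator's logic), matching incoming keys against `alias` — which is `None` for `model` — moves `model` into `model_kwargs`, while matching against `name` keeps it in place:

```python
from types import SimpleNamespace

# Stand-ins for pydantic v2 FieldInfo objects: `model` has no alias set.
FIELDS = {
    "model": SimpleNamespace(name="model", alias=None),
    "model_kwargs": SimpleNamespace(name="model_kwargs", alias=None),
}

def build_extra(values, key_attr):
    """Mimic the validator: move keys not recognized as fields into model_kwargs."""
    known = {getattr(f, key_attr) for f in FIELDS.values()}
    extra = values.setdefault("model_kwargs", {})
    for key in list(values):
        if key not in known and key != "model_kwargs":
            extra[key] = values.pop(key)
    return values

# Matching on aliases (all None here) wrongly treats `model` as extra,
# so the required field ends up missing and validation fails.
buggy = build_extra({"model": "meta/meta-llama-3-405b-instruct"}, "alias")
# Matching on names recognizes `model` and leaves it in place.
fixed = build_extra({"model": "meta/meta-llama-3-405b-instruct"}, "name")
```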
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.12.6 (main, Sep 10 2024, 00:05:17) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.6
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.129
> langchain_chroma: 0.1.4
> langchain_huggingface: 0.1.0
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.3.0
> langserve: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: 4.0.3
> chromadb: 0.5.7
> dataclasses-json: 0.6.7
> fastapi: 0.114.2
> httpx: 0.27.2
> huggingface-hub: 0.24.7
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.45.1
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.1
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> sentence-transformers: 3.1.0
> SQLAlchemy: 2.0.34
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> tiktoken: 0.7.0
> tokenizers: 0.19.1
> transformers: 4.44.2
> typing-extensions: 4.12.2 | help wanted,๐ค:bug,stale | low | Critical |
2,552,707,922 | pytorch | INTERNAL ASSERT FAILED in `torch.cuda.current_stream/default_stream/ExternalStream/set_per_process_memory_fraction` | ### ๐ Describe the bug
torch.cuda.current_stream/default_stream raise a false INTERNAL ASSERT FAILED when the device does not exist, accompanied by the message: "please report a bug to PyTorch."
**torch.cuda.current_stream/default_stream**
minimal example:
```
import torch
device = torch.cuda.device_count() + 1
torch.cuda.current_stream(device) # INTERNAL ASSERT FAILED
torch.cuda.default_stream(device) # INTERNAL ASSERT FAILED
torch.cuda.set_per_process_memory_fraction(0.5, device) # INTERNAL ASSERT FAILED
```
output:
```
Traceback (most recent call last):
File "/home/work/mannul/pytorch/torch.cuda.current_stream.py", line 5, in <module>
torch.cuda.current_stream(device)
File "/home/miniconda3/envs/torch2.4.1/lib/python3.9/site-packages/torch/cuda/__init__.py", line 918, in current_stream
streamdata = torch._C._cuda_getCurrentStream(
RuntimeError: device_index >= 0 && device_index < num_gpus INTERNAL ASSERT FAILED at "../c10/cuda/CUDAStream.cpp":247, please report a bug to PyTorch
```
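Until the error message is improved, a user-side guard (hypothetical helper, plain Python) turns the internal assert into an ordinary, readable error; the same bounds check applies to all three APIs above:

```python
def checked_device_index(index, num_gpus):
    """Validate a CUDA device index before passing it to stream/memory APIs."""
    if not 0 <= index < num_gpus:
        raise ValueError(
            f"invalid CUDA device index {index}: "
            f"expected 0 <= index < {num_gpus}")
    return index

# e.g. torch.cuda.current_stream(
#          checked_device_index(device, torch.cuda.device_count()))
```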
**torch.cuda.ExternalStream**
minimal example:
```
import torch
class MyExternalStreamClass:
    def __init__(self):
        self.stream = torch.cuda.ExternalStream(5)

    def launch_kernel_in_external_stream(self):
        event = torch.cuda.Event()
        event.record(self.stream)
        event.wait(block=False)
        print("Kernel execution status:", event.is_set())

external_stream = MyExternalStreamClass()
external_stream.launch_kernel_in_external_stream()
```
output:
```
Traceback (most recent call last):
File "", line 14, in <module>
external_stream.launch_kernel_in_external_stream()
File "", line 9, in launch_kernel_in_external_stream
event.record(self.stream)
File "", line 185, in record
super().record(stream)
RuntimeError: streamType >= 1 && streamType <= max_stream_priorities INTERNAL ASSERT FAILED at "../c10/cuda/CUDAStream.cpp":290, please report a bug to PyTorch. Unrecognized stream stream 5 on device cuda:0 (I didn't recognize the stream type, PRIORITY 2 with the value )
```
Note that "I didn't recognize the stream type, PRIORITY 2 with the value" is the output, not my comment
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim | module: cuda,triaged,actionable | low | Critical |
2,552,750,849 | ui | [bug]: NavigationMenuTrigger - Next.js: Error: React.Children.only expected to receive a single React element child | ### Describe the bug
When I use the `asChild` prop and pass a single child to the `<NavigationMenuTrigger />` component, I receive the `Next.js: Error: React.Children.only expected to receive a single React element child` error even though I am only passing one child. For example, the following triggers the error (as you can see, there is only one child):
```ts
<NavigationMenuTrigger asChild>
<Button variant="ghost" className="relative h-10 w-10 rounded-full">
<Avatar className="h-10 w-10">
<AvatarImage src={image} alt="Avatar" />
<AvatarFallback>{initials ? initials : "U"}</AvatarFallback>
</Avatar>
</Button>
</NavigationMenuTrigger>
```
I have checked the source of what is added when I run `npx shadcn@latest navigation-menu`, and I have traced the issue to the fact that the current form of the element doesn't take the `asChild` prop into account when rendering children, as you can see here:
```ts
<NavigationMenuPrimitive.Trigger
ref={ref}
className={cn(navigationMenuTriggerStyle(), "group", className)}
{...props}
>
{children}{" "}
<ChevronDown
className="relative top-[1px] ml-1 h-3 w-3 transition duration-200 group-data-[state=open]:rotate-180"
aria-hidden="true"
/>
</NavigationMenuPrimitive.Trigger>
```
I have fixed this by conditionally rendering the chevron icon as follows:
```ts
<NavigationMenuPrimitive.Trigger
ref={ref}
className={cn(navigationMenuTriggerStyle(), "group", className)}
{...props}
>
{!props.asChild ? (
<>
{children}{" "}
<ChevronDown
className="relative top-[1px] ml-1 h-3 w-3 transition duration-200 group-data-[state=open]:rotate-180"
aria-hidden="true"
/>
</>
) : (
children
)}
</NavigationMenuPrimitive.Trigger>
```
### Affected component/components
NavigationMenuTrigger
### How to reproduce
1. Setup a `<NavigationMenuItem />` component
2. Add a `<NavigationMenuTrigger asChild>` component
3. Add a single child to the `<NavigationMenuTrigger asChild>` component (as above)
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Don't think this is relevant as the issue is pretty standard, I have a fix and will make a PR
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,552,764,708 | ui | [feat]: add a notice about outdated figma or remove the link completely | ### Feature description
The Figma link provided on Shadcn UI's website is completely outdated; the last update was two years ago. It would be good to add a notice that it is outdated, or at least remove it from the site entirely to avoid confusion.
### Affected component/components
_No response_
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,552,783,500 | opencv | Gemm layer handles inputs and blobs incorrectly with the new DNN engine | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
TODO: With the new parser the function should be smart enough to figure out the operation mode from the number of 'inputs' and number of 'blobs'. Note, however, that 'inputs' may not be set yet in the constructor.
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Minor |
2,552,809,596 | kubernetes | Add a trace provider to device-plugin and DRA gRPC server invocations | ### What would you like to be added?
Add a trace provider to the device-plugin and DRA gRPC server invocations.
### Why is this needed?
So we can view the gRPC calls in the trace UI. | sig/node,kind/feature,lifecycle/rotten,needs-triage | low | Major |
2,552,866,601 | rust | [DESIGN BUG] declarative macros lack a neat way to simulate lookahead within Rust grammar syntax `const X: Y` |
I tried this code:
A macro that matches and echoes a non-`const X: Y` pattern works fine:
```rust
macro_rules! echo1 {
    (pub type $ident:ident<$($gi:ident),*> = $($tt:tt)*) => {
        pub type $ident<$($gi),*> = $($tt)*;
    };
}
echo1!(pub type Foo1<T, N> = (T, N));
```
But within the current Rust design, for a declarative macro to handle the lookahead-requiring Rust grammar syntax `const X: Y`, you must either:
- take Rust code as input and simulate the lookahead, or
- take a lookahead-free DSL as input and emit the lookahead-requiring code
```rust
macro_rules! echo2 {
    (@derive_foo pub type $ident:ident<$($gi:ident $(lookahead_qualifier=$gq:tt)? $(: $gt:ty)?),*> = $($tt:tt)*) => {
        pub type $ident<$($($gq)? $gi $(: $gt)?),*> = $($tt)*;
    };
    (@derive_bar pub type $ident:ident<$($gi:ident $(lookahead_qualifier=$gq:tt)? $(: $gt:ty)?),*> = $($tt:tt)*) => {
        pub type $ident<$($($gq)? $gi $(: $gt)?),*> = $($tt)*;
    };
    (@lookahead_workaround pub type $ident:ident<$($gi:ident $(lookahead_qualifier=$gq:tt)? $(: $gt:ty)?),*> = $($tt:tt)*) => {
        pub type $ident<$($($gq)? $gi $(: $gt)?),*> = $($tt)*;
    };
    (pub type $ident:ident<$($(const)? $gi:ident $(: $gt:ty)?),*> = $($tt:tt)*) => {
        pub type $ident<$($(const)? $gi $(: $gt)?),*> = $($tt)*;
    };
}
// // TODO: https://github.com/rust-lang/rust/issues/130928
// echo2!(pub type Foo2<T, const N: usize> = [T; N]);
echo2!(@lookahead_workaround pub type Foo2<T, N lookahead_qualifier=const: usize> = [T; N]);
```
I expected to see this happen:
- A neat way to match the lookahead within the Rust grammar syntax `const X: Y`
Instead, this happened:
- Having to simulate the lookahead within the Rust grammar syntax `const X: Y`
- This leads to ugly simulation code.
- Thus I think it's a design bug.
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (9b72238eb 2024-09-14)
binary: rustc
commit-hash: 9b72238eb813e9d06e9e9d270168512fbffd7ee7
commit-date: 2024-09-14
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
## Related
Also found a similar issue back in 2021 in `pin-project-lite`.
So, I guess there is no neat way yet.
https://github.com/taiki-e/pin-project-lite/issues/62
| A-macros,T-lang,C-discussion | low | Critical |
2,552,905,491 | godot | Input.joy_connection_changed bug on disconnect joypad. | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Windows 10, v4.3.stable.official [77dcf97d8], Vulkan
### Issue description
When connecting and disconnecting joypads, information is transmitted via a signal to a callback, and the array `Input.get_connected_joypads()` is updated as well.
When 2 joypads are connected one after the other (!), each one triggers the signal with `connected = true`.
When disconnecting the joypads one after the other (!), the signal with `connected = false` **does not** fire if joypad #1 on `device 0` was disconnected first.
Thus, joypad #2 on `device 1` remains in `Input.get_connected_joypads()` while all joypads are actually disconnected.
The documentation says that the signal is emitted every time a connection or disconnection occurs, which does not happen here.
### Steps to reproduce
1) Create a simple 2D scene.
2) Attach a script with the code:
```
extends Node2D

func _ready() -> void:
    Input.joy_connection_changed.connect(_on_joy_connection_changed)

func _on_joy_connection_changed(device: int, connected: bool):
    if connected:
        prints("Joypad Array", Input.get_connected_joypads())
    else:
        prints("Joypad Array", Input.get_connected_joypads())
```
3) Run the scene and start connecting joypads
4) Then when we see Joypad Array [0, 1] in the console:
- a) Disconnect in reverse order - joypad#2, then joypad#1. Everything works, Joypad Array [] all joypads are excluded.
- b) Connect back, first joypad#1 then joypad#2. After that, first disconnect joypad#1 (`device0`) then disconnect joypad#2 (`device1`). Joypad Array [1] all joypads are excluded. The list of joypads should be Joypad Array [].
5) If after this you connect any joypad that gets to device0, then the `disconnected = true` signal is triggered and immediately `connected = true` is triggered. The list of devices becomes Joypad Array [0].
### Minimal reproduction project (MRP)
[Uploading Input bug Project.zip…]() | bug,topic:input | low | Critical |
2,552,913,262 | PowerToys | High amount of quick accent options results in some being rendered outside of the screen range | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update, Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Quick Accent
### Steps to reproduce
Use a vertical monitor / monitor orientation
Start typing on said vertical monitor or monitor with the vertical orientation
Initiate quick accent on the letter E or any other letter with a high amount of quick accent options
### ✔️ Expected Behavior
Wrap the accent options to a new line, or shift the visible range so that all options stay on screen.
### ❌ Actual Behavior

Outermost elements are rendered off-screen.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Major |
2,552,924,645 | godot | 2D contact points at incorrect position, incorrect body interaction | ### Tested versions
Reproducible in 3.5.x , 3.6
### System information
Windows 10 - v3.5.3.stable.mono.official [6c814135b] .NET
### Issue description
The interaction between 2D bodies is occasionally wrong: bodies get stuck at unnatural angles to the ground that should be unstable. This seems to be connected to the fact that contact points are sometimes calculated at wrong positions, well away from the body edges where they could actually touch.

A RigidBody2D (in red) stuck at a weird angle to a StaticBody2D (light blue), while at the same time sliding freely across its surface. Blue crosses mark the contact points as reported for the rigid body by Physics2DDirectBodyState. The wrong contact point appears to be placed at one of the vertices of the static polygon.
Normally contact points should appear where the bodies' edges actually touch, and the rigid body would fall counter-clockwise to rest on its wider edge.
### Steps to reproduce
The issue is observed when a RigidBody2D (polygon) slides into contact with a complex-shaped StaticBody2D (polygon). The specific conditions are unknown, but it is highly reproducible in certain situations, as in the attached project.
### Minimal reproduction project (MRP)
[testcollision.zip](https://github.com/user-attachments/files/17164303/testcollision.zip)
| bug,topic:physics,topic:2d | low | Minor |
2,553,020,988 | storybook | [Bug]: Importing type from another Vue component breaks storybook | ### Describe the bug
I encounter the following error when importing a type from another Vue component in a TypeScript Vue 3 setup:
```
15:26:47 [vite] Internal server error: Unexpected token, expected "</>/<=/>=" (2:10)
Plugin: storybook:vue-docgen-plugin
File: /home/projects/github-6e1nvm/src/stories/Issue.vue:8:7
1 | import { defineComponent as _defineComponent } from "vue";
| ^
2 | import Button from "./Button.vue";
3 | const _sfc_main = /* @__PURE__ */ _defineComponent({
at constructor (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:362:19)
at TypeScriptParserMixin.raise (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:3260:19)
at TypeScriptParserMixin.unexpected (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:3280:16)
at TypeScriptParserMixin.expect (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:3590:12)
at TypeScriptParserMixin.tsParseTypeAssertion (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:8447:10)
at TypeScriptParserMixin.parseMaybeUnary (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:9475:19)
at TypeScriptParserMixin.parseMaybeUnaryOrPrivate (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:10403:61)
at TypeScriptParserMixin.parseExprOps (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:10408:23)
at TypeScriptParserMixin.parseMaybeConditional (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:10385:23)
at TypeScriptParserMixin.parseMaybeAssign (file:///home/projects/github-6e1nvm/node_modules/@babel/parser/lib/index.js#cjs:10348:21)
15:26:47 [vite] Pre-transform error: Unexpected token, expected "</>/<=/>=" (2:10)
```
I created a `Button.vue` component with a `ButtonProps` type exported:
```ts
export type ButtonProps = {
  /**
   * The label of the button
   */
  label: string;
  /**
   * primary or secondary button
   */
  primary?: boolean;
  /**
   * size of the button
   */
  size?: 'small' | 'medium' | 'large';
  /**
   * background color of the button
   */
  backgroundColor?: string;
};
```
I created another component (`Issue.vue`) and tried to import `ButtonProps` using `import type`:
```ts
import Button from './Button.vue';
import type { ButtonProps } from './Button.vue';
const props = withDefaults(defineProps<ButtonProps>(), { primary: false });
```
I am not sure why this error occurs, and any guidance or help would be greatly appreciated. If it's a known issue with `vue-docgen-plugin`, please let me know of a potential workaround or fix.
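For what it's worth, a common workaround when a docgen plugin chokes on type imports from SFCs is to move the shared type into a plain `.ts` module, so that `Issue.vue` never has to do `import type ... from './Button.vue'`. A sketch (the `Button.types.ts` filename and the helper function are hypothetical, untested against this exact setup):

```typescript
// Button.types.ts — hypothetical sibling module holding the shared type,
// so both Button.vue and Issue.vue import it from a .ts file, not a .vue file.
export type ButtonProps = {
  label: string;
  primary?: boolean;
  size?: 'small' | 'medium' | 'large';
  backgroundColor?: string;
};

// Runtime helper purely for illustration; the type itself is erased at compile time.
export function isButtonProps(p: ButtonProps): boolean {
  return typeof p.label === 'string';
}
```

In `Issue.vue` the import would then become `import type { ButtonProps } from './Button.types';`, keeping the docgen plugin out of the type-resolution path for `.vue` files.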
### Reproduction link
https://stackblitz.com/edit/github-6e1nvm?file=src%2Fstories%2FIssue.vue
### Reproduction steps
1. Go to above link
2. Hover on the sidebar: this raises the error overlay.
3. Click on Issue/Docs: the component failed to render properly ("Failed to fetch dynamically imported module: [...]/src/stories/Issue.stories.ts")
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm <----- active
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/addon-interactions: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/addon-onboarding: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/blocks: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/test: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/vue3: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/vue3-vite: ^8.4.0-alpha.1 => 8.4.0-alpha.1
storybook: ^8.4.0-alpha.1 => 8.4.0-alpha.1
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,553,040,075 | material-ui | [tabs] Add the ability to modify a tab via context | ### Summary
Currently, MUI only allows retrieving the value of a tab through context, but it does not support the ability to modify or manage tabs dynamically via context. It would be beneficial to extend the functionality by adding the ability to modify the current tab value and other tab parameters through a context API.
### Examples
An example implementation could reference the Material Design specification, which outlines the possibility of dynamic component management via contexts. This would enable the creation of more flexible and interactive interfaces where tabs can be changed based on different conditions in the application.
```
<TabContext value={value} setValue={setValue}>
  <Tabs>
    ...
  </Tabs>
  {/* Inside a child component, you can get setValue through the context */}
  <TabPanel>
    ...
  </TabPanel>
</TabContext>
```
**Search keywords**: tabs, context | new feature,waiting for ๐,component: tabs | low | Minor |
2,553,103,789 | vscode | SCM - Going up past the start of a commit message goes into history and sometimes loses the commit message you were typing |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.85.2
- OS Version: Windows 11
Steps to Reproduce:
1. click on source control on the bar on the left side that causes it to scroll
2. start typing a long commit message, then hit up arrow to go to the top, but overshoot and now it is going back in history
3. going back down sometimes it brings back what you were typing, but often it does not and you have to start over :(
I *really* want to disable this behavior so that VSCode will NOT scroll back to previous commit messages just by hitting the up arrow
| scm,under-discussion | low | Critical |
2,553,137,520 | ui | [bug]:Installation error, prompt read ECONNRESET | ### Describe the bug
Failed to run the installation using the latest documentation for the Next.js project.
npx shadcn@latest init
### Affected component/components
failing to setup the cli
### How to reproduce
node 20.9.0
npm 10.1.0
Created a Next.js project using the command `npx create-next-app@latest`, and then executed `npx shadcn@latest init` in the Next.js directory.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
D:\next_Learn> npx shadcn@latest init
✔ Preflight checks.
✔ Verifying framework. Found Next.js.
✔ Validating Tailwind CSS.
✔ Validating import alias.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
request to https://ui.shadcn.com/r/styles/index.json failed, reason: read ECONNRESET
```
### System Info
```bash
window 11 x64
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,553,174,786 | pytorch | ONNX export: torch.onnx.errors.SymbolicValueError: Unsupported prim::Constant kind: 'ival' | ### ๐ Describe the bug
Hi!
Trying to export a model of mine to ONNX with a sub-dependency ([torchsde](https://github.com/google-research/torchsde)) that makes use of [torch Generators](https://pytorch.org/docs/stable/generated/torch.Generator.html#torch.Generator). I can't upload the model code but here is an MRE that triggers the same error:
```py
import torch


def _randn(size: list[int], device: torch.device):
    generator = torch.Generator(device=device)
    return torch.randn(size, device=device, generator=generator)


class MyModule(torch.nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()

    def forward(self, x):
        val = _randn(x.shape, torch.device("cpu"))
        return val


m = MyModule()
torch.onnx.export(MyModule(), torch.randn(1, 2), 'test.onnx')
```
# Observed behaviour
Running this outputs:
```
$ python issue.py
Traceback (most recent call last):
File "/home/myrepo/issue.py", line 16, in <module>
torch.onnx.export(MyModule(), torch.randn(1, 2), 'test.onnx')
File "/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 551, in export
_export(
File "/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1648, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1174, in _model_to_graph
graph = _optimize_graph(
File "/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 714, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1997, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/symbolic_opset9.py", line 6952, in prim_constant
raise errors.SymbolicValueError(
torch.onnx.errors.SymbolicValueError: Unsupported prim::Constant kind: 'ival'. Please send a bug report at https://github.com/pytorch/pytorch/issues. [Caused by the value '26 defined in (%26 : Generator = prim::Constant[value=torch.Generator(device="cpu", seed=67280421310721)](), scope: __main__.MyModule:: # /home/myrepo/issue.py:5:0
)' (type 'Generator') in the TorchScript graph. The containing node has kind 'prim::Constant'.]
(node defined in /home/myrepo/issue.py(5): _randn
/home/myrepo/issue.py(12): forward
/home/myrepo/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/home/myrepo/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py(1562): _call_impl
/home/myrepo/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/home/myrepo/.venv/lib/python3.10/site-packages/torch/jit/_trace.py(132): wrapper
/home/myrepo/.venv/lib/python3.10/site-packages/torch/jit/_trace.py(141): forward
/home/myrepo/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py(1562): _call_impl
/home/myrepo/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/home/myrepo/.venv/lib/python3.10/site-packages/torch/jit/_trace.py(1497): _get_trace_graph
/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py(950): _trace_and_get_graph_from_model
/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py(1046): _create_jit_graph
/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py(1170): _model_to_graph
/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py(1648): _export
/home/myrepo/.venv/lib/python3.10/site-packages/torch/onnx/utils.py(551): export
/home/myrepo/issue.py(16): <module>
)
Inputs:
Empty
Outputs:
#0: 26 defined in (%26 : Generator = prim::Constant[value=torch.Generator(device="cpu", seed=67280421310721)](), scope: __main__.MyModule:: # /home/myrepo/issue.py:5:0
) (type 'Generator')
```
And here I am, stuck. I would have assumed that `torch.randn` uses a Generator under the hood as well, yet not explicitly passing a Generator to `randn` allows the model to be exported successfully.
Any help with how to work around this issue is greatly appreciated!
# Expected behaviour
Model successfully exported to disk as `test.onnx`.
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-32-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] alias-free-torch==0.0.6
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] ema-pytorch==0.2.3
[pip3] numpy==1.23.5
[pip3] onnx==1.16.2
[pip3] onnxscript==0.1.0.dev20240926
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.1
[pip3] torch-stoi==0.2.1
[pip3] torchaudio==2.4.1
[pip3] torchdiffeq==0.2.4
[pip3] torchlibrosa==0.1.0
[pip3] torchmetrics==1.4.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[pip3] v-diffusion-pytorch==0.0.2
[pip3] vector-quantize-pytorch==1.17.3
[conda] numpy 1.23.5 pypi_0 pypi
``` | module: onnx,triaged | low | Critical |
2,553,188,551 | pytorch | Error when calling multiple backward passes on FSDP model | ### ๐ Describe the bug
When trying to add FSDP to our training code base, which includes a pipelining scheme, I encountered an issue when forward and backward passes are not interleaved and instead multiple backward passes directly follow each other. I was able to reproduce this in a minimal setup:
```python
import os

import torch
from torch.distributed import fsdp
import torch.multiprocessing as mp


def run(rank, world_size):
    torch.cuda.set_device(rank)
    torch.distributed.init_process_group(
        world_size=world_size,
        rank=rank,
    )

    ffn = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.Linear(10, 10))
    ffn = fsdp.FullyShardedDataParallel(ffn, device_id=rank)

    x1 = torch.rand((10, 10)).cuda()
    loss1 = ffn(x1).sum()

    x2 = torch.rand((10, 10)).cuda()
    loss2 = ffn(x2).sum()

    loss1.backward()
    # ffn._handle._needs_pre_backward_unshard = True
    loss2.backward()


if __name__ == "__main__":
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    world_size = 2
    mp.spawn(
        run,
        args=(world_size,),
        nprocs=world_size,
        join=True,
    )
```
The error I observe is:
```
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "*/pytorch/torch/multiprocessing/spawn.py", line 90, in _wrap
fn(i, *args)
File "torch_fsdp_pp.py", line 26, in run
loss2.backward()
File "*/pytorch/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "*/pytorch/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "*/pytorch/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: setStorage: sizes [10, 10], strides [10, 1], storage offset 110, and itemsize 4 requiring a storage size of 840 are out of bounds for storage of size 0
```
In my understanding, the parameters are not unsharded in the second call to `backward()`. I was able to force the unshard by manually setting `ffn._handle._needs_pre_backward_unshard = True`, as indicated in the sample code. In this case the sample passes.
Is there another way to resolve this issue? I was not able to find any indication of a solution in `torch.distributed.pipelining`.
### Versions
PyTorch version: 2.6.0a0 <- current main branch
CUDA used to build PyTorch: 12.5
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang | triaged,module: fsdp | low | Critical |
2,553,196,972 | yt-dlp | Add support for Podia | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
usa
### Example URLs
- Single video: https://fbabossacademy.podia.com/view/courses/f919ed3c-7e6e-47d0-9fb0-bc6b4f69bdb9/2443053-unit-1-getting-started/8830114-introduction-to-unit-1
### Provide a description that is worded well enough to be understood
Podia is a competitor to teachable. They allow users to upload paid video courses which end up as a list of videos you can watch and track progress.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-u', 'PRIVATE', '-vU', 'https://fbabossacademy.podia.com/view/courses/f919ed3c-7e6e-47d0-9fb0-bc6b4f69bdb9/2443053-unit-1-getting-started/8830114-introduction-to-unit-1']
Type account password and press [Return]:
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip)
[debug] Python 3.12.6 (CPython arm64 64bit) - macOS-15.0-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://fbabossacademy.podia.com/view/courses/f919ed3c-7e6e-47d0-9fb0-bc6b4f69bdb9/2443053-unit-1-getting-started/8830114-introduction-to-unit-1
[generic] 8830114-introduction-to-unit-1: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 8830114-introduction-to-unit-1: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://fbabossacademy.podia.com/view/courses/f919ed3c-7e6e-47d0-9fb0-bc6b4f69bdb9/2443053-unit-1-getting-started/8830114-introduction-to-unit-1
Traceback (most recent call last):
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/extractor/generic.py", line 2526, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://fbabossacademy.podia.com/view/courses/f919ed3c-7e6e-47d0-9fb0-bc6b4f69bdb9/2443053-unit-1-getting-started/8830114-introduction-to-unit-1
```
| site-request,account-needed,triage,can-share-account | low | Critical |
2,553,241,395 | pytorch | fused_scaled_matmul_reduce_scatter report error with channel-wise scaling | ### ๐ Describe the bug
fused_scaled_matmul_reduce_scatter works for scalar scale but not for channel-wise scale.
A minimal repro: https://gist.github.com/donglinz/9d8cb3ec7f3b6bfb6b4a7d9402c32a60
```
python gemm_ag_rs.py --task gemm-rs --tp-size 8
```
Output:
```
RuntimeError: Invalid scaling configuration. For TensorWise scaling, both scales should be scalar. For RowWise scaling, scale_a should be (1024, 1) and scale_b should be (1, 8192). Got scale_a.size()=(8192, 1) and scale_b.size()=(1, 8192)
```
Looks very much like scale_a is not being sharded accordingly.
https://github.com/pytorch/pytorch/blob/9d72f7481b5f58bb74209187c69942ec42510274/torch/distributed/_symmetric_memory/__init__.py#L581
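For context, the per-rank shapes implied by the error message can be reproduced with a plain `torch.chunk` along dim 0 (shapes are taken from the error above; this is only an illustration of the sharding, not the library API):

```python
import torch

# Shapes from the error message: the full row-wise scale for A is (8192, 1),
# but each of the 8 TP shards of A covers only 1024 rows, so the per-rank
# scale_a must be (1024, 1), while scale_b stays (1, 8192) on every rank.
tp_size = 8
scale_a = torch.rand(8192, 1)
scale_a_shards = list(torch.chunk(scale_a, tp_size, dim=0))
assert all(s.shape == (1024, 1) for s in scale_a_shards)
```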
Then I tried to shard it with this:
```
def chunk_producer(rank: int, out: torch.Tensor) -> None:
    if scale_a_shards:
        mm_out_op(shards[rank], B, scale_a=scale_a_shards[rank], **kwargs, out=out)
    else:
        mm_out_op(shards[rank], B, **kwargs, out=out)
```
However, another error occurred in deeper torch components, which looks like a buffer allocation issue.
```
RuntimeError: setStorage: sizes [8192, 1024], strides [1, 8192], storage offset 8388608, and itemsize 2 requiring a storage size of 33554432 are out of bounds for storage of size 25165824
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20240925+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4800.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-protobuf==3.5.0
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.6.0.dev20240925+cu124
[pip3] torchaudio==2.5.0.dev20240926+cu124
[pip3] torchvision==0.20.0.dev20240926+cu124
[pip3] triton==3.0.0
[conda] blas 1.0 mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] pytorch-triton 3.1.0+5fe38ffd73 pypi_0 pypi
[conda] torch 2.6.0.dev20240925+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240926+cu124 pypi_0 pypi
[conda] torchtriton 3.0.0+45fff310c8 py312 pytorch-nightly
[conda] torchvision 0.20.0.dev20240926+cu124 pypi_0 pypi
```
Could anyone please look into this? Thank you!
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,553,252,098 | godot | Animation Player FPS shows rounded integer but still internally stores float converted from Seconds | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
- Not reproducible in: v4.2.1.stable.official [b09f793f5]
### System information
Godot v4.3.stable - Ubuntu 22.04.4 LTS 22.04 - X11 - GLES3 (Compatibility) - NVIDIA GeForce GTX 860M (nvidia; 535.183.01) - Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz (8 Threads)
### Issue description
**Context**
I am working with fractional FPS for animations, since I set frame time in ms (in Aseprite) then calculate the corresponding FPS for an Animated Sprite (1000/frame duration in ms), then use a converter ([Animated Sprite to Animation Player Convertor for Godot 4.0](https://godotengine.org/asset-library/asset/1605)) from Animated Sprite to Animation Player animation. This leads to FPS such as 12.5 FPS for 80ms frames.
So far in Godot v4.2.1 I could enter either 12.5 FPS or 80ms at the bottom of the Animation Player to get the snapping I wanted:

**Issue**
Since Godot 4.3, the FPS field only allows entering an integer. Entering a float will round it and display the rounded integer. This leads to imperfect FPS such as 13 instead of 12.5, leading to snapping between frames (observe the vertical blue bar not being snapped to the start of a sprite square preview):


I need to switch back to Seconds, then enter 0.08 and then switch back to FPS to get 12.5 FPS internally:


However, note that the rounded value 13 FPS is still displayed:

although snapping shows we are really still at 12.5 FPS / 80ms:

This is confusing, and furthermore, trying to re-enter 13 FPS manually will not change the value to 13. However, entering a different value like 14, then the old value 13 again *will* force refresh to 13 instead of 12.5.
**Fix suggestion**
Revert to showing fractional FPS as before, since they are still stored internally and remain indirectly accessible via setting Seconds, although this requires an extra step for the user.
Currently there is not even an up/down arrow widget to increase/decrease FPS by 1, so there is no big advantage in showing integers anyway (and we could still add an up/down arrow for people who really want to use integers if we want to).
### Steps to reproduce
- Create an Animation Player node with a dummy animation track (the easiest to test timeline snapping is to manually increase the animation duration near the clock icon at the top-right)
- In the Animation Player panel, try to change FPS to a fractional value like 12.5 => rounded to 13
- Move the timeline vertical bar around to test snapping. Add some keyframes there to remember the positions.
- Switch to Seconds, enter 0.08, switch back to FPS => see 13 but internally it's 12.5
- Move the timeline vertical bar around to test snapping. See how it ends at different positions than the previous keyframes.
- Enter 14 FPS, confirm, then 13 again, confirm (to force value refresh).
- Move the timeline vertical bar around to test snapping. See how it's now snapping to the previous keyframes.
### Minimal reproduction project (MRP)
N/A | enhancement,topic:animation | low | Major |
2,553,264,958 | godot | centerContainer anchor preset doesn't actually center the first time you select the center option | ### Tested versions
- Reproducible in: Godot_v4.2.2-stable_win64
### System information
Windows 11 - Godot v4.2.2.stable - Forward+ - dedicated
### Issue description
Adding a centerContainer to a Control Node as a child and then setting centerContainer's layout mode to Anchors and Anchors Preset to Center will not fully center the layout. The control node is the child of another node as well.
### Steps to reproduce
1. Add a Control Node as a child of another node.
2. Add a centerContainer as a child of the Control node.
3. Set Control node anchor preset to full rect
4. set centerContainer layout mode to anchors
5. set centerContainer anchors preset to center.
You will then see that the CenterContainer is not actually centered; the way to fix this is to set its Anchors Preset to H Center Wide and then back to Center.
### Minimal reproduction project (MRP)
[main.zip](https://github.com/user-attachments/files/17166694/main.zip)
| bug,topic:editor,topic:gui | low | Minor |
2,553,294,433 | go | x/net/quic: TestUDPSourceSpecified/udp4/udp/unspec failures | ```
#!watchflakes
default <- pkg == "golang.org/x/net/quic" && test == "TestUDPSourceSpecified/udp4/udp/unspec"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737054222078650913)):
=== RUN TestUDPSourceSpecified/udp4/udp/unspec
panic: test timed out after 10m0s
running tests:
TestUDPSourceSpecified (10m0s)
TestUDPSourceSpecified/udp4/udp/unspec (10m0s)
goroutine 1742 [running]:
testing.(*M).startAlarm.func1()
/Volumes/Work/s/w/ir/x/w/goroot/src/testing/testing.go:2366 +0x30c
created by time.goFunc
...
goroutine 1770 [chan receive, 9 minutes]:
golang.org/x/net/quic.TestUDPSourceSpecified.func1(0x1400012d380, {0x140001c9a40, 0x140001c9b00, {{{0x0, 0xffff7f000001}, 0x140001180a8}, 0xc3fe}, 0x1400001f140})
/Volumes/Work/s/w/ir/x/w/targetrepo675238800/quic/udp_test.go:47 +0x174
golang.org/x/net/quic.runUDPTest.func1(0x1400012d380)
/Volumes/Work/s/w/ir/x/w/targetrepo675238800/quic/udp_test.go:180 +0x454
testing.tRunner(0x1400012d380, 0x140001184c8)
/Volumes/Work/s/w/ir/x/w/goroot/src/testing/testing.go:1689 +0xec
created by testing.(*T).Run in goroutine 1769
/Volumes/Work/s/w/ir/x/w/goroot/src/testing/testing.go:1742 +0x318
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,553,296,724 | go | x/net/http2: TestServer_Rejects_Too_Many_Streams failures | ```
#!watchflakes
default <- pkg == "golang.org/x/net/http2" && test == "TestServer_Rejects_Too_Many_Streams"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735802936484063345)):
=== RUN TestServer_Rejects_Too_Many_Streams
server_test.go:2292: got stream ID 3, want 1
panic: test timed out after 20m0s
running tests:
TestServer_Rejects_Too_Many_Streams (19m57s)
goroutine 1754 [running]:
testing.(*M).startAlarm.func1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:2456 +0x4d8
created by time.goFunc
...
goroutine 1230 [select, 19 minutes]:
golang.org/x/net/http2.(*serverConn).serve(0xc0003db880)
/home/swarming/.swarming/w/ir/x/w/targetrepo2947865698/http2/server.go:985 +0xcf0
golang.org/x/net/http2.(*Server).serveConn(0xc0005148a0, {0x6ec108, 0xc0000a9860}, 0xc0001a1f50, 0xc0001a1fb0)
/home/swarming/.swarming/w/ir/x/w/targetrepo2947865698/http2/server.go:578 +0x1400
golang.org/x/net/http2.newServerTester.func3()
/home/swarming/.swarming/w/ir/x/w/targetrepo2947865698/http2/server_test.go:204 +0x1bc
created by golang.org/x/net/http2.newServerTester in goroutine 1229
/home/swarming/.swarming/w/ir/x/w/targetrepo2947865698/http2/server_test.go:202 +0xe0c
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,553,296,757 | go | x/vulndb/internal/symbols: TestPatchedSymbols fails unless it is run inside a git repository checkout | ```
#!watchflakes
default <- pkg == "golang.org/x/vulndb/internal/symbols" && test == "TestPatchedSymbols"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735794537960806193)):
=== RUN TestPatchedSymbols
patched_functions_test.go:42: lstat testdata/module: no such file or directory
patched_functions_test.go:46: lstat testdata/fixed-module: no such file or directory
patched_functions_test.go:54: (-got, want+):
  map[symbols.symKey]bool{
+ {pkg: "golang.org/module", symbol: "Foo"}:          true,
+ {pkg: "golang.org/module/internal", symbol: "Bar"}: true,
  }
patched_functions_test.go:42: lstat testdata/module: no such file or directory
patched_functions_test.go:46: lstat testdata/fixed-module: no such file or directory
patched_functions_test.go:54: (-got, want+):
  map[symbols.symKey]bool{
+ {pkg: "golang.org/nestedmodule", file: "main_linux.go", symbol: "main"}: true,
  }
--- FAIL: TestPatchedSymbols (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,vulncheck or vulndb | low | Critical |
2,553,378,019 | go | go/types: position-independent type checking | The following go/types APIs all use Pos in semantically significant ways:
```
Scope.LookupParent(name, pos)
Scope.Contains(pos)
Scope.Innermost(pos)
CheckExpr(fset, pkg, pos, expr, info)
Eval(fset, pkg, pos)
```
As the doc comment on Innermost says, "The result is guaranteed to be valid only if the type-checked AST has complete position information." This restriction applies equally to all these operations, and should probably be made explicit for all of them.
For example, Scope.LookupParent uses the position of the reference to compute the set of declarations that are in scope. This assumes the position information is accurate, which it is for trees produced by the parser, but not for ones that have been modified by refactoring algorithms or synthesized directly. It should be possible to type-check any syntax tree correctly, even without accurate position information.
In each case, the position is used as a shorthand to indicate the portion of the environment that is accessible at a given point. It would be possible to provide parallel APIs for these 5 functions, without the restriction, that replaces pos with a different parameter that indicates the environment. For example, it could be something like an []ast.Node indicating a path from the root of the tree to the designated node.
Internally, the tree of Scopes could record the environment in a form that is independent of position, similar to my lazy type checker, which mapped each Node to a pair of a Scope and an int that represents the length of the prefix of Scope symbols that are in scope at that point. Ignoring efficiency concerns, I imagine the existing Pos-based functions could become wrappers around a function that resolves a Pos to a []Node (assuming valid pos info), followed by a call to the []Node-based API.
This would open the door to composable refactorings that mutate the tree in a sequence of passes, invalidating position info but still allowing the type checker to be reinvoked after each mutation, before finally formatting the tree to produce the updated output. There are still many open questions about how to build such refactorings: we rely heavily on pos for debugging and error reporting; re-typechecking may require additional packages that were not needed before; and so on. But it is both viable and desirable to make the type checker itself fully independent of syntax positions, so they are used only for error messages, or passed through to Object.Pos, but have no semantics.
| NeedsInvestigation | low | Critical |
2,553,389,535 | vscode | Fix warnings in default keybindings file | Windows (might not matter):

| debt,notebook | low | Minor |
2,553,394,972 | rust | Outlives requirements are not implied in the return type of `async fn`, and generally for RPIT | Split from #102682
The following doesn't compile:
```rust
trait MyTrait<T> {
async fn foo(&self) -> &T;
}
```
it gives the following:
```
error[E0311]: the parameter type `T` may not live long enough
--> <source>:2:5
|
2 | async fn foo(&self) -> &T;
| ^^^^^^^^^^^^^-^^^^^^^^^^^^
| | |
| | the parameter type `T` must be valid for the anonymous lifetime as defined here...
| ...so that the reference type `&T` does not outlive the data it points at
|
help: consider adding an explicit lifetime bound
|
2 | async fn foo<'a>(&'a self) -> &'a T where T: 'a;
| ++++ ++ ++ +++++++++++
```
of course, a normal function is fine. We just don't imply that `T: '_` here because it lowers to `impl Future<Output = &'_ T>` | A-impl-trait,C-bug,A-async-await,AsyncAwait-Triaged,T-types,A-implied-bounds | low | Critical |
2,553,397,637 | rust | Decimal formatting for some floating point numbers does not round-to-even | I have discovered a class of `f64` for which `format!("{}", f)` rounds ties up to 3, rather than down to 2 as expected. Formatting those same numbers with fixed precision correctly rounds ties to even.
Here is some code that demonstrates the discrepancy:
```
fn main() {
    let fs: [f64; 15] = [
        112171935477118.12,
        181681934355391.62,
        -581170764721946.2,
        131673329546813.62,
        711901304153173.2,
        91266877191838.12,
        18406814307348.312,
        -620793498637746.2,
        -889345962364809.2,
        -664193405104534.2,
        -157386457699813.12,
        566922914748074.2,
        223437609802779.12,
        945017102864441.2,
        32461450983719.062,
    ];
    for f in fs {
        let no_precision = format!("{}", f);
        let mut split = no_precision.split('.');
        let _int = split.next();
        let frac = split.next().unwrap();
        let precision = frac.len();
        println!("{} != {:.2$}", f, f, precision);
    }
}
```
This program prints:
```
112171935477118.13 != 112171935477118.12
181681934355391.63 != 181681934355391.62
-581170764721946.3 != -581170764721946.2
131673329546813.63 != 131673329546813.62
711901304153173.3 != 711901304153173.2
91266877191838.13 != 91266877191838.12
18406814307348.313 != 18406814307348.312
-620793498637746.3 != -620793498637746.2
-889345962364809.3 != -889345962364809.2
-664193405104534.3 != -664193405104534.2
-157386457699813.13 != -157386457699813.12
566922914748074.3 != 566922914748074.2
223437609802779.13 != 223437609802779.12
945017102864441.3 != 945017102864441.2
32461450983719.063 != 32461450983719.062
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (2bd1e894e 2024-09-26)
binary: rustc
commit-hash: 2bd1e894efde3b6be857ad345914a3b1cea51def
commit-date: 2024-09-26
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
| T-compiler,C-bug,T-libs,A-floating-point,A-fmt | low | Major |
2,553,410,384 | deno | WebGPU code works in browser but not in Deno, divergence from specification? | Version: deno 2.0.0-rc.6
The linked project
https://github.com/dezmou/SHA256-WebGPU
works in browser (latest Chrome stable on Windows), but doesn't work in Deno.
Deno fails in multiple ways:
Firstly
```
Device::create_shader_module error:
Shader '' parsing error: failed to convert expression to a concrete type: the concrete type `i32` cannot represent the abstract value `3144134277` accurately
┌─ wgsl:183:20
│
183 │ ctx.state[1] = 0xbb67ae85;
│                ^^^^^^^^^^ this expression has type {AbstractInt}
│
= note: the expression should have been converted to have i32 scalar type
00000000000000000000000000000000
```
If this is fixed by appending `u` to the end of these constants, Deno fails again with:
```
Device::create_shader_module error:
Shader validation error:
┌─ :33:3
│
33 │ fn EP0(x : u32) -> u32{return (ROTRIGHT(x,2) ^ ROTRIGHT(x,13) ^ ROTRIGHT(x,22));}
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
│ │ │ │
│ │ │ naga::Expression [2]
│ │ invalid function call
│ naga::Function [5]
00000000000000000000000000000000
```
Deno shouldn't prevent code that works in browser from working in Deno.
Test was performed by concatenating `sha256shader.js + ' \n' + sha256.js` and running:
```js
const result = await sha256("");
console.log(result);
```
| bug,upstream,webgpu | medium | Critical |
2,553,502,995 | flutter | Null check exception - pop on ShellRoute that was pushed by a StatefulShellRoute | ### What package does this bug report belong to?
go_router
### What target platforms are you seeing this bug on?
iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
https://pastebin.com/xDKJyK9q
```
</details>
### Steps to reproduce
1. Have the first page with a StatefulShellRoute
2. Have the second page with a ShellRoute
3. Make push from the first page to the second
4. Make pop from second page
### Expected results
Return to first page
### Actual results
Null check exception
### Code sample
### The code is messy because I used dartpad with example codes from GoRouter itself
<details open><summary>Code sample</summary>
```dart
https://pastebin.com/UUCQSYWT
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0 24A335 darwin-arm64, locale pt-BR)
    • Flutter version 3.24.3 on channel stable at /Users/danielmessias/fvm/versions/3.24.3
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision 2663184aa7 (2 weeks ago), 2024-09-11 16:27:48 -0500
    • Engine revision 36335019a8
    • Dart version 3.5.3
    • DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    • Android SDK at /Users/danielmessias/Library/Android/sdk
    • Platform android-34, build-tools 34.0.0
    • ANDROID_HOME = /Users/danielmessias/Library/Android/sdk
    • ANDROID_SDK_ROOT = /Users/danielmessias/Library/Android/sdk
    • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Build 15F31d
    • CocoaPods version 1.15.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.93.1)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.96.0
[✓] Connected device (5 available)
    • iPhone de Danael (mobile) • 00008101-0012781C3EE2001E • ios • iOS 18.0 22A3354
    • iPhone SE (3rd generation) (mobile) • 15C04484-8302-46BF-B695-832D37822F15 • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
    • macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335 darwin-arm64
    • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335 darwin-arm64
    • Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.70
[✓] Network resources
    • All expected network resources are available.
• No issues found!
```
</details>
| c: regression,package,a: error message,has reproducible steps,P1,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.26 | medium | Critical |
2,553,520,834 | vscode | Right side blue focus outline of chat is missing if we maximize the chat panel | 
The right side blue outline of the chat is missing if we maximize the chat panel | bug,ux,panel-chat | low | Major |
2,553,553,579 | rust | `gen fn` with lifetime issue yields nonsensical suggestion | ### Code
```rust
#![feature(gen_blocks)]
struct Value<'a>(&'a ());
struct Container<'a> {
    x: Value<'a>,
}
impl<'a> Container<'a> {
    gen fn f(&self) -> &'a Value {
        yield &self.x
    }
}
```
### Current output
```
error: lifetime may not live long enough
--> <source>:10:34
|
9 | impl<'a> Container<'a> {
| -- lifetime `'a` defined here
10 | gen fn f(&self) -> &'a Value {
| ______________-___________________^
| | |
| | let's call the lifetime of this reference `'1`
11 | | yield &self.x
12 | | }
| |_____^ method was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
|
help: consider adding 'move' keyword before the nested closure
|
10 | gen fn f(&self) -> &'a Value move {
| ++++
```
### Desired output
```
<the `move` suggestion should not be given>
```
### Rationale and extra context
[Godbolt link](https://godbolt.org/z/a4sfrzTnh)
The correct suggestion would be to suggest `&'a Value<'a>` as `f`'s return type (or no suggestion at all).
Additionally, I don't understand why this is an error in the first place? Given that `Self` contains lifetime `'a`, the elided lifetime of `&self` must clearly be outlived by `'a`? For reference, this works when directly returning `&'a Value` without a generator ([Godbolt link](https://godbolt.org/z/v8EfrqGfG)):
```rust
#![allow(unused)]
struct Value<'a>(&'a ());
struct Container<'a> {
    x: Value<'a>,
}
impl<'a> Container<'a> {
    fn f(&self) -> &'a Value {
        &self.x
    }
}
```
Tracking issue: #117078
### Other cases
_No response_
### Rust Version
```
rustc 1.83.0-nightly (2bd1e894e 2024-09-26)
binary: rustc
commit-hash: 2bd1e894efde3b6be857ad345914a3b1cea51def
commit-date: 2024-09-26
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,553,595,486 | rust | kind filters are insufficiently documented | Some of these, like `tymethod`, I don't even know what they do.
There's a list in the formal search syntax, but a lot of descriptions are missing.
The "search tricks" inline help also *seems* to have an "exhaustive" list, but is actually missing a ton of kinds.
https://doc.rust-lang.org/nightly/rustdoc/read-documentation/search.html | T-rustdoc,A-docs,A-rustdoc-search,T-rustdoc-frontend | low | Minor |
2,553,596,714 | electron | [Bug]: Cannot kill utilityProcess right after creation | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS Sonoma 14.7
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
Utility processes can be killed immediately after starting them, e.g. in error handling logic like the following:
```js
const proc = utilityProcess.fork("utility.js")
try {
  // Failing synchronous code
} catch (err) {
  proc.kill()
}
```
### Actual Behavior
`UtilityProcess.kill()` will return `false`, indicating that killing the process was not successful. The process won't be killed.
### Testcase Gist URL
https://gist.github.com/nikwen/e459685745a8f17c13c38852499d430b
### Additional Information
CC @deepak1556 | platform/macOS,bug :beetle:,has-repro-gist,component/utilityProcess,32-x-y,33-x-y | low | Critical |
2,553,624,121 | tensorflow | Crash when calling TFSMLayer object during TF_lite conversion | ### 1. System information
- Linux Ubuntu 24.04
- TF 2.17.0 (pip) and Keras 3.5.0
### 2. Code
A TF model is first exported using Keras 3.5.0:
```model.export(model_name)```
Then, when the following method is called, the conversion crashes in TF 2.17.0 (log below). It works fine with TF 2.16.2.
```
def makeQuantizedTFmodel(A, dP):
    import tensorflow as tf
    A2 = tf.cast(A, tf.float32)
    A = tf.data.Dataset.from_tensor_slices((A2)).batch(1)
    def representative_dataset_gen():
        for input_value in A.take(100):
            yield [input_value]
    import keras
    model = keras.layers.TFSMLayer(model_name, call_endpoint='serve')
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_quant_model = converter.convert()
    with open(model_name+'.tflite', 'wb') as o:
        o.write(tflite_quant_model)
```
The error with TF 2.17.0:
```
Traceback (most recent call last):
File "/home/nicola/test/DML/DataML.py", line 1008, in <module>
sys.exit(main())
^^^^^^
File "/home/nicola/test/DML/DataML.py", line 160, in main
train(sys.argv[2], None, None)
File "/home/nicola/test/DML/DataML.py", line 398, in train
makeQuantizedTFmodel(A, dP)
File "/home/nicola/test/DML/libDataML.py", line 231, in makeQuantizedTFmodel
tflite_quant_model = converter.convert()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/lite.py", line 1231, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/lite.py", line 1183, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/lite.py", line 1749, in convert
self._freeze_keras_model()
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/convert_phase.py", line 215, in wrapper
raise error from None # Re-throws the exception.
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/lite.py", line 1690, in _freeze_keras_model
input_signature = _model_input_signature(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/tensorflow/lite/python/tflite_keras_util.py", line 119, in model_input_signature
input_specs = model._get_save_spec( # pylint: disable=protected-access
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'TFSMLayer' object has no attribute '_get_save_spec'. Did you mean: '_set_save_spec'?
```
| stat:awaiting tensorflower,comp:lite,TFLiteConverter,2.17 | low | Critical |
2,553,647,015 | rust | Stack overflow with `clashing_extern_declarations` | ### Code
```
#![warn(clashing_extern_declarations)]
#[repr(C)]
struct A<T> {
    a: *const A<A<T>>,
    t: T,
}
#[repr(C)]
struct B<T> {
    b: *const B<B<T>>,
    t: T,
}
pub mod x {
    extern "C" {
        pub fn bar(_: super::A<i32>);
    }
}
pub mod y {
    extern "C" {
        pub fn bar(_: super::B<i32>);
        //~^ WARN `bar` redeclared with a different signature
    }
}
```
### Affected release channels
- [ ] Previous Stable
- [ ] Current Stable
- [ ] Current Beta
- [ ] Current Nightly
### Rust Version
stable and nightly
### Current error output
Not relevant
### Backtrace
```
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x32b0323)[0x7f20888aa323]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7f20854a9420]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(_RNvMs3_NtNtCsfYN5AaPO7EC_12rustc_middle2ty7contextNtB5_13CtxtInterners9intern_ty+0x62)[0x7f2086a16112]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x48300d3)[0x7f2089e2a0d3]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x482ff4e)[0x7f2089e29f4e]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x483016a)[0x7f2089e2a16a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(_RNvMsL_NtCsfYN5AaPO7EC_12rustc_middle2tyNtB5_8FieldDef2ty+0x14a)[0x7f2089e285ca]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb949)[0x7f208a5d5949]
### cycle encountered after 8 frames with period 10
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
### recursed 24 times
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdbce9)[0x7f208a5d5ce9]
/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so(+0x4fdb98a)[0x7f208a5d598a]
note: rustc unexpectedly overflowed its stack! this is a bug
```
### Anything else?
_No response_ | I-crash,A-lints,T-compiler,C-bug | low | Critical |
2,553,665,143 | go | cmd/go: TestScript/cgo_long_cmd failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/cgo_long_cmd"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735615552937511233)):
```
=== RUN   TestScript/cgo_long_cmd
=== PAUSE TestScript/cgo_long_cmd
=== CONT  TestScript/cgo_long_cmd
script_test.go:139: 2024-09-27T19:29:36Z
script_test.go:141: $WORK=/home/swarming/.swarming/w/ir/x/t/cmd-go-test-3329550913/tmpdir263970715/cgo_long_cmd2600984664
script_test.go:163:
PATH=/home/swarming/.swarming/w/ir/x/t/cmd-go-test-3329550913/tmpdir263970715/testbin:/home/swarming/.swarming/w/ir/x/w/goroot/bin:/home/swarming/.swarming/w/ir/x/w/goroot/bin:/home/swarming/.swarming/w/ir/x/w/goroot/bin:/home/swarming/.swarming/w/ir/cache/tools/bin:/home/swarming/.swarming/w/ir/bbagent_utility_packages:/home/swarming/.swarming/w/ir/bbagent_utility_packages/bin:/home/swarming/.swarming/w/ir/cipd_bin_packages:/home/swarming/.swarming/w/ir/cipd_bin_packages/bin:/home/swarming/.swarming/w/ir/cache/cipd_client:/home/swarming/.swarming/w/ir/cache/cipd_client/bin:/home/swarming/.swarming/cipd_cache/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOME=/no-home
CCACHE_DISABLE=1
GOARCH=ppc64le
...
[condition not met]
# Generate a file with a very long #cgo LDFLAGS line.
# This used to cause "go build" to fail with "argument list too long". (0.336s)
> go generate
# Build with the generated file. (0.281s)
> go build
[stderr]
cgolongcmd: invalid flag in #cgo LDFLAGS:
script_test.go:163: FAIL: testdata/script/cgo_long_cmd.txt:12: go build: exit status 1
--- FAIL: TestScript/cgo_long_cmd (0.62s)
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,553,672,622 | rust | unrelated trait bound being reported as not satisfied | ### Code
```rust
trait Foo {
type Assoc;
}
trait Bar {}
impl<T> Foo for T where T: Bar {
type Assoc = ();
}
struct Test {
field: <() as Foo>::Assoc,
}
```
### Current output
```
error[E0277]: the trait bound `(): Bar` is not satisfied
--> src/lib.rs:12:12
|
12 | field: <() as Foo>::Assoc,
| ^^^^^^^^^^^^^^^^^^ the trait `Bar` is not implemented for `()`, which is required by `(): Foo`
|
help: this trait has no implementations, consider adding one
--> src/lib.rs:5:1
|
5 | trait Bar {}
| ^^^^^^^^^
note: required for `()` to implement `Foo`
--> src/lib.rs:7:9
|
7 | impl<T> Foo for T where T: Bar {
| ^^^ ^ --- unsatisfied trait bound introduced here
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (lib) due to 1 previous error
```
### Desired output
```
error[E0277]: the trait bound `(): Foo` is not satisfied
--> src/lib.rs:12:12
|
12 | field: <() as Foo>::Assoc,
| ^^^^^^^^^^^^^^^^^^ the trait `Foo` is not implemented for `()`
|
help: this trait has no implementations, consider adding one
--> src/lib.rs:5:1
|
5 | trait Foo {
| ^^^^^^^^^
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (lib) due to 1 previous error
```
### Rationale and extra context
The problem is exceptionally bad when the error arises in macro-generated code, where the `as Foo` cannot be seen in the diagnostic. The real trait bound is also hidden further when using `diagnostic::on_unimplemented`:
```rust
#[diagnostic::on_unimplemented(message = "foo", label = "not foo")]
trait Foo {
type Assoc;
}
#[diagnostic::on_unimplemented(message = "bar", label = "not bar")]
trait Bar {}
impl<T> Foo for T where T: Bar {
type Assoc = ();
}
struct Test {
field: <() as Foo>::Assoc,
}
```
gives:
```
error[E0277]: bar
--> src/lib.rs:14:12
|
14 | field: <() as Foo>::Assoc,
| ^^^^^^^^^^^^^^^^^^ not bar
|
= help: the trait `Bar` is not implemented for `()`, which is required by `(): Foo`
help: this trait has no implementations, consider adding one
--> src/lib.rs:7:1
|
7 | trait Bar {}
| ^^^^^^^^^
note: required for `()` to implement `Foo`
--> src/lib.rs:9:9
|
9 | impl<T> Foo for T where T: Bar {
| ^^^ ^ --- unsatisfied trait bound introduced here
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (lib) due to 1 previous error
```
### Other cases
_No response_
### Rust Version
latest stable and nightly, but noticed in my own machine using the following version:
rustc 1.83.0-nightly (6c6d21008 2024-09-22)
binary: rustc
commit-hash: 6c6d210089e4589afee37271862b9f88ba1d7755
commit-date: 2024-09-22
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,553,676,132 | godot | `AudioServer.get_bus_peak_volume_left_db()` does not work in Web export for samples | ### Tested versions
Issue is present in v4.3.stable.official [77dcf97d8]
Issue was not present in v4.2.2.stable.official [15073afe3]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 31.0.15.3623) - AMD Ryzen 5 3600X 6-Core Processor (12 Threads)
### Issue description
This is a video of the minimal reproducible example working in-editor:
https://github.com/user-attachments/assets/0fbdd409-2593-44a7-9fb7-70e0c28fdf98
The following is a photo, but the video would look exactly the same since the value does not update when exported to Web.

### Steps to reproduce
Export to web. Doesn't seem to matter which rendering mode is used.
### Minimal reproduction project (MRP)
This project uses three nodes:
- A sound playing on a loop
- A label updating with the value of `AudioServer.get_bus_peak_volume_left_db(0,0)`
- A circle redrawing with a size depending on the value of `AudioServer.get_bus_peak_volume_left_db(0,0)`
[repro.zip](https://github.com/user-attachments/files/17169218/repro.zip)
| bug,platform:web,topic:audio | low | Minor |
2,553,714,177 | vscode | source control Revert(>)/Stage(+) change buttons no longer work, or are not aligned to the changed block |
Type: <b>Bug</b>
When using Source Control to view unstaged changes, the Revert(>)/Stage(+) change buttons no longer work, or are not aligned to the changed block.
I used to be able to revert a change with the > button.
Now, nothing happens when I click the > button.
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22635
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz (8 x 3302)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.84GB (5.99GB free)|
|Process Argv|--crash-reporter-id 23e2bccf-b91f-4850-af1e-24dd3f1afe30|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (29)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-sidebar|Acr|1.2.3
Bookmarks|ale|13.5.0
LinkCheckMD|bla|0.3.1
vscode-markdownlint|Dav|0.56.0
docs-article-templates|doc|1.0.7
docs-authoring-pack|doc|1.0.2
docs-images|doc|1.0.4
docs-linting|doc|0.0.13
docs-markdown|doc|1.0.11
docs-metadata|doc|1.0.9
docs-preview|doc|1.0.9
docs-scaffolding|doc|1.0.8
docs-visual-areas|doc|0.2.1
docs-yaml|doc|1.0.5
copilot|Git|1.234.0
copilot-chat|Git|0.20.3
Learn-Training-AI-Assistant|Lea|1.1.50
vscode-azurefunctions|ms-|1.15.4
vscode-azureresourcegroups|ms-|0.9.5
vscode-dotnet-runtime|ms-|2.1.6
data-workspace-vscode|ms-|0.5.0
mssql|ms-|1.24.0
sql-bindings-vscode|ms-|0.4.0
sql-database-projects-vscode|ms-|1.4.3
remote-wsl|ms-|0.88.4
azure-account|ms-|0.12.0
powershell|ms-|2024.2.2
vscode-yaml|red|1.15.0
code-spell-checker|str|3.0.1
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
0ee40948:31013168
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
nativeloc2:31134642
wkspc-ranged-t:31125599
fje88620:31121564
iacca1:31138162
notype1cf:31143046
```
</details>
<!-- generated by issue reporter --> | bug,diff-editor | low | Critical |
2,553,723,481 | ui | [bug]: npx shadcn init doesn't work when variables are in tailwind.config.js | ### Describe the bug
When running `npx shadcn@latest` in a project which has variables within the tailwind.config.js theme.extend, it fails to run with:
```Error replacing tree: The children of the old and new trees were expected to have the same count (8:21).```
There's an issue with ts-morph.
E.g. with:
```js
const defaultTheme = require('tailwindcss/defaultTheme');

export default {
  content: ['./src/**/*.{astro,html,js,jsx,md,mdx,svelte,ts,tsx,vue}'],
  theme: {
    extend: {
      fontFamily: {
        sans: ['Plus Jakarta Sans Variable', ...defaultTheme.fontFamily.sans],
        mono: ['Silkscreen', ...defaultTheme.fontFamily.sans]
      }
    }
  },
  plugins: [require('tailwindcss-motion')]
};
```
The `...defaultTheme` spread is what causes it to fail. I imagine the parser isn't accounting for variables referenced inside the config, but I think it's quite a common setup.
### Affected component/components
N/A
### How to reproduce
1. Use above config
2. Run npx shadcn init
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS / Warp terminal / Cursor
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,553,742,624 | rust | Unhelpful error message "ambiguous lifetime bound, explicit lifetime bound required" | ### Code
```rs
trait MyTrait<'a, 'b> where Self: 'a + 'b {}
pub struct Foo<'a, 'b> {
expr: Box<dyn MyTrait<'a, 'b>>,
}
// Fix:
pub struct Bar<'all, 'a, 'b> where 'all: 'a, 'all: 'b {
expr: Box<dyn MyTrait<'a, 'b> + 'all>,
}
```
### Current output
```
error[E0227]: ambiguous lifetime bound, explicit lifetime bound required
--> src/lib.rs:4:15
|
4 | expr: Box<dyn MyTrait<'a, 'b>>,
| ^^^^^^^^^^^^^^^^^^^
```
For more information about this error, try `rustc --explain E0227`.
### Desired output
_No response_
### Rationale and extra context
_No response_
### Other cases
_No response_
### Rust Version
```
$ rustc -Vv
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
### Anything else?
_No response_ | A-diagnostics,A-lifetimes,T-compiler,D-terse,A-trait-objects | low | Critical |
2,553,744,483 | yt-dlp | Can't download whole youtube channel saved on waybackmachine (web archive) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Poland
### Provide a description that is worded well enough to be understood
I am trying to download from this page (the whole channel): https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos
Whenever I try, it just says the URL is unsupported. For example, if I try to download only a single video from this saved channel, it downloads without issue (even though it's the same website): https://web.archive.org/web/20230617094121/https://www.youtube.com/watch?v=h1Iany8Etas
Is there anything I can do to fix it? Why does it happen?
Thanks in advance
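For illustration only (this is a simplified, assumed pattern, not yt-dlp's actual extractor regex): a Wayback-YouTube URL matcher that targets single watch pages would accept the saved video URL but not the channel/tab URL, which then falls through to the generic extractor and fails.

```python
import re

# Simplified stand-in pattern (assumption for illustration; not yt-dlp's
# actual regex): match archived single-video watch pages only.
PATTERN = re.compile(
    r'https?://web\.archive\.org/web/\d+/'
    r'https?://(?:www\.)?youtube\.com/watch\?v=[\w-]{11}'
)

video = 'https://web.archive.org/web/20230617094121/https://www.youtube.com/watch?v=h1Iany8Etas'
channel = 'https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos'

print(bool(PATTERN.match(video)))    # the single-video URL is recognized
print(bool(PATTERN.match(channel)))  # the channel URL is not, so it falls back
```

This would explain why the single saved video downloads fine while the channel page is reported as unsupported.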
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos
[debug] Command-line config: ['-vU', 'https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos']
[debug] Encodings: locale cp1250, fs utf-8, pref cp1250, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip)
[debug] Python 3.12.3 (CPython AMD64 64bit) - Windows-11-10.0.22631-SP0 (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-essentials_build-www.gyan.dev (setts), ffprobe 7.0.2-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Plugin directories: ['C:\\Users\\xdekhckr\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\yt_dlp_plugins']
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos
[generic] videos: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] videos: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos
Traceback (most recent call last):
File "C:\Users\xdekhckr\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xdekhckr\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "C:\Users\xdekhckr\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xdekhckr\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\generic.py", line 2526, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://web.archive.org/web/20230617094041/https://www.youtube.com/@piotr.f1267/videos
```
| site-enhancement,triage | low | Critical |
2,553,751,690 | electron | [Feature Request]: Calling `UtilityProcess.kill()` on a killed process should return `true` | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS Sonoma 14.7
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
If I call `UtilityProcess.kill()` on a process that has already been killed, `kill()` should return `true`, indicating that the process is gone.
### Actual Behavior
If I call `UtilityProcess.kill()` on a process that has already been killed, it returns `false`, indicating that killing it failed.
### Testcase Gist URL
https://gist.github.com/e29f65a5a72296f1bff6a72d2c5b00bf
### Additional Information
#### Why this is important
In some places in your code, you might not know whether your utility process is still alive.
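As a point of comparison (an analogy, not Electron API behavior): on Python 3.9+, the stdlib `subprocess` module treats a second kill of an already-exited child as a harmless no-op, which is the "already gone counts as success" semantics this request asks for.

```python
import subprocess
import sys

# Spawn a short-lived child and let it exit.
proc = subprocess.Popen([sys.executable, "-c", "pass"])
proc.wait()

# Popen.kill() is a silent no-op once the child has been reaped (Python 3.9+),
# rather than reporting failure for a process that is simply gone.
proc.kill()
print("second kill tolerated, returncode:", proc.returncode)
```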
It would be helpful to get confirmation whether it's gone when we call `kill()` a second time. | enhancement :sparkles:,platform/macOS,component/utilityProcess,32-x-y | low | Critical |
2,553,761,936 | pytorch | emulate_precision_casts not implemented for cpu | ### ๐ Describe the bug
```
(/home/ezyang/local/c/pytorch-env) [ezyang@devgpu005.nha1 ~/local/c/pytorch (a55aa71b)]$ python t.py
tensor([9.7422], dtype=torch.float16) tensor([9.7344], dtype=torch.float16)
(/home/ezyang/local/c/pytorch-env) [ezyang@devgpu005.nha1 ~/local/c/pytorch (a55aa71b)]$ cat t.py
import torch
import torch._inductor.config
torch._inductor.config.emulate_precision_casts = True
def fn(x, y):
return x * y.to(dtype=torch.float16)
fn_opt = torch._dynamo.optimize("inductor")(fn)
x = torch.tensor([9.734375], dtype=torch.float16)
y = torch.tensor(1.00048828125, dtype=torch.float32)
print(fn_opt(x, y), fn(x,y))
```
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,553,775,109 | pytorch | [export] Failed to save the model using torch.export.save | ### ๐ Describe the bug
We are facing this issue when saving a Torch-TensorRT compiled module using `torch.export.save()`. Here's a link to our exporter code: https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/_exporter.py
```py
import torch
from torch.export import Dim
import torch.nn as nn
import torch_tensorrt as torchtrt
import os
import tempfile
class bitwise_and(nn.Module):
def forward(self, lhs_val, rhs_val):
return torch.ops.aten.bitwise_and.Tensor(lhs_val, rhs_val)
dyn_dim = Dim("dyn_dim", min=3, max=6)
lhs = torch.randint(0, 2, (2, 4, 2), dtype=bool, device="cuda")
rhs = torch.randint(0, 2, (4, 2), dtype=bool, device="cuda")
inputs = (lhs, rhs)
torchtrt_inputs = [torchtrt.Input(shape=lhs.shape, dtype=torch.bool),
torchtrt.Input(shape=rhs.shape, dtype=torch.bool)]
mod = bitwise_and()
fx_mod=torch.export.export(mod, inputs, dynamic_shapes={"lhs_val": {1: dyn_dim}, "rhs_val": {0: dyn_dim}})
print(f"lan added fx_mod={fx_mod}")
trt_model = torchtrt.dynamo.compile(fx_mod, inputs=inputs, enable_precisions={torch.bool}, min_block_size=1)
trt_ep_path = os.path.join(tempfile.gettempdir(), "trt.ep")
lhs1 = torch.randint(0, 2, (2, 5, 2), dtype=bool, device="cuda")
rhs1 = torch.randint(0, 2, (5, 2), dtype=bool, device="cuda")
torchtrt.save(trt_model, trt_ep_path, inputs=[lhs1, rhs1])
print(f"lan added saved model to {trt_ep_path}")
loaded_trt_module = torch.export.load(trt_ep_path)
print(f"lan added load model from {trt_ep_path}")
output = loaded_trt_module(lhs1, rhs1)
print(f"lan added got {output=}")
```
Note: I'm not sure if this is a problem, but there's a warning that shows up
```py
WARNING:py.warnings:/home/dperi/Downloads/TensorRT/py/torch_tensorrt/dynamo/_exporter.py:370: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
engine_node = gm.graph.get_attr(engine_name)
WARNING:py.warnings:/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/graph.py:1586: UserWarning: Node _run_on_acc_0_engine target _run_on_acc_0_engine _run_on_acc_0_engine of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
```
The way we set TRT engines as a `getattr` is here: https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/_exporter.py#L368-L370
Here's the full error message:
```py
W0927 14:39:57.743000 2685898 site-packages/torch/fx/experimental/symbolic_shapes.py:5257] failed during evaluate_expr(s0 >= 0, hint=True, size_oblivious=False, forcing_spec=False
E0927 14:39:57.744000 2685898 site-packages/torch/fx/experimental/recording.py:298] failed while running evaluate_expr(*(s0 >= 0, True), **{'fx_node': False})
Traceback (most recent call last):
File "/home/dperi/Downloads/TensorRT/test.py", line 32, in <module>
loaded_trt_module = torch.export.load(trt_ep_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/__init__.py", line 569, in load
ep = deserialize(artifact, expected_opset_version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 2445, in deserialize
.deserialize(
^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 2324, in deserialize
.deserialize(
^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1908, in deserialize
self.deserialize_graph(serialized_graph_module.graph)
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1614, in deserialize_graph
meta_val = self.deserialize_tensor_meta(tensor_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1581, in deserialize_tensor_meta
torch.empty_strided(
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1342, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 2012, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 176, in constructors
r = func(*args, **new_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_ops.py", line 720, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 479, in expect_size
r = b.expect_true(file, line)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 465, in expect_true
return self.guard_bool(file, line)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 449, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5255, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, size_oblivious, forcing_spec=forcing_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5331, in _evaluate_expr
static_expr = self._maybe_evaluate_static(expr,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1738, in wrapper
return fn_cache(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4686, in _maybe_evaluate_static
r = _maybe_evaluate_static_worker(expr, symbol_info, unbacked_only, size_oblivious)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1554, in _maybe_evaluate_static_worker
lower = vr.lower
^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'lower'
WARNING:py.warnings:/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/tempfile.py:895: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpg514wz8s'>
_warnings.warn(warn_message, ResourceWarning)
```
cc: @angelayi
### Versions
[pip3] torch==2.6.0.dev20240925+cu124
[pip3] torch_tensorrt==2.6.0.dev0+43eb56053
[pip3] torchmetrics==1.4.0.post0
[pip3] torchprofile==0.0.4
[pip3] torchsurgeon==0.1.2
[pip3] torchvision==0.20.0.dev20240927+cu124
[pip3] triton==3.0.0
cc @ezyang @chauhang @penguinwu @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,export-triaged,oncall: export | low | Critical |
2,553,796,992 | flutter | No logic in `createRenderObject()` | Considering the test class from #155699:
```dart
class ThemedCard extends SingleChildRenderObjectWidget {
const ThemedCard({super.key}) : super(child: const SizedBox.expand());
@override
RenderPhysicalShape createRenderObject(BuildContext context) {
final CardThemeData cardTheme = CardTheme.of(context).data;
return RenderPhysicalShape(
clipper: ShapeBorderClipper(shape: cardTheme.shape ?? const RoundedRectangleBorder()),
clipBehavior: cardTheme.clipBehavior ?? Clip.antiAlias,
color: cardTheme.color ?? Colors.white,
elevation: cardTheme.elevation ?? 0.0,
shadowColor: cardTheme.shadowColor ?? Colors.black,
);
}
@override
void updateRenderObject(BuildContext context, RenderPhysicalShape renderObject) {
final CardThemeData cardTheme = CardTheme.of(context).data;
renderObject
..clipper = ShapeBorderClipper(shape: cardTheme.shape ?? const RoundedRectangleBorder())
..clipBehavior = cardTheme.clipBehavior ?? Clip.antiAlias
..color = cardTheme.color ?? Colors.white
..elevation = cardTheme.elevation ?? 0.0
..shadowColor = cardTheme.shadowColor ?? Colors.black;
}
}
```
It's pretty great having the ability to hook up a `RenderObjectWidget` straight to an `InheritedWidget`, but having a single source of truth would help to mitigate future bugs.
<br>
## Proposal: make `RenderObject` constructor arguments optional
```dart
class ThemedCard extends PhysicalShape {
  const ThemedCard({super.key});
@override
void updateRenderObject(BuildContext context, RenderPhysicalShape renderObject) {
// single source of truth!
}
}
```
I should probably reiterate that this isn't strictly necessary for us to do, but it'd still be wonderful. | framework,P3,team-framework,triaged-framework | low | Critical |
2,553,820,468 | rust | add std and core to nightly-rustc docs | [the rustc internal api docs](https://doc.rust-lang.org/nightly/nightly-rustc/) differ from the main `std` docs in two major ways:
1. documenting different crates
2. showing private items
being able to see std-internal items would be quite handy for anyone developing the standard library. | C-feature-request,C-discussion | low | Minor |
2,553,831,826 | rust | ICE: `Failed to normalize Alias(Opaque, AliasTy {` | <!--
Thank you for finding an Internal Compiler Error! ๐ง If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
mod impl_trait_mod {
use super::*;
pub type OpaqueBlock = impl Trait;
pub type OpaqueIf = impl Trait;
pub struct BlockWrapper(OpaqueBlock);
pub struct IfWrapper(pub OpaqueIf);
pub fn if_impl() -> Parser<OpaqueIf> {
bind(option(block()), |_| block())
}
}
use impl_trait_mod::*;
pub trait Trait {
type Assoc;
}
pub struct Parser<P>(P);
pub struct Bind<P, F>(P, F);
impl<P, F> Trait for Bind<P, F> { type Assoc = (); }
impl Trait for BlockWrapper { type Assoc = (); }
impl Trait for IfWrapper { type Assoc = (); }
pub fn block() -> Parser<BlockWrapper> {
loop {}
}
pub fn option<P: Trait>(arg: Parser<P>) -> Parser<impl Trait> {
bind(arg, |_| block())
}
fn bind<P: Trait, P2, F: Fn(P::Assoc) -> Parser<P2>>(_: Parser<P>, _: F) -> Parser<Bind<P, F>>
{ loop {} }
fn main() {
if_impl().0;
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-pc-windows-msvc
release: 1.81.0
LLVM version: 18.1.7
```
Both nightly and beta versions produced the same result as stable (as far as I can tell).
### Error output
```
error[E0658]: `impl Trait` in type aliases is unstable
--> src/main.rs:3:28
|
3 | pub type OpaqueBlock = impl Trait;
| ^^^^^^^^^^
|
= note: see issue #63063 <https://github.com/rust-lang/rust/issues/63063> for more information
error[E0658]: `impl Trait` in type aliases is unstable
--> src/main.rs:4:25
|
4 | pub type OpaqueIf = impl Trait;
| ^^^^^^^^^^
|
= note: see issue #63063 <https://github.com/rust-lang/rust/issues/63063> for more information
error: unconstrained opaque type
--> src/main.rs:3:28
|
3 | pub type OpaqueBlock = impl Trait;
| ^^^^^^^^^^
|
= note: `OpaqueBlock` must be used in combination with a concrete type within the same module
error: internal compiler error: compiler\rustc_middle\src\ty\normalize_erasing_regions.rs:168:90: Failed to normalize Alias(Opaque, AliasTy { args: [], def_id: DefId(0:46 ~ rustc_hang[6e18]::impl_trait_mod::OpaqueIf::{opaque#0}) }), maybe try to call `try_normalize_erasing_regions` instead
[...backtrace...]
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-pc-windows-msvc
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [mir_drops_elaborated_and_const_checked] elaborating drops for `main`
#1 [analysis] running analysis passes on this crate
end of query stack
For more information about this error, try `rustc --explain E0658`.
error: could not compile `rustc-hang` (bin "rustc-hang") due to 3 previous errors
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler\rustc_middle\src\ty\normalize_erasing_regions.rs:168:90:
Box<dyn Any>
stack backtrace:
0: 0x7ff880eb408d - std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: 0x7ff880eb408d - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ff880eb408d - std::sys::backtrace::_print_fmt
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\sys\backtrace.rs:65
3: 0x7ff880eb408d - std::sys::backtrace::impl$0::print::impl$0::fmt
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\sys\backtrace.rs:40
4: 0x7ff880ee4bb9 - core::fmt::rt::Argument::fmt
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\core\src\fmt\rt.rs:173
5: 0x7ff880ee4bb9 - core::fmt::write
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\core\src\fmt\mod.rs:1182
6: 0x7ff880eaab71 - std::io::Write::write_fmt<std::sys::pal::windows::stdio::Stderr>
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\io\mod.rs:1827
7: 0x7ff880eb7127 - std::panicking::default_hook::closure$1
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\panicking.rs:269
8: 0x7ff880eb6d19 - std::panicking::default_hook
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\panicking.rs:296
9: 0x7ff8770d2ed5 - memchr
10: 0x7ff880eb796b - alloc::boxed::impl$50::call
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2084
11: 0x7ff880eb796b - std::panicking::rust_panic_with_hook
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\panicking.rs:808
12: 0x7ff8786c0ea3 - <rustc_hir_pretty[ded6b9e63cf9049a]::State>::print_variant
13: 0x7ff8786b34f9 - <rustc_hir_pretty[ded6b9e63cf9049a]::State>::print_variant
14: 0x7ff8786acad9 - <rustc_hir_pretty[ded6b9e63cf9049a]::State>::print_variant
15: 0x7ff8786cd105 - <rustc_errors[cac40cfc73911857]::diagnostic::BugAbort as rustc_errors[cac40cfc73911857]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
16: 0x7ff8785de6a2 - rustc_middle[1ecd3c849d31efbc]::util::bug::bug_fmt
17: 0x7ff8785bea6d - rustc_middle[1ecd3c849d31efbc]::ty::consts::const_param_default
18: 0x7ff8785be8ad - rustc_middle[1ecd3c849d31efbc]::ty::consts::const_param_default
19: 0x7ff8785de5a2 - rustc_middle[1ecd3c849d31efbc]::util::bug::bug_fmt
20: 0x7ff876f0b9f1 - <rustc_middle[1ecd3c849d31efbc]::ty::normalize_erasing_regions::NormalizeAfterErasingRegionsFolder as rustc_type_ir[4ff8edeed296e029]::fold::TypeFolder<rustc_middle[1ecd3c849d31efbc]::ty::context::TyCtxt>>::fold_ty
21: 0x7ff876007a71 - rustc_monomorphize[9337c2a47ab967d8]::polymorphize::unused_generic_params
22: 0x7ff87610283f - <rustc_mir_transform[9cb31845a5a5293]::elaborate_drops::ElaborateDrops as rustc_middle[1ecd3c849d31efbc]::mir::MirPass>::run_pass
23: 0x7ff87602a36c - <rustc_mir_transform[9cb31845a5a5293]::simplify::SimplifyCfg as rustc_middle[1ecd3c849d31efbc]::mir::MirPass>::run_pass
24: 0x7ff8760e7965 - rustc_mir_transform[9cb31845a5a5293]::mir_drops_elaborated_and_const_checked
25: 0x7ff87674ea7b - rustc_query_impl[6b8c53a45d99a773]::plumbing::query_key_hash_verify_all
26: 0x7ff87669ac49 - rustc_ty_utils[3966f556ce2470bb]::ty::self_ty_of_trait_impl_enabling_order_dep_trait_object_hack
27: 0x7ff876752e7b - rustc_query_impl[6b8c53a45d99a773]::plumbing::query_key_hash_verify_all
28: 0x7ff875c89c15 - rustc_interface[ce19ee65e5c643b1]::passes::analysis
29: 0x7ff875881e4b - rustc_ty_utils[3966f556ce2470bb]::ty::adt_sized_constraint
30: 0x7ff8757f3ea5 - rustc_ty_utils[3966f556ce2470bb]::ty::adt_sized_constraint
31: 0x7ff87588a9f3 - rustc_query_impl[6b8c53a45d99a773]::query_system
32: 0x7ff872ee6a70 - _wpgmptr
33: 0x7ff872ee2e66 - _wpgmptr
34: 0x7ff872eec1ab - _wpgmptr
35: 0x7ff880ec8f0d - alloc::boxed::impl$48::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2070
36: 0x7ff880ec8f0d - alloc::boxed::impl$48::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2070
37: 0x7ff880ec8f0d - std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\sys\pal\windows\thread.rs:58
38: 0x7ff8f5d97374 - BaseThreadInitThunk
39: 0x7ff8f733cc91 - RtlUserThreadStart
```
</p>
</details>
Related: https://github.com/rust-lang/rust/issues/127353 also produces an ICE on stable using features which are supposed to be locked behind `type_alias_impl_trait`. | I-ICE,T-compiler,C-bug,S-bug-has-test | low | Critical |
2,553,841,629 | rust | impl-trait-overcaptures cannot be applied, missing parens | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
`rustc ./tests/ui/impl-trait/dyn-trait-elided-two-inputs-ref-assoc.rs --force-warn impl-trait-overcaptures`
```rust
// Test that we don't get an error with `dyn Bar` in an impl Trait
// when there are multiple inputs. The `dyn Bar` should default to `+
// 'static`. This used to erroneously generate an error (cc #62517).
//
//@ revisions: current next
//@[next] compile-flags: -Znext-solver
//@ ignore-compare-mode-next-solver (explicit revisions)
//@ check-pass
trait Foo {
type Item: ?Sized;
fn item(&self) -> Box<Self::Item> { panic!() }
}
trait Bar { }
impl<T> Foo for T {
type Item = dyn Bar;
}
fn is_static<T>(_: T) where T: 'static { }
fn bar(x: &str) -> &impl Foo<Item = dyn Bar> { &() }
fn main() {
let s = format!("foo");
let r = bar(&s);
is_static(r.item());
}
```
```
warning: `impl Foo<Item = (dyn Bar + 'static)>` will capture more lifetimes than possibly intended in edition 2024
--> ./tests/ui/impl-trait/dyn-trait-elided-two-inputs-ref-assoc.rs:24:21
|
24 | fn bar(x: &str) -> &impl Foo<Item = dyn Bar> { &() }
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
= warning: this changes meaning in Rust 2024
= note: for more information, see <https://doc.rust-lang.org/nightly/edition-guide/rust-2024/rpit-lifetime-capture.html>
note: specifically, this lifetime is in scope but not mentioned in the type's bounds
--> ./tests/ui/impl-trait/dyn-trait-elided-two-inputs-ref-assoc.rs:24:11
|
24 | fn bar(x: &str) -> &impl Foo<Item = dyn Bar> { &() }
| ^
= note: all lifetimes in scope will be captured by `impl Trait`s in edition 2024
= note: requested on the command line with `--force-warn impl-trait-overcaptures`
help: use the precise capturing `use<...>` syntax to make the captures explicit
|
24 | fn bar(x: &str) -> &impl Foo<Item = dyn Bar> + use<> { &() }
| +++++++
```
This does not build:
```
error: ambiguous `+` in a type
--> ./tests/ui/impl-trait/dyn-trait-elided-two-inputs-ref-assoc.rs:24:21
|
24 | fn bar(x: &str) -> &impl Foo<Item = dyn Bar> + use<> { &() }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: try adding parentheses
|
24 | fn bar(x: &str) -> &(impl Foo<Item = dyn Bar> + use<>) { &() }
| + +
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (2bd1e894e 2024-09-26)
binary: rustc
commit-hash: 2bd1e894efde3b6be857ad345914a3b1cea51def
commit-date: 2024-09-26
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
``` | A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut,D-invalid-suggestion | low | Critical |
2,553,842,087 | PowerToys | Add Shortcut Feature | ### Description of the new feature / enhancement
# Adding Shortcuts
I'd like to request a feature that lets users add shortcuts to the different PowerToys tools, for example on the taskbar, in the Start menu, or on the desktop.

**The previous image is only an example and does not actually work: every time I click one of these "shortcuts", it just plays the launching animation (pops in and out) and nothing else happens.**
### Scenario when this would be used?
As a power user who works a lot with Color Picker and PowerToys Run, this would be extremely useful. Having them on my taskbar would let me launch them straight away without having to memorize a specific key combination.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,553,851,401 | rust | edition-2024-expr-fragment-specifier requires feature to work (on stable...) | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
#![allow(irrefutable_let_patterns)]
#![warn(edition_2024_expr_fragment_specifier)]
enum Enum<T> { TSVariant(#[allow(dead_code)] T), SVariant { _v: T }, UVariant }
macro_rules! is_variant {
(@check $variant:ident, $matcher:tt, $expr:expr) => (
assert!(if let Enum::$variant::<()> $matcher = $expr { true } else { false },
"expr does not have correct type");
);
}
fn main() {}
```
I expected to see this happen: lint applies successfully
Instead, this happened: lint breaks your build in almost all cases
```
warning: the `expr` fragment specifier will accept more expressions in the 2024 edition
--> ./tests/ui/type-alias-enum-variants/enum-variant-generic-args-pass.rs:8:48
|
8 | (@check $variant:ident, $matcher:tt, $expr:expr) => (
| ^^^^
|
= warning: this changes meaning in Rust 2024
= note: for more information, see Migration Guide <https://doc.rust-lang.org/nightly/edition-guide/rust-2024/macro-fragment-specifiers.html>
note: the lint level is defined here
--> ./tests/ui/type-alias-enum-variants/enum-variant-generic-args-pass.rs:3:9
|
3 | #![warn(edition_2024_expr_fragment_specifier)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
help: to keep the existing behavior, use the `expr_2021` fragment specifier
|
8 | (@check $variant:ident, $matcher:tt, $expr:expr_2021) => (
| ~~~~~~~~~
```
When changed to ` (@check $variant:ident, $matcher:tt, $expr:expr_2021) => (`, the code suddenly requires a feature which was never opted into, even on beta or stable 🤔
This is weird, since the lint is machine-applicable.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (2bd1e894e 2024-09-26)
binary: rustc
commit-hash: 2bd1e894efde3b6be857ad345914a3b1cea51def
commit-date: 2024-09-26
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
| A-diagnostics,C-bug | low | Critical |
2,553,917,987 | flutter | [ios][platform view] The raster_time benchmark does not measure the main thread anymore | ### Use case
Since https://github.com/flutter/engine/pull/53826 landed, we no longer measure the main thread in our platform view benchmarks.
For example, `CATransaction::commit`, which is a (pretty big) chunk of work, is no longer captured (see the picture in https://github.com/flutter/flutter/issues/142815).
### Proposal
Re-enable benchmarking of the main thread, either in an existing benchmark (e.g. average raster time) or in a new benchmark created for that.
| platform-ios,engine,a: platform-views,c: proposal,P2,team-ios,triaged-ios | low | Minor |
2,553,943,356 | tauri | [bug] Frequent hiding and displaying of webviewWindow significantly increases CPU usage | ### Describe the bug
Thanks to the Tauri team for taking up the webviewWindow hide and show functions I proposed in my last issue, and for adopting them so quickly. Because my Tauri v2 project runs in a production environment, I cannot provide video recordings. However, from my comparison, frequently showing and hiding a webviewWindow uses about 2%-3% more CPU than moving the webviewWindow's position instead. So for now I still hide and show it by changing the webviewWindow's position, because that uses much less CPU. Finally, a small suggestion: it would be even better if `webview.show(config)` accepted an options parameter so that the display position could be configured.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 X64
✔ WebView2: 129.0.2792.52
✔ MSVC:
- Visual Studio Enterprise 2022
- Visual Studio Build Tools 2022
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ Cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.17.0
- yarn: 1.22.19
- npm: 10.8.2
[-] Packages
- tauri [RUST]: 2.0.0-rc.16
- tauri-build [RUST]: 2.0.0-rc.13
- wry [RUST]: 0.44.1
- tao [RUST]: 0.30.2
- @tauri-apps/api [NPM]: 2.0.0-rc.6
- @tauri-apps/cli [NPM]: 1.4.0 (outdated, latest: 1.6.2)
```
### Stack trace
_No response_
### Additional context

| type: bug,status: needs triage | low | Critical |
2,553,945,415 | neovim | Unexpected cursor artifacts when hidden window is focused | ### Problem
This is about having a visible cursor in a non-focused area; it is a visual artifact. This happens with both a text window and the command line.
This is a follow-on issue, for after #30503 is fixed. IMHO, it seems low priority.
This might be considered an artificial situation: it's possible to enter hidden windows through scripts (or the command line). Even though, in general, a script shouldn't leave focus in a hidden window, some of the behavior I'm seeing might be considered a bug.
Note: this issue is independent of whether or not the hidden window is focusable.
### Steps to reproduce
1. Edit the test file below, move the cursor to the middle of the file.
2. `:sou` creating the hidden focusable window
3. Execute the mapping `Z2`. Moves the focus to the hidden window
Does `vim.api.nvim_set_current_win(popup_wid)`
Observe: Cursor is visible in the main/**non-focused** window.
4. Scroll the window with the mouse so only last line of buffer is visible in the window.
Observe the cursor doesn't move and ends up away from the buffer's text lines.
5. Execute some ":" commands
Observe the cursor move around and get stuck in the command line.
```lua
local popup_wid
local function Pop1Any()
local bnr = vim.api.nvim_create_buf(false, true)
assert(bnr, "Failed to create buffer")
vim.api.nvim_buf_set_lines(bnr, 0, -1, true, {'simple', 'win'})
vim.api.nvim_set_option_value("bufhidden", "wipe", {buf = bnr})
vim.api.nvim_set_option_value("modifiable", true, {buf = bnr})
popup_wid = vim.api.nvim_open_win(bnr, false, {
relative = "editor",
style = "minimal",
width = 10,
height = 2,
focusable = false,
hide = true,
col = 10,
row = 10,
})
end
Pop1Any()
vim.keymap.set('n', 'Z1', function() vim.print(vim.inspect(vim.api.nvim_list_wins())) end)
vim.keymap.set('n', 'Z2', function() vim.api.nvim_set_current_win(popup_wid) end)
vim.keymap.set('n', 'Z3', ":call nvim_set_current_win(" .. popup_wid .. ")<CR>")
vim.keymap.set('n', 'Z4', function() vim.api.nvim_win_set_config(popup_wid, {hide=false}) end)
vim.keymap.set('n', 'Z5', function() vim.api.nvim_win_set_config(popup_wid, {hide=true}) end)
vim.o.tm = 5000
```
### Expected behavior
Not sure. Maybe, when a hidden window is focused, there should be no visible cursor in the regular window; and in the command line, the cursor should only be visible while entering a command.
### Nvim version (nvim -v)
NVIM v0.11.0-dev - https://github.com/neovim/neovim/commit/7b71fdbc1e9fcb71e642e67e0ac9a2711dd67df0
### Vim (not Nvim) behaves the same?
NA
### Operating system/version
ubuntu
### Terminal name/version
gnome
### $TERM environment variable
xterm-256color
### Installation
make install | bug,ui,floatwin | low | Critical |
2,553,946,036 | transformers | Add AudioQuestionAnswering pipeline | ### Feature request
A new AudioQuestionAnswering pipeline, just like DQA, but instead of providing a document, applying OCR, and doing QA over it, you provide an audio file, apply STT, and do QA over the transcript. An advanced version would include diarization+STT, as speaker annotations provide important context and will improve QA/understanding.
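Conceptually, the pipeline is speech-to-text composed with extractive QA over the resulting transcript. As a rough, framework-free sketch (the class and all callables below are placeholder names of mine, not an existing transformers API; a real implementation would wire in actual ASR and QA models):

```python
class AudioQuestionAnsweringPipeline:
    """Toy sketch of the proposed pipeline: transcribe audio, then run
    question answering over the transcript.

    `transcribe` and `answer` stand in for real ASR / QA models; they are
    injected as plain callables so that only the composition is shown.
    """

    def __init__(self, transcribe, answer):
        self.transcribe = transcribe  # audio -> transcript string
        self.answer = answer          # (question, context) -> answer dict

    def __call__(self, audio, question):
        transcript = self.transcribe(audio)
        result = self.answer(question, transcript)
        result["transcript"] = transcript  # expose the context for debugging
        return result


# Stubs standing in for real speech-recognition / QA models:
def fake_transcribe(audio):
    return "the caller asked to reschedule the delivery to friday"

def fake_answer(question, context):
    # trivially "extract" the last word of the context as the answer
    return {"answer": context.split()[-1], "score": 1.0}

pipe = AudioQuestionAnsweringPipeline(fake_transcribe, fake_answer)
out = pipe(b"<audio bytes>", "When should the delivery be rescheduled?")
```

A diarization-aware variant would only change the `transcribe` stage to emit speaker-annotated text; the QA stage stays the same.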
### Motivation
This kind of pipeline is one that I have had to build on multiple occasions for processing audio, specifically phone call recordings. Just like the other pipelines which provide accessibility to some applied ML based pipeline for those to use quickly and easily, this will provide the same thing just for a different modality than what is currently provided.
### Your contribution
I plan to contribute the entire pipeline. My inspiration and what I plan to base a lot of the PR for this pipeline comes from [#18414](https://github.com/huggingface/transformers/pull/18414).
I'm mostly just posting this issue to get feedback from the HF team. Tagging @Narsil @NielsRogge as they also provided feedback on the DQA PR. | Feature request | low | Major |
2,553,947,775 | vscode | Git - GIT_ASKPASS fails on concurrent / interleaved username & password requests | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 📮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Was unable to verify; `code --disable-extensions` in WSL opened a window that was locked in "untrusted" mode, and this bug is in the built-in `git` extension.
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1
- OS Version: Windows_NT x64 10.0.22631
Parallel `git` invocations which invoke `GIT_ASKPASS` interleave their username and password requests in a way that breaks VSCode's askpass implementation. Only some of the `GIT_ASKPASS` invocations receive a password; others fallback to VSCode's password prompt.
## Steps to Reproduce:
First be sure that GIT_ASKPASS will be called by git.
1. Ensure you do not have a git credential provider/helper configured.
2. Run VSCode in WSL or connected to an SSH remote.
Then run two git commands concurrently, both asking GIT_ASKPASS for the same credentials
1. Open VSCode integrated terminal. Ensure `$GIT_ASKPASS` is set.
2. Substitute your own private git remote: `git ls-remote https://github.com/cspotcode/my-private-repository & ; git ls-remote https://github.com/cspotcode/my-private-repository`
3. Observe that one of the commands succeeds and logs revisions into the terminal. The other, however, opens VSCode's password prompt.
## Diagnosis
I believe this is the problem:
https://github.com/microsoft/vscode/blob/main/extensions/git/src/askpass.ts#L68-L73
The logic here assumes askpass will always be called in sequential username + password requests:
- request username for authority - askpass caches username and password
- request password for authority - askpass returns cached password, clears cache
However, when a tool like `go get` spawns multiple parallel git processes, askpass is called like this:
- request username for authority - askpass caches username and password
- request username for authority - askpass caches username and password
- request password for authority - askpass returns cached password, clears cache
- request password for authority - no credentials in cache; falls back to UI text input prompt | bug,git | low | Critical |
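To make the interleaving concrete, here is a toy Python model of the clear-on-first-use caching behavior the diagnosis describes (the class and names are made up for illustration; this is not the actual `askpass.ts` code):

```python
class SingleSlotAskpass:
    """Toy model of a one-slot credential cache that is cleared on first use."""

    def __init__(self, lookup):
        self.lookup = lookup       # authority -> (username, password)
        self.pending = None        # (authority, password) or None

    def ask_username(self, authority):
        username, password = self.lookup(authority)
        self.pending = (authority, password)   # a 2nd request overwrites this
        return username

    def ask_password(self, authority):
        if self.pending and self.pending[0] == authority:
            _, password = self.pending
            self.pending = None                # cleared after first use
            return password
        return None                            # models falling back to the UI prompt


cache = SingleSlotAskpass(lambda authority: ("me", "s3cret"))
# Two parallel `git` processes interleave their askpass requests:
u1 = cache.ask_username("github.com")
u2 = cache.ask_username("github.com")
p1 = cache.ask_password("github.com")   # first password request succeeds
p2 = cache.ask_password("github.com")   # cache already cleared -> UI prompt
```

Since both processes ask for the same authority, keying the cache by authority alone would not be enough; one possible direction is letting an entry serve more than one password request (e.g. a short TTL or a reference count).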
2,553,949,038 | ollama | Better Tool Call parsing | Currently tool call patterns are defined in Go templates. This is fine for some cases [e.g. in this comment](https://github.com/ollama/ollama/issues/6061#issuecomment-2257137350). However, it is not ideal.
## Problems
1. Content loss
Say the model responds with this text:
```plaintext
Yes, I can help you compute 3+4 with python
<tool_call>
{"name":"python", "args": {"expr":"3+4"}}
</tool_call>
```
In [this line](https://github.com/ollama/ollama/blob/cd5c8f6471abf32965289f0226016a78f0c5c938/server/routes.go#L1480), all content is removed. So if the model provided some useful information, such as the first sentence above, it is dropped arbitrarily.
2. Streaming
Tool calls do NOT support streaming; parsing only happens once the full content has been received. However, we can start scanning for tool calls even before we have all the content.
3. Other format support
The function [here](https://github.com/ollama/ollama/blob/d05da2991245cfa0cd8da0bda476c626e26caaec/server/model.go#L301) only supports JSON; shall we support other formats, for example XML, in the future?
## Solution
**TL;DR:** For 1. and 2., we can use the [Aho–Corasick algorithm](https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm) for parsing. Define a pattern such as `@@json{name, args}@@`, meaning it matches `{"name":"python", "args": {"expr":"3+4"}}` (e.g. llama3.2), and the pattern `<tool_call>@@json{name,args}@@</tool_call>` could be used for the output above.
Even when we are not in stream mode, we can still process the output like a stream. Each time a token is received, we feed it to the state machine to see whether it could be part of the tool-call syntax. This fundamentally resolves the first problem, because we only remove the parts of the output that we are sure belong to a tool call.
So I think a state machine is a good way to "guess" whether something is a valid tool call. We should keep in mind that the model may also output ordinary JSON (e.g. when the user asks it to process a JSON file), so never treat a JSON object as a tool call arbitrarily; we are only "guessing". When we get a new token, we try to match it against our state machine. **If it matches, it may be a tool call, so we don't send the token to the client and put it on hold temporarily.** When we reach the end of the pattern string, e.g. we finally match `</tool_call>`, it is a valid tool call and we can drop all the held tokens. Otherwise, when the state machine fails to match, this JSON is not a tool call and should be sent to the client: we flush all the held tokens and reset the state machine.
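As an illustration of the hold-and-flush idea (this is not ollama code; the class, the hard-coded `<tool_call>` tags, and the token split are all made up for the sketch), here is a minimal Python filter that buffers streamed tokens while they could still be the start of a tool call:

```python
class StreamToolCallFilter:
    """Buffers streamed tokens that might be the start of a tool call.

    Text is held back while it is still a prefix of OPEN; on a mismatch the
    held text is flushed to the client, and once OPEN fully matches,
    everything up to CLOSE is captured as a tool-call payload instead of
    being streamed out.
    """

    OPEN, CLOSE = "<tool_call>", "</tool_call>"

    def __init__(self):
        self.buf = ""          # text held back from the client
        self.in_call = False   # inside <tool_call> ... </tool_call>?
        self.calls = []        # captured tool-call payloads

    def feed(self, token: str) -> str:
        """Return whatever text is now safe to send to the client."""
        self.buf += token
        out = ""
        while self.buf:
            if self.in_call:
                end = self.buf.find(self.CLOSE)
                if end < 0:
                    return out                  # hold until the close tag arrives
                self.calls.append(self.buf[:end].strip())
                self.buf = self.buf[end + len(self.CLOSE):]
                self.in_call = False
            else:
                start = self.buf.find(self.OPEN)
                if start >= 0:
                    out += self.buf[:start]     # text before the tag is safe
                    self.buf = self.buf[start + len(self.OPEN):]
                    self.in_call = True
                    continue
                # Flush everything that can no longer begin OPEN; keep the
                # longest buffer suffix that is still a prefix of OPEN.
                keep = 0
                for i in range(1, min(len(self.OPEN), len(self.buf)) + 1):
                    if self.OPEN.startswith(self.buf[-i:]):
                        keep = i
                out += self.buf[:len(self.buf) - keep]
                self.buf = self.buf[len(self.buf) - keep:]
                return out
        return out


demo = StreamToolCallFilter()
emitted = "".join(demo.feed(t) for t in [
    "Yes, I can help you ", "<tool_", "call>",
    '{"name":"python", "args": {"expr":"3+4"}}', "</tool_call>",
])
# emitted holds only the prose; the JSON payload ends up in demo.calls
```

A real implementation would also need a bound (a max length or timeout) on how long tokens can be held inside an unterminated `<tool_call>`, and would generalize the hard-coded tags to configurable pattern strings.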
Next, let's talk about the pattern string. I currently design it like a regular expression: all characters not wrapped in `@@` should be matched as-is (spaces, \t, \r and \n are allowed everywhere in both the pattern and the matched string, and will be ignored). So `<tool_call>` and `</tool_call>` can be matched.
The model output is NOT reliable. When we tested the qwen model, it sometimes didn't output `<tool_call>` but a random token, though the JSON was still valid. For these situations we can use the optional form `@<match as is>@?` in the pattern string, for example:
`@<tool_call>@? @@json{name,args}@@ @</tool_call>@?`
Next, let's talk about how to ensure the JSON object is a valid tool call rather than something the user asked the model to output. I'm designing it like this:
`@@json@@` means it is JSON and matches all JSON. You can also specify the type of the object to validate it. E.g. `@@json{name}@@` means the `name` field of the JSON object should be neither `undefined` nor `null`, and `@@json{name:string}@@` further specifies that the type must be string.
All supported types listed:
| pattern | example |
|---|---|
| any | |
| string | `"text"` |
| number | `114514` |
| [`type`] | `[number]` => [1, 2, 3, 4] |
| {`type of values`} | `{number}` => {"name1": 1, "name2": 5} |
| {`name of field`: `type`} | `{name:string,args:any}` |
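As a sketch of how such specs could be validated against a parsed JSON object (my own toy interpretation of the table above, not an ollama implementation; nested field specs like `{a:{b:string}}` are not handled):

```python
import json

KNOWN = {"any", "string", "number"}

def check_type(value, spec: str) -> bool:
    """Validate a parsed JSON value against a tiny type spec."""
    spec = spec.strip()
    if spec == "any":
        return value is not None          # "neither undefined nor null"
    if spec == "string":
        return isinstance(value, str)
    if spec == "number":
        # bool is a subclass of int in Python, so exclude it explicitly
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if spec.startswith("[") and spec.endswith("]"):
        inner = spec[1:-1].strip()
        return isinstance(value, list) and all(check_type(v, inner) for v in value)
    if spec.startswith("{") and spec.endswith("}"):
        inner = spec[1:-1].strip()
        if not isinstance(value, dict):
            return False
        if inner in KNOWN or inner.startswith(("[", "{")):
            # uniform value type, e.g. {number}
            return all(check_type(v, inner) for v in value.values())
        # field list, e.g. {name,args} or {name:string,args:any}
        for field in inner.split(","):
            name, _, ftype = field.partition(":")
            name = name.strip()
            if value.get(name) is None:
                return False
            if ftype and not check_type(value[name], ftype.strip()):
                return False
        return True
    return False

call = json.loads('{"name": "python", "args": {"expr": "3+4"}}')
ok = check_type(call, "{name:string,args:any}")
```

Splitting on `,` means object specs nested inside a field list would need a real parser; this only covers the one-level cases in the table.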
Feel free to share your opinions here.
Related: #5796 | feature request | low | Major |
2,553,954,218 | tauri | [bug] Problems with positioning of multiple WebviewWindow | ### Describe the bug
After the window is maximized and then restored, the webviewWindow does not return to its initial position.
```rust
window.add_child(
tauri::webview::WebviewBuilder::new(
"abc",
WebviewUrl::App(format!("abc.html", url.unwrap()).into()),
)
.auto_resize(),
LogicalPosition::new(57, 68),
LogicalSize::new(size.width - 57, size.height - 68),
)?;
```
maximize:

unmaximize:

### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 X64
✔ WebView2: 129.0.2792.52
✔ MSVC:
- Visual Studio Enterprise 2022
- Visual Studio Build Tools 2022
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ Cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.17.0
- yarn: 1.22.19
- npm: 10.8.2
[-] Packages
- tauri [RUST]: 2.0.0-rc.16
- tauri-build [RUST]: 2.0.0-rc.13
- wry [RUST]: 0.44.1
- tao [RUST]: 0.30.2
- @tauri-apps/api [NPM]: 2.0.0-rc.6
- @tauri-apps/cli [NPM]: 1.4.0 (outdated, latest: 1.6.2)
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,scope: unstable flag | low | Critical |
2,553,961,752 | pytorch | [BUG] torch.tril return ZERO Tensor when tensor is large and in CUDA | ### 🐛 Describe the bug
```python
import torch
from einops import repeat  # `repeat` below comes from einops

scale_tril = sigma_t.view(-1, 1, 1) * torch.eye(3072, device=mean.device).unsqueeze(0)
scale_tril = repeat(scale_tril, "b p1 p2 -> (b n) p1 p2", n=all_images[start_idx:end_idx].shape[0])
scale_tril2 = sigma_t.view(-1, 1, 1) * torch.eye(5, device=mean.device).unsqueeze(0)
scale_tril2 = repeat(scale_tril2, "b p1 p2 -> (b n) p1 p2", n=all_images[start_idx:end_idx].shape[0])
print(f"{torch.tril(scale_tril)[-1]=}")
print(f"{torch.tril(scale_tril[-1])=}")
print(f"{torch.tril(scale_tril2)[-1]=}")
print(f"{torch.tril(scale_tril2[-1])=}")
print(f"{torch.tril(scale_tril.cpu())[-1]=}")
print(f"{torch.tril(scale_tril[-1].cpu())=}")
print(f"{torch.tril(scale_tril2.cpu())[-1]=}")
print(f"{torch.tril(scale_tril2[-1].cpu())=}")
```
where `sigma_t` is a float tensor. Then I get the following results:
``` bash
torch.tril(scale_tril)[-1]=tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], device='cuda:0')
torch.tril(scale_tril[-1])=tensor([[21.9434, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, ..., 0.0000, 0.0000, 0.0000],
...,
[ 0.0000, 0.0000, 0.0000, ..., 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 21.9434]],
device='cuda:0')
torch.tril(scale_tril2)[-1]=tensor([[21.9434, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 21.9434]], device='cuda:0')
torch.tril(scale_tril2[-1])=tensor([[21.9434, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 21.9434]], device='cuda:0')
torch.tril(scale_tril.cpu())[-1]=tensor([[21.9434, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, ..., 0.0000, 0.0000, 0.0000],
...,
[ 0.0000, 0.0000, 0.0000, ..., 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 21.9434]])
torch.tril(scale_tril[-1].cpu())=tensor([[21.9434, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, ..., 0.0000, 0.0000, 0.0000],
...,
[ 0.0000, 0.0000, 0.0000, ..., 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 21.9434]])
torch.tril(scale_tril2.cpu())[-1]=tensor([[21.9434, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 21.9434]])
torch.tril(scale_tril2[-1].cpu())=tensor([[21.9434, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 21.9434, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.9434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 21.9434, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 21.9434]])
```
When the tensor is on the CPU, when it is relatively small, or when `tril` is applied to a single matrix, the results are correct; otherwise (batched `tril` on a large CUDA tensor), the result is an all-zero tensor.
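For reference, the expected semantics of a batched `tril` is to zero the strict upper triangle of every matrix in the batch independently, which is what the CPU path above produces. A dependency-free sketch in plain Python lists, purely to pin down the expected output (not PyTorch internals):

```python
def batched_tril(batch):
    """Lower-triangular part of each matrix in a batch (keep entries with j <= i)."""
    return [
        [[x if j <= i else 0 for j, x in enumerate(row)]
         for i, row in enumerate(mat)]
        for mat in batch
    ]

# One 3x3 matrix in the batch:
tril_out = batched_tril([[[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]]])
# tril_out == [[[1, 0, 0], [4, 5, 0], [7, 8, 9]]]
```

The failing batched CUDA path in the report should produce the same values as this per-matrix definition.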
### Versions
PyTorch version: 2.1.2+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] DISTS-pytorch==0.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] open-clip-torch==2.19.0
[pip3] pytorch-debayer==1.4.1
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.1.2+cu118
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==2.1.2+cu118
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.5
[pip3] torchdiffeq==0.2.4
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==1.0.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.16.2+cu118
[pip3] triton==2.1.0
[conda] dists-pytorch 0.1 pypi_0 pypi
[conda] numpy 1.24.1 pypi_0 pypi
[conda] open-clip-torch 2.19.0 pypi_0 pypi
[conda] pytorch-debayer 1.4.1 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] torch 2.1.2+cu118 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 2.1.2+cu118 pypi_0 pypi
[conda] torchcde 0.2.5 pypi_0 pypi
[conda] torchcfm 1.0.5 pypi_0 pypi
[conda] torchdiffeq 0.2.4 pypi_0 pypi
[conda] torchdyn 1.0.6 pypi_0 pypi
[conda] torchmetrics 1.0.1 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.16.2+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,module: linear algebra | low | Critical |
2,553,973,054 | godot | Cannot Access/Modify Debug Arrow of one-way CollisionShape2Ds | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
- Reproducible in v4.2.stable.official [46dc27791]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 31.0.15.3758) - AMD Ryzen 5 1600 Six-Core Processor (12 Threads)
### Issue description

_Visual Description of Image: A blank Godot project with a single StaticBody2D node. It has a single child, a CollisionShape2D. Because the collision shape has the one-way property set to true, the screenshot shows an arrow underneath it. This arrow is highly distorted as a result of the scaling performed on the CollisionShape2D node and the relative size of the RectangleShape2D node._
**Issue description**: Visually, the editor is working as intended. The issue is that I'm making a tool based on StaticBody2D, and I cannot access the arrow polygon or modify it.
This could potentially be considered a feature proposal, but since the arrow may not be intended to be drawn this large under any circumstances, I'm submitting it as a general issue.
### Steps to reproduce
You can use the MRP to see this outcome, but here are the steps to reproduce:
1. Create StaticBody2D and child CollisionShape2D with RectangleShape2D for example
2. In StaticBody2D, enable one-way collision.
3. In CollisionShape2D, specify the scale to ( 300, 20 )
4. In RectangleShape2D, specify the size to ( 1, 1 )
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/17171980/MRP.zip)
| bug,enhancement,topic:editor,topic:2d | low | Critical |
2,553,984,441 | terminal | While typing in Japanese IME, the text color changes depending on the condition. | ### Windows Terminal version
1.21.2361.0
### Windows build number
10.0.22631.0
### Other Software
- fish shell 3.7.1 (inside WSL)
- PowerShell 7.4.5
### Steps to reproduce
1. Turn the Japanese IME on
2. Type Japanese text
### Expected Behavior
While composing text, the text color should stay fixed to the foreground color.
### Actual Behavior
While composing text, the text color changes depending on some condition.
I cannot tell which color will be used under which conditions.
https://github.com/user-attachments/assets/9fe58477-2a00-40c5-9f1e-75682fe69139
https://github.com/user-attachments/assets/804f4968-84be-4955-92c6-8d8f508a9e47
| Area-Rendering,Issue-Bug,Product-Terminal | low | Minor |
2,554,018,441 | pytorch | ONNX export: w=tensor.shape() in torch.arange(end=w...) causes "tensor does not have a device" error | ### ๐ Describe the bug
The error looks like this:
```
Traceback (most recent call last):
File "/mnt/dingus_drive/catid/train_detector/onnx_export.py", line 119, in export_to_onnx
torch.onnx.export(
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/utils.py", line 551, in export
_export(
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/utils.py", line 1648, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/utils.py", line 1174, in _model_to_graph
graph = _optimize_graph(
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/utils.py", line 714, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/utils.py", line 1997, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/symbolic_opset11.py", line 864, in arange
return g.op("Range", start_default, end, delta_default)
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py", line 93, in op
return _add_op(self, opname, *raw_args, outputs=outputs, **kwargs)
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py", line 252, in _add_op
node = _create_node(
File "/home/saronic/miniconda3/envs/train/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py", line 314, in _create_node
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
RuntimeError: tensor does not have a device
```
It seems the ONNX export C++ code does not support this particular case:
if you take a dimension from `tensor.shape` and use the result as the `end=` parameter of `torch.arange()`, it leads to this internal error. I'm working around it by explicitly creating a tensor object from the shape values:
```
shapes = torch.tensor([(f.shape[2], f.shape[3]) for f in thing], device=device)
```
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-169-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 8
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
Stepping: 6
CPU MHz: 1999.999
BogoMIPS: 3999.99
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 192 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid md_clear arch_capabilities
Versions of relevant libraries:
[pip3] adam-atan2-pytorch==0.0.12
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxoptimizer==0.3.13
[pip3] onnxruntime-gpu==1.19.2
[pip3] onnxslim==0.1.34
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.4.1
[pip3] torch-optimizer==0.3.1a0
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] adam-atan2-pytorch 0.0.12 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torch-optimizer 0.3.1a0 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | module: onnx,triaged | low | Critical |
2,554,023,259 | godot | [Performance] AudioStreamImporter waveform view extremely slow when maximized | ### Tested versions
Reproducible in: Current `master`
### System information
Ubuntu 24.04 - Godot 4.3 Dev (`master`)
### Issue description
Ideally, moving the timeline marker around the waveform should be fast, as it is in other audio editors.
### Steps to reproduce
- Import any audio file that's decently big (about 2.5 - 3 mins long).
- Maximize the audio importer dialog.
- Try toggling the controls or moving the timeline marker.
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/17172909/mrp.zip) | bug,topic:audio,performance | low | Major |